UBC Theses and Dissertations

A numerical shape optimization framework for generic problems solved on unstructured triangular meshes. Gosselin, Serge, 2004.

Your browser doesn't seem to have a PDF viewer, please download the PDF to view this item.

Item Metadata


831-ubc_2004-0464.pdf [ 4.87MB ]
JSON: 831-1.0080792.json
JSON-LD: 831-1.0080792-ld.json
RDF/XML (Pretty): 831-1.0080792-rdf.xml
RDF/JSON: 831-1.0080792-rdf.json
Turtle: 831-1.0080792-turtle.txt
N-Triples: 831-1.0080792-rdf-ntriples.txt
Original Record: 831-1.0080792-source.json
Full Text

Full Text

A NUMERICAL SHAPE OPTIMIZATION FRAMEWORK FOR GENERIC PROBLEMS SOLVED ON UNSTRUCTURED TRIANGULAR MESHES

by SERGE GOSSELIN
B.Eng. (Mechanical), Université Laval, 2002

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES (Department of Mechanical Engineering)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
August 2004
© Serge Gosselin, 2004

Library Authorization

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Abstract

Until recently, experiments combined with trial and error have been the preferred methodology for refining designs in fields such as aerodynamics or heat transfer. Numerical shape optimization tools are becoming an important link in the design chain, accelerating this refinement process. Optimal design is a complex task requiring the integration of the many components necessary to perform accurate numerical simulations. In this thesis, a simplified shape optimization framework for generic applications is presented. The partial differential equations describing the physics of the optimization problems are solved on unstructured triangular meshes.
The mesh generator guarantees the quality of the triangulation. The finite-volume method is used, combined with an implicit Newton-Krylov GMRES solver, for generic problems. The numerical and physical aspects of the problem are decoupled inside the solver. This allows simplified implementation of new physics packages using interior and boundary fluxes. The shape to be optimized is defined with a set of control points. The movements of the boundary vertices are described by cubic spline interpolation. They are then propagated through the internal mesh by an explicit deformation law. Optimization constraints are enforced through a penalty formulation, and the resulting unconstrained problem is solved using either the steepest descent method or a quasi-Newton method. Gradients of the objective function with respect to the shape are calculated using finite differences. The correct operation of the optimization framework is verified with validation problems. These allow the assessment of the performance of its different components. To indicate that a minimum has been attained, the gradient norm must be reduced by several orders of magnitude. Overall, the results show that the various components are properly integrated. The framework's range of applications is limited by the implicit solver, currently under development, and by finite-difference gradients. A discussion of the requirements for extending this framework to more practical applications is given.
Contents

1 Introduction
  1.1 Motivation
  1.2 Review of shape optimization algorithms
    1.2.1 Required elements for shape optimization
    1.2.2 Optimization algorithms
    1.2.3 Methods for gradient computation
  1.3 Objectives and scope
2 Optimization problem formulation
  2.1 Generic formulation
  2.2 Mesh generation
    2.2.1 Initial triangulation
    2.2.2 The Delaunay criterion
    2.2.3 Edge recovery
    2.2.4 Refinement
    2.2.5 Boundary and control variables definition
  2.3 Shape parameterization
    2.3.1 Cubic spline interpolation
    2.3.2 Deformation parameterization
  2.4 Mesh deformation
  2.5 Numerical solver
    2.5.1 Finite-volume formulation
    2.5.2 Solution reconstruction
    2.5.3 Boundary conditions
    2.5.4 GMRES algorithm
  2.6 Objective function and constraints
3 Optimization algorithm
  3.1 Optimality conditions
  3.2 Gradient evaluation
    3.2.1 Gradient formulation
    3.2.2 Finite differences
    3.2.3 Adjoint methods
    3.2.4 Automatic differentiation
    3.2.5 Incomplete sensitivities
  3.3 Descent direction
    3.3.1 Steepest descent method
    3.3.2 BFGS method
  3.4 Line search procedure
  3.5 Implementation details
    3.5.1 Grid function definition
    3.5.2 Numerical solution of the state
    3.5.3 Optimization algorithm
4 Validation problems
  4.1 Constrained geometric problem
  4.2 Inverse problem using the Laplace equation
  4.3 Inverse problem using the Poisson equation
  4.4 Optimization problem in solid mechanics
5 Conclusions and recommendations
  5.1 Future work

List of Figures

2.1 Geometry to be triangulated
2.2 Bounding triangles and initial boundary vertices
2.3 Delaunay triangulation
2.4 Non-Delaunay triangulation
2.5 Edge swapping to obtain a Delaunay triangulation
2.6 Initial triangulation
2.7 Resulting mesh
2.8 Cubic splines: n polynomials for n + 1 points
2.9 Reference curve mapping the Cartesian space
2.10 Illustration of the bisection algorithm
2.11 Motion of the boundary
2.12 Mesh deformation examples
2.13 Averaged and reconstructed solutions
2.14 Representation of the barrier geometric constraints
3.1 Typical convergence for the steepest descent algorithm
3.2 Overview of the optimization process
3.3 Details of the domain preparation and deformation
3.4 Details of the numerical solver
3.5 Details of the optimization algorithm
4.1 Initial and final geometries for the constrained geometric problem
4.2 Convergence history of the geometric problem
4.3 Initial and final geometries for the inverse Laplace problem
4.4 Convergence history of the inverse Laplace problem
4.5 Initial and final geometries for the inverse Poisson problem
4.6 Meshed initial domain for the inverse Poisson problem
4.7 Convergence history of the inverse Poisson problem
4.8 Meshed domain for the solid mechanics test case

List of Tables

4.1 Locations of the control points in the parametric space for the geometric problem
4.2 Locations of the control points in the parametric space for the Laplace problem
4.3 Locations of the control points in the parametric space for the Poisson problem

Acknowledgements

There are many people I would like to thank and without whom completing this thesis would have been a much more difficult task. First and foremost, I would like to thank my supervisor, Dr. Carl Ollivier-Gooch. Thank you for your availability, your time and, most importantly, your patience. You always found time to answer my incessant questions about the code and to give me sound advice concerning research. Without your help, I would probably still be lost in the maze of C++ programming. I sincerely appreciated our relationship and I hope I will still be able to learn from you in the sequel to this adventure.
I would also like to thank my lab mates. Charles, who made me discover the pleasures of Vancouver: thank you for your precious help, for sharing your experience with me, and for always having been willing to guide me at the beginning of this research project. Chris, thank you for initiating me into the wonders of the open-source world; I will never forget the backcountry skiing and the painstaking hikes in the mountains. Amir, with whom I had the most interesting talks about the state of the world at poolside. Veronica, who recently arrived in the lab and who comforted me in regard to my curious accent. Thank you all for giving me your appreciation of this thesis and for coping with my own interpretations of Shakespeare's language. Special thanks to Ashley, who patiently read chapter after chapter so that not too many "frenchisms" would remain. Your help was really appreciated.

Allow me a small francophone aside to thank my family and my friends for their sustained encouragement. A very special thank you to Marie-Josée, who had to put up with the highs and lows I went through over the last two years. Without your support, I do not know whether I would have made it.

Serge

Chapter 1

Introduction

1.1 Motivation

With computational power more easily available, a growing number of scientists and engineers are becoming interested in relying on numerical simulations to study phenomena governed by partial differential equations. Even if these specialists have a complete understanding of the physical aspects underlying the problems being studied, they might lack the knowledge required to create valid numerical models of these phenomena.
Making numerical simulations more easily accessible was the main driving force that led to the development of the Advanced Numerical Simulation Library (ANSLib) [51, 9, 5, 7, 8, 49], a simplified toolkit to solve almost any partial differential equation numerically. A numerical solver, when coupled with proper optimization techniques, can be a powerful design tool. Starting from the same premise that led to the creation of ANSLib, it becomes interesting to study the feasibility of a generic framework allowing one to perform shape optimization with minimal intervention from the user. This framework could then be used with a wide range of practical applications including, but definitely not limited to, acoustics [28], aerodynamics [33] and heat transfer [43].

The goal of optimal design is to accurately determine the contours or surfaces of a body such that, based on a series of criteria, it attains optimal performance. Although the field of aerodynamic shape optimization has received a lot of attention in recent years, optimal design problems can occur in many engineering applications. Designers could benefit from a simple generic framework for shape optimization. Instead of relying solely on costly experimental methods, they would have the possibility of incorporating numerical optimization into the design loop. The design returned by the computer simulations can then be validated experimentally, in wind tunnels or tow tanks for instance. The reduced use of testing facilities would result in improved design time, possibly an increase in performance and, ultimately, a reduction in cost.

However, many challenges must be addressed in order to come up with a functional shape optimization algorithm. Solutions to partial differential equations can be complex, and an accurate computational representation might be difficult to obtain.
For example, when dealing with aerodynamic calculations in the transonic regime, it is important to be able to precisely locate possible shock waves, since they are responsible for a noticeable portion of the total drag. This correct representation must apply throughout the problem's range of application. The task of transposing an applied optimization problem into a well-posed mathematical framework can also be difficult. To achieve this, accurate spatial discretization, geometry representation and flow simulation are required. Having these elements working in conjunction with a gradient-based optimization algorithm constitutes the main focus of this thesis. However, many efficiency issues must still be dealt with at a later stage of the research.

An overview of various techniques used to perform shape optimization is presented in the following sections. First, section 1.2.1 gives an outline of the elements that must be present in the algorithm; then section 1.2.2 reviews different approaches used to perform optimization; and finally, section 1.2.3 describes various methods for computing gradients. Section 1.3 states the objectives of this research project and outlines the contents of the remainder of the thesis.

1.2 Review of shape optimization algorithms

1.2.1 Required elements for shape optimization

Within an optimal design framework, some key components must be present:

• The definition of an objective function and constraints.
• A discretization algorithm reducing the continuous physical problem to a finite-dimensional discrete problem governed by a small set of geometric variables. This must include a parameterization of the geometry through control points allowing smooth deformations to any admissible configuration. This deformation must then be carried through the computational mesh.
• A solver for the partial differential equations governing the optimization problem.
The solution to these state equations is controlled by the location of the aforementioned control points.
• An optimization algorithm.
• An accurate computation of the objective function's gradients with respect to the geometry.

The central challenge in shape optimization involves reducing a complex physical phenomenon into a mathematical framework leading to the formulation of a relatively simple objective function. Such a task is difficult and specific to every problem. The range of operating conditions must be determined, and an assessment of the validity of the physics model must be made. Geometrically, many levels of complexity can be considered; for example, when modeling a wing, should the slats and flaps, the engine mounts or the winglets be taken into account? It is important to make sure that the geometric modeler and the mesh generator are able to represent the chosen details. A certain level of experience with computer simulation is therefore required to correctly assess the elements to be present in the optimization problem. Laporte and LeTallec [37], Giles [25] and Jou et al. [35] discuss some of the issues to be dealt with before obtaining a correct formulation.

With the objective function defined, constraints might need to be added. Two main families are usually considered: state constraints and geometric constraints. Adding quadratic penalty terms to the objective function is, by far, the easiest way to enforce the state constraints [22, 43, 41, 58]. The main advantage of this method resides in the fact that the problem can still be cast in an unconstrained formulation, and therefore simpler optimization algorithms can still be directly used. There are, however, drawbacks to this method, most notably efficiency issues, created by ill-conditioning, which can appear when approaching the desired minimum [26]. For geometric constraints, limits inside which the control points must be located can be imposed. These prevent the geometry from moving outside the defined design space. Penalty methods can also be used for geometric constraints [46], such as the thickness of an airfoil. Finally, augmented Lagrangian methods [48] are another option for implementing constraints. They do not suffer from the problems plaguing penalty methods, but they require more complicated optimization algorithms and are therefore harder to implement.

When it comes to domain discretization, both block-structured [47, 46] and unstructured [4, 43] meshes have been used in shape optimization algorithms. Even if structured solvers generally allow for faster solutions of the state equations, there are definite advantages to using unstructured grids. First, the algorithms used to propagate boundary deformations throughout the mesh are generally simpler to implement. Unstructured grids also allow easier use of solution-adapted meshes [13], allowing one to work with mesh-independent solutions at a reduced computational cost. In the current implementation, unstructured grids are used. They are created with GRUMMP [6, 5], a mesh generator developed by the ANSLab that can produce guaranteed-quality unstructured grids on domains with curved boundaries.

To parameterize the shape, it is important to choose a technique offering sufficient flexibility to cover the whole design space. This will allow truly optimal design. On the other hand, it is also desirable to use as few control points as possible. This creates a compact design space, leading to a better convergence rate during the optimization procedure. It is also advised to use a parameterization allowing deformation of the shape without creating discontinuities. A survey of parameterization techniques is made by Samareh in [54].
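As a concrete instance of the spline-based parameterizations surveyed above, the sketch below interpolates a handful of control-point ordinates with a natural cubic spline. This is a generic textbook construction written in Python for illustration, not code from the thesis or from ANSLib, and the control points are invented.

```python
def natural_cubic_spline(xs, ys):
    """Return a callable S(x) interpolating (xs, ys) with S'' = 0 at both ends."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Tridiagonal system for the interior second derivatives M_1 .. M_{n-1}.
    a = [h[i - 1] for i in range(1, n)]                    # sub-diagonal
    b = [2.0 * (h[i - 1] + h[i]) for i in range(1, n)]     # diagonal
    c = [h[i] for i in range(1, n)]                        # super-diagonal
    d = [6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
         for i in range(1, n)]
    for i in range(1, len(b)):                             # Thomas algorithm: forward sweep
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    M = [0.0] * (n + 1)                                    # natural ends: M_0 = M_n = 0
    for i in range(len(b) - 1, -1, -1):                    # back substitution
        M[i + 1] = (d[i] - c[i] * M[i + 2]) / b[i]

    def S(x):
        i = n - 1
        for k in range(n):                                 # locate the containing interval
            if x <= xs[k + 1]:
                i = k
                break
        t0, t1 = x - xs[i], xs[i + 1] - x
        return ((M[i] * t1 ** 3 + M[i + 1] * t0 ** 3) / (6.0 * h[i])
                + (ys[i] / h[i] - M[i] * h[i] / 6.0) * t1
                + (ys[i + 1] / h[i] - M[i + 1] * h[i] / 6.0) * t0)
    return S

# Hypothetical control points describing a smooth boundary perturbation.
S = natural_cubic_spline([0.0, 1.0, 2.0, 3.0], [0.0, 0.5, 0.4, 0.0])
assert abs(S(1.0) - 0.5) < 1e-12   # the spline interpolates its control points
```

A handful of such control points yields a C² curve, which is one reason spline parameterizations avoid the discontinuous deformations mentioned above.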
Choosing the grid points located on the geometry as the control variables is certainly the simplest possible parameterization [33, 41, 43]. This method, known as the discrete approach (also referred to as CAD-free parameterization), is easy to implement, and the geometry changes are only limited by the number of discretization points located on the boundary. The use of the discrete approach in conjunction with automatic differentiation (see section 1.2.3) is appropriate because, in that case, the mesh becomes the only geometric entity known to the problem. This implies that no geometric modeler has to be differentiated. However, the number of control variables becomes enormous when three-dimensional geometries are studied. Furthermore, a local smoothing algorithm has to be used to prevent discontinuous deformations that might lead to an infeasible geometry. (GRUMMP stands for Generation and Refinement of Unstructured Multi-element Meshes in Parallel; the ANSLab is the Advanced Numerical Simulation Laboratory, http://tetra.mech.ubc.ca/ANSLab.)

Using polynomial and spline representations can reduce the total number of control variables. Instead of directly representing the shape, a piecewise polynomial function can parameterize the deformation, based on a reference curve. This approach, used by Laporte and LeTallec [37], is quite versatile because of the countless possibilities available when choosing the reference curve. However, the extension to three dimensions might prove to be difficult. Other spline formulations include basis splines (B-splines) [11] and nonuniform rational B-splines (NURBS) [21].

Linked to the control points' movement is a grid perturbation algorithm, which deforms the computational grid based on the position of the deformable boundaries. Because unstructured meshes are used, the process of creating a new computational grid at every iteration is easily automated for simple two-dimensional geometries.
This possibility is attractive, especially when large domain deformations occur. However, the cost of creating new grids at every iteration might become too high, especially when working in three dimensions, and deformation strategies need to be implemented. Explicit deformation, where the internal nodes are moved proportionally to their distance from the deformed boundary, is a robust approach and can be applied when the number of grid points is small. This method has been widely used for the propagation of mesh deformations and is the one chosen for this implementation. Examples can be found in [38, 40]. Another approach models every mesh edge as a linear spring with an elasticity inversely proportional to its length [4, 20]. This spring system, which is subjected to constraints at the boundaries, must remain in force equilibrium after a perturbation occurs. A number of other methods are proposed by Mohammadi and Pironneau [43].

The numerical solver also plays an important role in the optimization process, since many successive flow solutions are required. In order to solve a wide variety of optimal design problems using different physics, the optimization algorithm has to be built around a generic flow solver. The main idea behind such a solver is to separate the numerical and physical aspects of the simulation. This is done by modularizing the solver code, with this modularity often coming from the properties of object-oriented programming languages such as C++. One of the most sophisticated C++ libraries for the numerical solution of partial differential equations is Diffpack [12]. However, it requires much knowledge of the finite-element method from the users. FEMlab [1] is another generic finite-element solver, which uses Matlab to handle the numerical aspects of the simulation. Boivin [5] illustrated that the finite-volume method lends itself better to the decoupling of the numerics and the physics. ANSLib [51, 7, 8, 49], the generic numerical toolkit used in this research, is based on the finite-volume method and presents the advantage of being able to compute high-order solutions [50, 57]. In terms of efficiency, generic solvers cannot be expected to be as fast as a dedicated code, mostly because no assumptions concerning specific aspects of the physics can be made. Efforts are currently being made to reduce the computational time, notably by Nejat [45], who is implementing a preconditioned generalized minimal residual (GMRES) implicit solver in ANSLib. For the moment, the implicit solver has been shown to work for Poisson's equation and for solid mechanics problems. The current study will therefore be limited to these problems, since an efficient solver is imperative for performing shape optimization.

1.2.2 Optimization algorithms

Numerical optimization methods allowing one to find the minimum of an objective function can be subdivided into four categories: direct search methods, gradient-based algorithms, stochastic methods and fully-coupled methods.

To reach the minimum of an objective function, a direct search method relies solely on computing the objective function value for a series of designs. These techniques have the clear advantage of being derivative-free (they are often called zeroth-order methods because they do not try to construct an approximation of the objective function through a Taylor series expansion), and they can therefore be applied when the design space is non-smooth or the objective function is non-differentiable. However, they are expected to be very slow, especially if many design variables are considered [56]. Direct search methods can be further separated into three sub-categories: pattern search methods, methods with adaptive sets of search directions, and simplex search methods (not to be confused with the simplex method used in linear programming). The latter has recently been investigated on shape optimization problems [19].
One way to accelerate convergence is to locally approximate the objective function by a Taylor series expansion [48, 43, 37]. This is the idea behind gradient-based algorithms, which are potentially the best suited for most shape optimization applications. The optimal solution can be reached after relatively few function evaluations. There are, however, a few drawbacks to these methods. First, the design space must be smooth and the objective function differentiable. Also, these algorithms will inherently converge to a local minimum, which is not necessarily the absolute minimum inside the design space; the solution can depend on the starting guess. The steepest descent algorithm [41, 33] is the simplest of the unconstrained gradient-based methods. This first-order approach directly uses the gradient vector as a search direction in order to determine the next iterate. By using a second-order approximation of the objective function, Newton methods achieve faster convergence rates. This unfortunately requires the Hessian matrix, which is, in the case of shape optimization, usually too costly to compute. Curvature information, coming from the previous gradients, can be utilized to reduce the number of iterations. This approach, used in the quasi-Newton and conjugate gradient methods [16, 48], allows computation of an improved descent direction with no additional cost. These algorithms are, by design, used to solve unconstrained optimization problems. As was mentioned in section 1.2.1, constraints can be introduced directly into the objective function expression, via penalty terms, thereby allowing retention of the unconstrained formulation. This approach is simple, but definitely not the most efficient.
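Two ideas from this section can be combined in a short sketch: a quadratic penalty keeps the problem unconstrained, and steepest descent with a backtracking (Armijo) line search drives it to a minimum. The objective, constraint, and penalty weight below are invented for illustration; this is not the thesis implementation.

```python
def steepest_descent(f, grad, x0, tol=1e-6, max_iter=10000):
    """Steepest descent with a backtracking (Armijo) line search."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = sum(gi * gi for gi in g)
        if gnorm2 ** 0.5 < tol:                      # gradient-norm stopping test
            break
        fx, alpha = f(x), 1.0
        # Backtrack until the sufficient-decrease (Armijo) condition holds.
        while f([xi - alpha * gi for xi, gi in zip(x, g)]) > fx - 1e-4 * alpha * gnorm2:
            alpha *= 0.5
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x

# Hypothetical problem: minimize (x-3)^2 + (y-2)^2 subject to x = 1,
# enforced with a quadratic penalty of weight mu.
mu = 10.0
f = lambda x: (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2 + mu * (x[0] - 1.0) ** 2
grad = lambda x: [2.0 * (x[0] - 3.0) + 2.0 * mu * (x[0] - 1.0), 2.0 * (x[1] - 2.0)]
x_min = steepest_descent(f, grad, [0.0, 0.0])
# The minimizer sits near the constrained optimum (1, 2), offset by O(1/mu).
```

The penalty term also illustrates the ill-conditioning issue noted in section 1.2.1: the larger mu becomes, the more stretched the level sets are, and the more slowly steepest descent converges.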
For example, a more complicated, but potentially more efficient, interior-point algorithm [37] can be used. Throughout the optimization process, these methods guarantee that all the constraints are obeyed. Nonlinearly constrained problems can also be addressed quite effectively using sequential quadratic programming (SQP); an excellent presentation is made by Nocedal and Wright [48] and by Gill et al. [26]. Jameson and Vassberg provide a survey of different gradient-based optimization techniques in [34].

Stochastic optimization methods include two techniques: genetic algorithms [15] and simulated annealing [59]. The former is based on the laws of natural selection, while the latter emulates the processes by which a liquid freezes or a metal crystallizes. Stochastic algorithms will almost certainly (but not necessarily) find the global minimum of the objective function. They are also well suited for non-smooth objective functions and, most importantly, they can be used with categorical design variables. However, there are enormous efficiency issues associated with them: an optimization run could require up to 10 000 flow evaluations [46] to reach convergence. This makes these algorithms poorly suited to practical applications.

The last optimization strategy is the fully-coupled method (some authors also use the term one-shot methods). It aims to accelerate the optimization process by solving, all at once, the coupled system of non-linear equations that forms the first-order optimality conditions for constrained problems, better known as the Karush-Kuhn-Tucker conditions (see [37, 48] for further details). These optimality conditions are formed using a Lagrangian formulation, treating the state equations as constraints of the optimization problem. Solving this system all at once is complex, but it results in important savings since the solution and the gradient do not need to be repeatedly computed.
Studies by Laporte and Le Tallec [37] show that this method tends to be faster, but also less robust, than gradient-based algorithms. Applications to aerodynamic shape optimization can be found in [36].

1.2.3 Methods for gradient computation

When gradient-based algorithms are used in shape optimization, an accurate evaluation of the objective function gradient is required to describe the sensitivity of the design with respect to the control variables. Using a finite-difference approximation is certainly the most straightforward way to compute such a gradient [31, 58]. There are well-known difficulties associated with this method: a computational cost proportional to the number of design variables, a risk of introducing round-off errors due to the subtraction of nearly equal terms, and a dependence on the choice of the step size. These last two limitations can be avoided by using complex variables [4, 55], but this approach requires at least twice the effort of traditional finite differences, which does not seem appropriate for practical applications.

Alternatively, the gradient may be obtained by solving the linear adjoint state equation. This method was introduced by Pironneau [52] and has the advantage of having a cost virtually independent of the number of design variables. The adjoint method is further subdivided into two families: the continuous approach and the discrete approach. For the continuous approach [33, 4, 32], the adjoint equation is derived from the continuous flow equations and then discretized, while for the discrete approach [46, 47], the adjoint comes directly from the discretized flow equations. The same solution strategy is generally used to solve the adjoint state and the direct problem [37]. In a generic context, the continuous method cannot be used because of its problem-specific nature. However, a common formulation could be implemented to form the linear adjoint equation from the discretized equations of various physics.
This makes this method particularly attractive for generic optimization. A comparison between the continuous and discrete methods using the Euler equations is made by Nadarajah and Jameson in [44].

Automatic differentiation can also be utilized to compute the objective function's gradient. The idea behind this method is that no matter how complicated the representation of a function is, it can be rewritten using basic mathematical operations such as addition, multiplication or trigonometric functions. A collection of expressions containing only these operations is then easy to differentiate. If a function can be obtained by a computer program, then each line of this program can be differentiated to calculate its gradient. An excellent review of many automatic differentiation schemes is given by Gilbert et al. [24]. These implementations are classified into two main categories: the direct and the inverse modes. The former is easier to implement but is less efficient, since the computational cost remains dependent on the number of design variables. The inverse mode (called the "adjoint mode" by many authors) intrinsically resembles the adjoint method presented above, as every line of the code is considered as a constraint. This then leads to a Lagrangian formulation. However, its implementation is more difficult, since memory usage has to be carefully controlled [41, 43]. The major benefit of the reverse mode is that the gradient computation is done at a cost independent of the number of control points. Many differentiators currently exist. Since the computer code used in this research is written in C++, ADOL-C [27] would seem to be the best choice.

After doing some numerical experiments with automatic differentiation, Mohammadi [41] observed that, for small changes in the deformable geometries, the solution to the state equations remains almost constant.
This implies that gradients can be obtained using finite differences without having to recompute the state for every design variable. The result is a considerable reduction in computational cost. This incomplete sensitivity formulation [16, 42, 39, 43] can only be applied to a certain class of problems - namely, when the objective function is defined as a boundary integral - and has shown encouraging results.

1.3 Objectives and scope

The main objective of this thesis is to investigate the possibility of performing shape optimization using a generic numerical solver. To achieve this, a gradient-based algorithm has been developed. As optimal shape design is a complicated problem, especially when it comes to practical applications, efficiency issues are not addressed at the current stage of development. The first phase of this work consists of implementing the simplest possible methods allowing performance of generic optimization. Then, by testing these various components on simple problems, it becomes possible to identify areas where improvements in accuracy and/or efficiency have to be made. Finally, some unimplemented solutions, corresponding to areas requiring improvement, are presented in detail to guide future work. This is especially true for gradient calculation.

Because many areas of ANSLib are currently under heavy development, some problems that were originally planned to be studied had to be abandoned. For example, technical difficulties encountered in the implementation of the GMRES implicit solver have prevented testing aerodynamics applications (using either the Euler or Navier-Stokes equations). Although an explicit solver is available, the cost of computing many flow solutions to obtain a gradient evaluation would be prohibitive. This is part of the reason why efficient gradient calculation could not be explored more deeply.
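The step-size dilemma of real-valued finite differences noted in section 1.2.3, and the complex-step remedy, can be sketched in a few lines. The test function here is invented for illustration, and this Python fragment is not the thesis code, which uses real-valued finite differences.

```python
# Forward-difference vs. complex-step derivative of a smooth function.
# The complex-step formula f'(x) ~ Im(f(x + ih)) / h involves no subtraction
# of nearly equal terms, so h can be made tiny without round-off penalty,
# at the cost of evaluating f in complex arithmetic.

def forward_diff(f, x, h=1e-6):
    return (f(x + h) - f(x)) / h

def complex_step(f, x, h=1e-30):
    return f(complex(x, h)).imag / h

def f(x):
    # Hypothetical smooth objective; valid for real and complex arguments.
    return x ** 3 + 2.0 * x

exact = 3.0 * 2.0 ** 2 + 2.0                 # f'(2) = 14
fd_err = abs(forward_diff(f, 2.0) - exact)   # truncation error, roughly 1e-5
cs_err = abs(complex_step(f, 2.0) - exact)   # accurate to machine precision
assert cs_err < fd_err
```

As noted above, the price is that every state evaluation must run in complex arithmetic, roughly doubling the cost relative to a real-valued solve.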
As will be seen in chapter 3, an efficient solver is required to solve the adjoint state, leading to a relatively cheap gradient evaluation. The goal at this point is to explore the feasibility of different alternatives that could lead to a robust, accurate and efficient generic shape optimization framework. To achieve this, the mathematical, physical and geometrical formulation of the optimization problem is presented in chapter 2. This is followed by a detailed look at the optimization algorithm in chapter 3. Chapter 4 presents the results of the problems used to validate the framework. Finally, chapter 5 gives conclusions and recommendations to guide future work.

Chapter 2

Optimization problem formulation

2.1 Generic formulation

A shape optimization problem can be viewed as the constrained minimization of an objective function \mathcal{J} whose evaluation involves the solution of partial differential equations on a geometrical domain \Omega. This domain is controlled, in general, by a finite set of parameters z, known as the control variables. The problem's goal consists of obtaining the set z corresponding to \Omega^* such that, among all admissible values,

\[ \mathcal{J}(\Omega^*) = \min_z \mathcal{J}\big(X(z), U(z)\big) \tag{2.1} \]

subject to

\[ C_i(X, U) = 0, \quad i \in \mathcal{E}, \qquad C_j(X, U) \le 0, \quad j \in \mathcal{I} \tag{2.2} \]

\[ R(X, U) = 0 \quad \forall z \in \Omega_{adm} \tag{2.3} \]

where \mathcal{E} regroups the equality constraints and \mathcal{I} the inequality constraints. X represents the grid function used to discretize the computational domain, while U is the state variable vector obtained by satisfying eq. 2.3 for any admissible set of control variables. This implicitly defines the state variables as functions of the control variables. In order to clearly define the problem, the remainder of this chapter details how eqs. 2.1 to 2.3 are represented in the optimization framework.
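The generic formulation of eqs. 2.1 to 2.3 can be summarized as a small programming interface separating the mesh, the state solve and the objective. The sketch below is purely illustrative - none of these names come from the ANSLib or GRUMMP code bases - and uses trivial stand-ins for the mesh and state solve.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Sequence

# Hypothetical interface mirroring eqs. 2.1-2.3; all names are illustrative.
@dataclass
class ShapeProblem:
    mesh: Callable[[Sequence[float]], object]           # z -> X(z)
    solve_state: Callable[[object], object]             # X -> U with R(X, U) = 0
    objective: Callable[[object, object], float]        # (X, U) -> J
    eq_constraints: List[Callable] = field(default_factory=list)    # C_i = 0
    ineq_constraints: List[Callable] = field(default_factory=list)  # C_j <= 0

    def evaluate(self, z):
        X = self.mesh(z)          # discretize the domain for this design
        U = self.solve_state(X)   # satisfy the state equations
        return self.objective(X, U)

# Toy instance: the "mesh" is the design itself and the "state" is trivial.
problem = ShapeProblem(mesh=lambda z: list(z),
                       solve_state=lambda X: [x * x for x in X],
                       objective=lambda X, U: sum(U))
value = problem.evaluate([1.0, 2.0])
```

The point of such a decomposition is that an optimizer only ever calls `evaluate`, so the same driver can be reused for any physics the solver supports.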
2.2 Mesh generation

The first step in numerically solving a partial differential equation consists of preparing the computational domain by defining its boundaries and then decomposing it into smaller entities known as cells. The resulting discretization is called a mesh. In this research, an unstructured triangular domain decomposition is used. The mesh quality has a direct influence on a numerical solution's accuracy. Obtaining a high-quality discretization can, however, be a time-consuming process if done manually. Considerable savings can be made by fully automating this process. GRUMMP, the mesh generator used in this work, is able to construct guaranteed-quality meshes from a boundary definition of the domain provided by the user. The fact that no further manual correction, for example fixing an area with an invalid triangulation, is required once the mesh has been created makes this tool useful in the context of automatic shape optimization. After many domain deformations, the triangulation may become invalid, therefore requiring an entirely new mesh. Being able to regenerate the mesh without interrupting the optimization process is a considerable advantage. This section outlines the basic techniques utilized by GRUMMP to build a triangular domain decomposition. It is not meant to be a detailed review of mesh generation algorithms. Further details about the mesh generator can be found in [6, 5].

2.2.1 Initial triangulation

The first part of the meshing process involves creating an initial triangulation based on the user-defined boundaries. This is done by creating bounding triangles inside which all the boundary vertices can fit and then by adding the boundary vertices, updating the triangulation with every addition. To better illustrate this, the domain depicted in figure 2.1 is to be triangulated.

Figure 2.1: Geometry to be triangulated

Figure 2.2: Bounding triangles and initial boundary vertices

Figure 2.2 represents the evolution from the bounding triangles to the initial triangulation obtained after the boundary vertices' insertion. It can be difficult to recognize the geometry, since some of the desired boundary edges might not be present after the initial discretization. However, it is always possible to recover the shape of figure 2.1 by swapping edges. This is further discussed in section 2.2.3.

2.2.2 The Delaunay criterion

The next step involves obtaining, by swapping edges, a triangulation respecting the Delaunay criterion. A triangulation in which none of the circumcircles (circles passing through every vertex of a triangle) contains another vertex is said to be Delaunay. This is illustrated in figures 2.3 and 2.4. For a given set of vertices, up to degenerate cases,1 there exists only one possible triangulation meeting this criterion. It is also proven that, for these vertices, it provides the best possible quality for triangles. In the example, only one edge needs to be flipped, as shown in figure 2.5.

1A square is an example of a degenerate case where two possible solutions respecting the Delaunay criterion exist.

Figure 2.3: Delaunay triangulation

2.2.3 Edge recovery

With the Delaunay criterion satisfied, the boundary edges from the initial domain need to be recovered. It is in fact possible that some of the edges might not be present in the Delaunay triangulation. They can always be recovered by swapping. In the example, it is possible to see by looking at figure 2.5 that all the edges are already present. The cells formed by the bounding triangles can then be removed, leading to the triangulation shown in figure 2.6, which is the starting point for refinement.
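The circumcircle check at the heart of the Delaunay criterion is commonly implemented with the standard incircle determinant. A minimal sketch, assuming the triangle is given in counterclockwise orientation (this predicate is a textbook formulation, not code from GRUMMP):

```python
def in_circumcircle(a, b, c, d):
    """True if point d lies strictly inside the circumcircle of triangle
    (a, b, c), assumed counterclockwise: the standard incircle predicate
    used to detect non-Delaunay configurations."""
    rows = []
    for px, py in (a, b, c):
        dx, dy = px - d[0], py - d[1]
        rows.append((dx, dy, dx * dx + dy * dy))
    # 3x3 determinant; positive means d is inside the circumcircle
    det = (rows[0][0] * (rows[1][1] * rows[2][2] - rows[2][1] * rows[1][2])
           - rows[0][1] * (rows[1][0] * rows[2][2] - rows[2][0] * rows[1][2])
           + rows[0][2] * (rows[1][0] * rows[2][1] - rows[2][0] * rows[1][1]))
    return det > 0.0

# Unit right triangle: circumcircle is centered at (0.5, 0.5).
inside = in_circumcircle((0, 0), (1, 0), (0, 1), (0.5, 0.5))   # True
outside = in_circumcircle((0, 0), (1, 0), (0, 1), (2.0, 2.0))  # False
```

A production mesh generator would use exact or adaptive-precision arithmetic for this predicate, since the plain floating-point determinant can give wrong answers for nearly cocircular points.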
2.2.4 Refinement

Once the initial discretization has been obtained, the mesh needs to be refined to satisfy the sizing required by the user. This process is by far the most complicated and will not be discussed here. However, it is important to note that, starting from a Delaunay triangulation, none of the cells added to the triangulation will have an angle less than 25.7°. All the details concerning mesh refinement and quality assessment can be found in [5]. The resulting mesh, as generated by GRUMMP, is shown in figure 2.7. This mesh can be made finer by adjusting refinement and grading parameters at run time.

Figure 2.4: Non-Delaunay triangulation

Figure 2.5: Edge swapping to obtain a Delaunay triangulation

Figure 2.6: Initial triangulation

Figure 2.7: Resulting mesh

2.2.5 Boundary and control variables definition

In order to define the domain's boundaries, the user needs to provide a list of boundary points. These points are then used to build the geometric entities - lines, circles, arcs, cubic parametric curves or interpolated splines - forming the boundaries. The exact geometric representation allows the insertion of vertices directly on the boundary during the refinement process. This is essential to guarantee the quality of the triangulation [5]. As was mentioned in section 2.1, a set of control variables must also be defined. To minimize the user's interaction, the control variables' definition is made directly from the boundary file GRUMMP reads prior to creating the mesh. The control and boundary points therefore coincide. This allows exploitation of GRUMMP's ability to exactly represent geometric entities to define the boundaries, eliminating the need to include a stand-alone shape parameterization algorithm in the optimization process (see section 2.3).
For the moment, only control variables defining the contours of a boundary are supported. Other parameters, such as the angle of attack of a wing or translational variables describing the relative positions of the different parts of a multi-element airfoil, could also be used. Taking these additional variables into account will be the subject of future research.

2.3 Shape parameterization

In shape optimization problems, the first difficulty which must be overcome is constructing a grid function X(z) defining how the mesh behaves with respect to the control variables. This function's definition is done in two parts. The first, detailed in this section, defines how the boundary vertices are related to the control variables, while the second, presented in section 2.4, shows how boundary deformations are propagated to internal grid points.

2.3.1 Cubic spline interpolation

The geometric representation of regular two-dimensional curves can be done using cubic spline interpolation [11]. Given n + 1 interpolation points, where the function's value is known, and assuming continuous first and second derivatives at these points, it is possible to build, at low cost, a piecewise cubic function respecting these interpolation values. This is shown in figure 2.8. Using the same notation as in the figure, the spline equation inside the interval [t_{i-1}, t_i] is

\[ p_i(t) = \frac{f''_{i-1}}{6h_i}(t_i - t)^3 + \frac{f''_i}{6h_i}(t - t_{i-1})^3 + \left(\frac{f(t_{i-1})}{h_i} - \frac{f''_{i-1}h_i}{6}\right)(t_i - t) + \left(\frac{f(t_i)}{h_i} - \frac{f''_i h_i}{6}\right)(t - t_{i-1}) \tag{2.4} \]

where h_i = t_i - t_{i-1} for i = 1, 2, \ldots, n.

Figure 2.8: Cubic splines: n polynomials for n + 1 points

The values of f''_i in eq. 2.4 still need to be computed. By imposing first-derivative continuity, obtained by equating the derivatives of eq. 2.4 at points i and i + 1, the linear system of eq. 2.5 is obtained for i = 1, 2, \ldots, n - 1:

\[ \frac{h_i}{h_i + h_{i+1}}f''_{i-1} + 2f''_i + \frac{h_{i+1}}{h_i + h_{i+1}}f''_{i+1} = \frac{6}{t_{i+1} - t_{i-1}}\left[\frac{f(t_{i+1}) - f(t_i)}{h_{i+1}} - \frac{f(t_i) - f(t_{i-1})}{h_i}\right] \tag{2.5} \]

There are, in this system, only n - 1 equations for n + 1 unknowns. Two arbitrary conditions must therefore be added. In the present case, the second derivatives at the beginning and the end of the spline are set equal to zero: f''_0 = f''_n = 0. The resulting linear system is tridiagonal and can therefore be solved at low cost using, for instance, the Thomas algorithm [2].

Figure 2.9: Reference curve mapping the Cartesian space

2.3.2 Deformation parameterization

As mentioned in the first chapter, there are many options available to describe shapes as functions of a set of control points. Since GRUMMP readily supports geometries with curved boundaries and since boundary points coincide with control points (see section 2.2.5), the domain's shape parameterization is done automatically during mesh generation. However, shape deformations are made outside the mesh generator. This implies that an independent algorithm must ensure that boundaries are smoothly deformed after the control points are displaced. To achieve this, the original admissible Cartesian space can be remapped based on a parametric reference curve. As shown in figure 2.9, parameter t represents the position along a curvilinear abscissa s, while parameter z describes the positions of the points in the normal direction, at a given abscissa. In this figure, z_0 would have a negative value and z_1 a positive one. This reference curve can be defined in various ways. One simple solution is to have it coincide with the starting shape. All the required parametric information can be directly obtained from GRUMMP. For instance, the normal vectors are readily available because an exact geometric representation of the boundaries is used when generating the meshes. The values of the z parameter would of course be set to 0 for all the points, since they lie directly on the reference curve.
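Returning to the spline construction of section 2.3.1, the tridiagonal system of eq. 2.5 with natural end conditions can be solved with the Thomas algorithm. A minimal sketch (function name and test data are illustrative; for points on a straight line all second derivatives must come out zero):

```python
def natural_spline_second_derivatives(t, f):
    """Solve the tridiagonal system (2.5) for the second derivatives of a
    natural cubic spline (f''_0 = f''_n = 0) with the Thomas algorithm."""
    n = len(t) - 1
    h = [t[i] - t[i - 1] for i in range(1, n + 1)]   # h[k] = t[k+1] - t[k]
    # interior equations i = 1..n-1: a f''_{i-1} + 2 f''_i + c f''_{i+1} = d
    a = [h[i - 1] / (h[i - 1] + h[i]) for i in range(1, n)]
    c = [h[i] / (h[i - 1] + h[i]) for i in range(1, n)]
    d = [6.0 / (t[i + 1] - t[i - 1])
         * ((f[i + 1] - f[i]) / h[i] - (f[i] - f[i - 1]) / h[i - 1])
         for i in range(1, n)]
    b = [2.0] * (n - 1)
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, n - 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * (n - 1)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return [0.0] + x + [0.0]    # natural end conditions f''_0 = f''_n = 0

# Collinear data f = 1 + 2t: the natural spline degenerates to the line itself.
fpp = natural_spline_second_derivatives([0.0, 1.0, 2.5, 4.0], [1.0, 3.0, 6.0, 9.0])
```

The solve is O(n), which is what makes rebuilding the boundary spline after every design update cheap.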
This feature is not currently implemented, but would be desirable for practical applications. At the moment, a slightly more complicated method is used, in which the reference curve is defined analytically in a form similar to that of eq. 2.6. This definition suits inverse problems well because the reference curve can be set as the target solution, allowing an easier assessment of the results.

\[ r(t) = \begin{pmatrix} f(t) \\ g(t) \end{pmatrix} \tag{2.6} \]

The increased difficulty comes from the fact that, for a point P not located on the reference curve, it is impossible to obtain the pair (t, z) directly. The only information available is the Cartesian coordinate (x, y) of P and the normal vectors n(t) at any t on the reference curve. To determine t, a point P_I, located at t = t_I on the reference curve, must be found such that P and P_I are collinear along the normal n(t_I). Since the normal is a function of the yet undetermined t_I, a search algorithm is required to obtain the parameter's value. Knowing that the cross product of a and b obeys

\[ a \times b \;\; \begin{cases} > 0 & \text{if } 0 < \theta < \pi \\ = 0 & \text{if } \theta = 0, \pi \\ < 0 & \text{if } \pi < \theta < 2\pi \end{cases} \tag{2.7} \]

where \theta is the angle from a to b, then from two starting guesses t_a and t_b, as shown in figure 2.10, a bisection method [23] looking for the zero of the cross product between n and v (the vector from P_I to P) will eventually converge to the value of t_I. Once all boundary vertices and control points are mapped into the parametric space, the domain's boundaries can be deformed with the cubic spline interpolation approach of section 2.3.1. As presented in figure 2.11, based on the parametric coordinates (t, z) of the control points, it is now possible to locate, by interpolation, the positions of the boundary vertices given their curvilinear abscissae. The new parametric locations are described by the vector

\[ u(t) = p_i(t)\, n(t) \tag{2.8} \]

where p_i(t) is the interpolation polynomial's value for t \in [t_{i-1}, t_i], obtained with eq. 2.4. Since the connectivity information remains unchanged, the only remaining operations are projecting the boundary vertices from the parametric to the Cartesian space, updating the mesh information and moving the boundaries accordingly.

Figure 2.10: Illustration of the bisection algorithm

Figure 2.11: Motion of the boundary

There is one important pitfall associated with this method. Since the deformation parameterization is implemented without any dependence on the mesh generator, the high-order capabilities of the numerical solver (see section 2.5) cannot be exploited for geometries with curved boundaries. To obtain a high-order accurate solution, the curvature of the boundaries must be taken into account when performing the operations necessary for the solution of the state. This information does not exist independently inside the solver, but is instead provided by GRUMMP. The routines used for deformation parameterization would need to calculate the curvature of the deformed boundaries for high-order schemes to work properly. This has not been implemented in this work and, therefore, numerical solutions are limited to second-order accuracy. With the boundary movements calculated as a function of the control points, the construction of the grid function X(z) can be completed by defining how these deformations propagate to the internal grid points.

2.4 Mesh deformation

In order to handle domain deformations, an algorithm must be implemented to spread geometry modifications throughout the mesh. A robust, yet expensive, explicit deformation scheme accomplishes this task. This technique has been widely used, notably in [37, 43, 38].
Knowing the displacement of the control variables located on the boundary, and having computed the displacement vectors \delta r_m of the boundary vertices, the goal is to make the movement \delta r_I of each internal vertex proportional to its distance from the boundary vertices. This is described by eq. 2.9:

\[ \delta r_I = \frac{1}{\alpha_I} \sum_{m \in \partial\Omega} w_m \alpha_{mI}\, \delta r_m \tag{2.9} \]

where \partial\Omega represents the deformed boundary. In eq. 2.9, w_m is a scalar weight which is, in two dimensions, equal to the sum of half the lengths of the edges sharing the boundary vertex m, and \alpha_{mI} represents the local value of a discrete Green's function with \beta > 2, an arbitrary parameter. The lower this parameter is set, the more localized the deformations will remain. \alpha_I is a normalization factor given by

\[ \alpha_I = \sum_{m \in \partial\Omega} w_m \alpha_{mI} \]

The deformation law of eq. 2.9 has a theoretical interpretation which will not be discussed here; details can be found in [38]. The cost of mesh deformation with this algorithm is proportional to the product of the number of boundary vertices and the number of internal vertices. The number of operations can become very high for three-dimensional problems. Other approaches might need to be considered for bigger meshes, but for the two-dimensional problems studied in this work, this technique is appropriate. Deformed meshes are presented in figure 2.12. With successive large shape variations, the quality of the mesh will eventually degrade. The only option at this point is to create a new computational grid, effectively causing the optimization problem to reset. As mentioned in section 2.2.5, the re-meshing process is made simpler by using boundary points as control variables. With the grid function now defined, the method used to compute the state variables U can be presented.
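A minimal sketch of the explicit deformation law of eq. 2.9, assuming unit boundary weights w_m and an inverse-distance power for the discrete Green's function \alpha_{mI} (both simplifications relative to the actual implementation; interior points are assumed not to coincide with boundary vertices):

```python
def deform_interior(interior, boundary, boundary_disp, beta=4.0):
    """Explicit mesh deformation following eq. 2.9: each interior vertex
    moves by a normalized, distance-weighted average of the boundary
    displacements. Unit boundary weights w_m are assumed for simplicity."""
    moved = []
    for (xi, yi) in interior:
        wsum, dx, dy = 0.0, 0.0, 0.0
        for (xm, ym), (ux, uy) in zip(boundary, boundary_disp):
            dist = ((xi - xm) ** 2 + (yi - ym) ** 2) ** 0.5
            alpha = 1.0 / dist ** beta      # decays with distance; beta > 2
            wsum += alpha                   # normalization factor alpha_I
            dx += alpha * ux
            dy += alpha * uy
        moved.append((xi + dx / wsum, yi + dy / wsum))
    return moved

# Sanity check: a rigid translation of the whole boundary must carry the
# interior points along exactly, because the weighted average of identical
# displacements is that displacement.
boundary = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
disp = [(0.1, 0.0)] * 4
new_pts = deform_interior([(0.5, 0.5)], boundary, disp)
```

The double loop makes the boundary-times-interior cost scaling mentioned above directly visible.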
2.5 Numerical solver

In the context of shape optimization, the numerical solver is used to compute the solution to the partial differential equations describing the physics of the problem at hand. This solution, obtained by satisfying eq. 2.3, allows one to obtain the state variables U(z) on the domain \Omega. A central part of this research consists of investigating the feasibility of generic shape optimization. A toolkit capable of numerically simulating a wide array of partial differential equations is therefore required. It was outlined in chapter 1 that the finite-element method has been the method of choice in previous research on the generic solution of PDEs. However, as illustrated in [5, 51], the finite-volume method might provide a simpler alternative for decoupling the physical and numerical aspects of the problem. The physics is taken into account by the calculation of fluxes, which are free from numerical intricacy. This section focuses on the general operation of ANSLib, the numerical solver used in the present work. With the domain decomposed into a number of smaller control volumes and the grid function X(z) defined, the PDEs of the problem can be discretized using the formulation presented in section 2.5.1. The resulting system of equations is then solved for the control-volume averaged values of the unknowns.

Figure 2.12: Mesh deformation examples, with (b) \beta = 4 and (c) \beta = 32

In order to get a smooth solution for accuracy purposes, the numerical solver performs a reconstruction of this solution over the computational domain. This process is detailed in section 2.5.2. The handling of the boundary conditions by ANSLib is then discussed in section 2.5.3. Finally, a brief introduction to the Newton-Krylov implicit solver is made in section 2.5.4.
2.5.1 Finite-volume formulation

To derive the finite-volume formulation for a partial differential equation, the first step is to write this equation in a general conservation form. For a two-dimensional problem, this leads to

\[ \frac{\partial U}{\partial t} + \frac{\partial F}{\partial x} + \frac{\partial G}{\partial y} = S \tag{2.10} \]

where U is the solution vector, corresponding to the state variables, while F and G are respectively the x- and y-components of the flux vector. The source term is represented by S. Integrating eq. 2.10 over the control volume area,2 the following is obtained:

\[ \int_A \frac{\partial U}{\partial t}\,dA + \int_A \frac{\partial F}{\partial x}\,dA + \int_A \frac{\partial G}{\partial y}\,dA = \int_A S\,dA \tag{2.11} \]

Defining the vector \vec{F} = F\hat{i} + G\hat{j}, eq. 2.11 can be rewritten as

\[ \int_A \frac{\partial U}{\partial t}\,dA + \int_A \nabla \cdot \vec{F}\,dA = \int_A S\,dA \tag{2.12} \]

Applying Gauss's theorem to the second term of the left-hand side, eq. 2.12 becomes

\[ \int_A \frac{\partial U}{\partial t}\,dA + \oint_{\partial A} \vec{F} \cdot \hat{n}\,ds = \int_A S\,dA \tag{2.13} \]

If it is assumed that the size and shape of the control volume are fixed, eq. 2.13 can be simplified further:

\[ \frac{d}{dt}\int_A U\,dA + \oint_{\partial A} \vec{F} \cdot \hat{n}\,ds = \int_A S\,dA \tag{2.14} \]

The contour integral in eq. 2.14 is evaluated along the boundaries of the control volume, with \vec{F} representing the flux vector across these boundaries. Equation 2.14 is mathematically equivalent to eq. 2.10; it has only been rewritten in such a way that the fluxes are evaluated through a contour integral. The next step introduces the finite-volume approximation into eq. 2.14. It supposes that the value of U within a control volume is constant and equal to the control-volume average. For a variable Y on the control volume j, the control-volume average is defined as

\[ \overline{Y}_j = \frac{1}{A_j}\int_{A_j} Y\,dA \tag{2.15} \]

By substituting the state variable vector U in eq. 2.13 with the approximation of eq. 2.15 and rearranging terms, the finite-volume approximation of eq. 2.10 is finally obtained:

\[ \frac{d\overline{U}_j}{dt} = -\frac{1}{A_j}\left(\oint_{\partial A_j} \vec{F} \cdot \hat{n}\,ds - \int_{A_j} S\,dA\right) \tag{2.16} \]

The right-hand side of this equation is known as the flux integral, even though it includes the source term integral. Eq. 2.16 is solved for every control volume.

2In three dimensions, the integration would be done over the volume.
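As a concrete instance of eq. 2.16, the sketch below evaluates the flux integral for one-dimensional linear advection on a periodic grid with first-order upwind interface fluxes and no source term. The setting is deliberately simpler than the two-dimensional triangular-mesh case, but the telescoping of interface fluxes makes the conservation property of the finite-volume method directly checkable.

```python
def fv_residual(u_bar, dx, speed=1.0):
    """Flux integral of eq. 2.16 for 1-D linear advection u_t + a u_x = 0
    (a > 0) on a periodic grid with first-order upwind interface fluxes:
    d(u_bar_j)/dt = -(F_{j+1/2} - F_{j-1/2}) / dx."""
    n = len(u_bar)
    # flux[j] is the flux through the LEFT face of cell j (upwind value)
    flux = [speed * u_bar[j - 1] for j in range(n)]
    return [-(flux[(j + 1) % n] - flux[j]) / dx for j in range(n)]

r = fv_residual([1.0, 2.0, 4.0, 1.0], dx=0.25)
total = sum(r)   # every face flux appears twice with opposite sign
```

Because each interface flux enters the two neighboring cells with opposite signs, the residuals sum to zero on a periodic grid: the scheme conserves the cell averages exactly, independent of the flux accuracy.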
The integration is carried out with Gauss quadrature [17], whose accuracy must match the solution accuracy (see section 2.5.2). This method seeks to optimize the integration scheme by carefully selecting the points at which the function is evaluated. It allows exact integration of a polynomial function of degree 2m - 1 with m points. Third- and fourth-order solutions, using respectively quadratic and cubic reconstruction, use two integration points, while second order (linear reconstruction) only requires one point. By preserving the average over every control volume, the finite-volume method allows the conservation of the state variables. This makes the method particularly attractive for fluid flows, where the conservation of mass, momentum and energy is critical.

2.5.2 Solution reconstruction

In the previous section, it was mentioned that by using the finite-volume method, only the control-volume averaged values are computed. The difference between the average solution in adjacent control volumes is first-order in the mesh spacing. The solution of eq. 2.16 would therefore only be first-order accurate if the fluxes at the control volume boundaries were calculated using the averaged values. The solution needs to be reconstructed over the domain to achieve a higher order of accuracy [57, 50]. By this process, the averaged solution is replaced, in every cell, by a low-degree polynomial, creating a smooth approximation over the entire control volume. Depending on the desired order of accuracy, the polynomial function can be linear (second order), quadratic (third order) or cubic (fourth order).

Figure 2.13: (a) Control-volume averaged and (b) reconstructed solutions
By minimizing, in a least-squares sense, the error in predicting the values in the neighboring control volumes, while preserving the mean in the current cell,3 the coefficients of the polynomial function can be determined. These polynomial functions are then used to compute, with improved accuracy, the fluxes at the cell interfaces. Figure 2.13,4 on which the height of a point represents its value, illustrates the difference between a control-volume averaged solution and a second-order accurate reconstructed solution. By comparing both representations, it is evident that the fluxes computed using the reconstructed values will be more accurate. It is important to note that the polynomials are only valid over their given control volumes. At a common interior boundary face, the values extrapolated from two neighboring cells will be different. Both of these values are available for flux computation. It is up to the user to decide whether one or the other, or perhaps a combination of both, should be used.

3The computed control volume average should match the average of the polynomial produced by reconstruction. Conservation of the mean acts as a constraint on the coefficients.

4Taken from [5].

2.5.3 Boundary conditions

The fluxes at control volume boundaries coinciding with the domain's boundaries need to agree with the problem's boundary conditions. To impose these conditions, constraints can be added directly to the least-squares reconstruction problem. Dirichlet, Neumann or mixed conditions are enforced by imposing the desired solution or its derivative at Gauss points located directly on the boundary. However, this means that the conditions are only strictly enforced at these points. Elsewhere, they are satisfied to within truncation error (the error coming from the use of reconstruction polynomials).
The flux F, calculated by reconstruction at these boundary faces, can then be used directly to solve eq. 2.16. For more complex boundary conditions (imposed stress in solid mechanics, compressible flow conditions), a weak formulation must be used. Instead of using a constraint, a boundary flux F_b, defined by the user, is imposed. When solving the system of eq. 2.16, the solver will use this user-defined flux instead of F. ANSLib recognizes two different types of fluxes: the interior fluxes, at faces inside the computational domain; and the boundary fluxes, at faces corresponding to the domain's boundaries. The interior fluxes are always computed using F, while at the boundaries, F or F_b can be used depending on the type of boundary conditions. The solver will choose which type of flux must be used depending on the implementation of the physics of the problem. More elaborate details can be found in [51]. With the numerical aspects of the problem defined, the system of eq. 2.16 can now be solved to steady state by the Newton-Krylov GMRES technique presented in the next subsection.

2.5.4 GMRES algorithm

Because of the repeated solution evaluations required in optimization problems, convergence to steady state should ideally be attained in a relatively short amount of time. An efficient solver is an essential part of a shape optimization algorithm. Even if explicit time-advance solvers are accurate, their convergence rates are far too slow for optimization applications. An implicit solver using Newton's method with a preconditioned Krylov GMRES (generalized minimal residual) method is currently being implemented in ANSLib [45]. It allows substantial savings in computational time compared to an explicit solver and is therefore well suited for this research work.
However, since the implementation work is still under way, applications had to be limited to problems in conduction and solid mechanics. At steady state, eq. 2.16 can be rewritten as

\[ -\frac{1}{A_j}\left(\oint_{\partial A_j} \vec{F} \cdot \hat{n}\,ds - \int_{A_j} S\,dA\right) = R(X, U) = 0 \tag{2.17} \]

where R is the nonlinear system of equations formed from the spatial discretization.5 Newton's method can be utilized to solve eq. 2.17:

\[ J^n \Delta U^n = -R^n \tag{2.18} \]

with J^n being the flow Jacobian matrix evaluated at iteration n,

\[ J = \frac{\partial R}{\partial U} \]

Equation 2.18 is solved for \Delta U^n, and the solution update is made according to eq. 2.19:

\[ U^{n+1} = U^n + \Delta U^n \tag{2.19} \]

The Jacobian is a large sparse matrix. It is usually non-symmetric and often ill-conditioned. An effective implementation of Newton's method requires the efficient solution of the linear system of eq. 2.18. A Krylov GMRES technique in matrix-free form [53] is used to achieve this. To keep memory requirements at a reasonable level, the restarted version of the GMRES algorithm, denoted GMRES(k), is utilized. It aims to minimize the norm of the residual vector over a Krylov subspace of size k. Increasing the number of search directions (increasing k) leads to a more accurate solution update for each GMRES iteration. The drawback is that memory usage increases linearly, and computational time quadratically, with k. The matrix-vector products involving the Jacobian of eq. 2.18 are approximated, at every iteration, with a first-order forward difference:

\[ Jv \approx \frac{R(X, U + \epsilon v) - R(X, U)}{\epsilon} \tag{2.20} \]

where v is a normalized vector and \epsilon a small scalar value typically chosen to be equal to the square root of machine precision. Consequently, the Jacobian matrix does not need to be formed explicitly. This is the reason why this implementation is dubbed matrix-free GMRES.

5Until steady state is reached, the values taken by R are the residuals.
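A minimal sketch of the matrix-free product of eq. 2.20, using a two-equation toy residual in place of the discretized flux integral (names and the residual itself are illustrative, not taken from ANSLib):

```python
def jacobian_vector_product(R, U, v, eps=1e-8):
    """Matrix-free approximation Jv ~= (R(U + eps*v) - R(U)) / eps used by
    Newton-Krylov solvers: the Jacobian is never formed explicitly."""
    base = R(U)
    pert = R([u + eps * vi for u, vi in zip(U, v)])
    return [(p - b) / eps for p, b in zip(pert, base)]

# Toy residual R(U) = [u0^2 - 1, u0*u1]; its exact Jacobian at (2, 3) is
# [[4, 0], [3, 2]], so J applied to (1, 0) gives the first column (4, 3).
R = lambda U: [U[0] ** 2 - 1.0, U[0] * U[1]]
jv = jacobian_vector_product(R, [2.0, 3.0], [1.0, 0.0])
```

Inside GMRES, only products of this form are ever required, which is why the scheme needs two residual evaluations per Krylov vector and no Jacobian storage.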
To improve efficiency, preconditioning of the GMRES approximation of the Jacobian matrix, clustering the eigenvalue spectrum around unity, can be applied to eq. 2.18:

\[ JV^{-1}\left(V\Delta U\right) = -R \]

where V represents the preconditioner matrix. Since preconditioning is still under development, it is neither discussed, nor used, in this research.

2.6 Objective function and constraints

With the grid function X(z) and the state variables U(z) characterized, the objective function \mathcal{J}(X(z), U(z)) can now be defined. This objective function is obviously specific to every optimization problem. Object-oriented programming simplifies the implementation of the objective function. Only certain problem-specific routines need to be written.6 The optimization framework remains exactly the same. The objective function can take many forms. For example, for aerodynamics problems, a contour integral evaluating the drag or the lift is often used. On the other hand, domain integrals can be used in diffusion problems. It is important to note that the Gauss integration schemes presented in section 2.5.2 are used to calculate the value of any objective function requiring the evaluation of an integral. For gradient-based algorithms, this function must be continuously differentiable over the design space. Most practical optimization problems have to be constrained to be physically feasible. For example, a drag minimization problem requires the lift to be constrained. Mathematically,

6Implementation details for the complete optimization framework are presented in section 3.5.
The first family includes everything that is related to the physics of the problem (every quantity that depends directly on U (z)). The second family is used to constrain the shape of the domain and its boundaries. The simplest way of implementing state constraints is by adding quadratic penalty terms directly into the objective function definition [22, 58]. B y doing so, the problem can st i l l be casted in an unconstrained framework. Consequently, the relatively simple unconstrained optimizat ion algorithm can st i l l be used even if constraints are present in the problem. As presented in eq. 2.2, equality and inequality constraints can be encountered. For equality constraints of the form A — A0 = 0, the penalty term Ce is written as where A and B are parameters computed based on the actual values of the functions X and U, A0 and B° are the target values of these parameters and finally, r\ and 7 are constants dictat ing the magnitude of each penalty term. Setting these constants can be challenging: if they are set too high, the convergence of the optimizat ion process wi l l be extremely slow, while if they are set too low, the constraints wi l l not be properly enforced. One common approach starts wi th low values to approach the desired solution quickly. After convergence is reached, 77 and 7 are gradually increased unt i l , after two successive restarts, no more change is observed in the solution. However, this technique requires a more involved implementation on the part of the user. W i t h the penalty terms included, the objective function can now be writ ten as (2.21) while for inequality constraints B — B° < 0, the penalty C j is expressed as (2.22) J = $ + Ce+Ci (2.23) CHAPTER 2. OPTIMIZATION PROBLEM FORMULATION 32 Figure 2.14: Representation of the barrier geometric constraints where $ = $ (X, U) is the expression that is to be minimized wi th respect to the set of constraints. 
In this work, two different methods are used to enforce the geometric constraints. Firstly, penalty methods can be utilized to constrain geometric parameters such as the thickness or the area contained inside a closed boundary. On the other hand, some problems, such as the design of turbine blades, require the geometry to be contained inside a bounding box. This prevents two neighboring blades from coming too close to each other. Control points are therefore forced to remain inside a certain part of the design space. As illustrated in figure 2.14, maximum and minimum values of the parameter z are assigned to every control point at the beginning of the optimization run. They are then limited to move within z_min ≤ z ≤ z_max. If the absolute minimum is located outside the constrained design space, some of the points will lie on the boundary of the bounding box when convergence is reached. It is important to note that, to respect the first-order optimality condition (see section 3.1), the gradient components corresponding to the points located on the limits of the design space should be set to zero.

All the elements necessary to express a numerical shape optimization problem in a well-posed mathematical framework have now been described. The next step is to define the optimization algorithm that uses these tools to find the minimum of the objective function.

Chapter 3

Optimization algorithm

It has been shown, in the previous chapter, that the constrained generic shape optimization problem presented in section 2.1 can be expressed with an unconstrained formulation. This is accomplished by imposing the constraints with a set of penalty terms added to the objective function. Other constraints, such as the ones imposed by the solution of the state equations (eq. 2.3), are automatically satisfied, at every location in the domain, by the numerical solver. Therefore, they do not need to be explicitly considered.
The simplicity of the algorithms designed for unconstrained problems is, most certainly, their predominant advantage. A gradient-based algorithm relying on a line search procedure is used in the present work. Because of the relatively small number of state solutions generally required to find a minimum, it represents an appropriate choice for practical applications. To ensure that a minimum is found by the implemented algorithm, a set of optimality conditions must be fulfilled. These conditions are elaborated on in section 3.1.

As the name indicates, in gradient-based methods, the evaluation of the objective function's gradient with respect to the design variables is a critical operation. The accuracy and the computational cost of the gradient play a central role in determining the performance of the whole optimization framework. Having the shape modifications exclusively ruled by the value of these gradients is enough to justify the need for accuracy. Furthermore, because repeated re-evaluations are required at every iteration, efficiency is also essential. More details on gradients are offered in section 3.2. At this initial stage of the research, the finite-difference method, a simple yet inefficient alternative, has been adopted. However, due to its importance, an overview of preferable methods for gradient computation is deemed necessary to guide future research. Adjoint methods and automatic differentiation techniques are respectively outlined in sections 3.2.3 and 3.2.4.

With knowledge of the gradient, a search direction vector dictating the motion of the control points can be obtained. These displacements should lead to an improved design, which causes the value of the objective function to decrease. Two approaches yielding a search vector from the gradient are implemented: the steepest descent method and the BFGS method. They are explained in more detail in section 3.3.
The line search procedure then determines by what distance the control points should be moved along the search direction to decrease the value of the objective function. The choice of this step length is explained in section 3.4. Finally, in section 3.5, implementation details are presented. This allows appreciation of how all the elements presented in chapter 2 interact with the optimizer to solve a given shape optimization problem.

3.1 Optimality conditions

In unconstrained optimization, the goal is to minimize an objective function J that depends on a certain number of variables z, with no restriction at all on the values of these variables. Unfortunately, gradient-based algorithms do not guarantee convergence to the global minimizer¹ z* of J over the domain Z_adm. They are however able to find a local minimizer, which is a point that achieves the smallest value of J in its neighborhood. For the remainder of this work, only local minimizers, denoted z*, are considered.

When the objective function is smooth and twice continuously differentiable, it is possible to recognize a minimizer z* by examining the objective function's gradient ∇J(z*) and Hessian ∇²J(z*) [48]. By Taylor's theorem, in the neighborhood of z,

J(z + αq) = J(z) + α ∇J(z)ᵀ q + O(α² ‖q‖²)    (3.1)

where q represents feasible search directions over the design space and α is a scalar value such that z + αq is included inside the design space. Thus, to have a minimizer at z*, eq. 3.2 must be respected in the neighborhood of z*:

J(z* + αq) ≥ J(z*)    (3.2)

Therefore, from eqs. 3.1 and 3.2, a necessary condition for J to have a local minimizer at z* is

∇J(z*)ᵀ q ≥ 0    (3.3)

for every feasible q. For unconstrained optimization, every direction is feasible, which means that for every q, the direction −q must also be considered. The condition of eq. 3.3 will only hold if

∇J(z*) = 0.    (3.4)

¹A point z* is a global minimizer if J(z*) ≤ J(z) for all feasible z.
Equation 3.4 is called the first-order necessary condition and holds for any stationary point (minima, maxima and saddle points). To find the nature of the stationary point, the next term in the Taylor expansion of eq. 3.1 must also be considered:

J(z + αq) = J(z) + α ∇J(z)ᵀ q + (α²/2) qᵀ ∇²J(z) q + O(α³ ‖q‖³)

If it is assumed that eq. 3.3 holds at the stationary point, then, to have a minimum as defined in eq. 3.2, it is necessary that

qᵀ ∇²J(z*) q ≥ 0    (3.5)

holds for every q ≠ 0. For unconstrained optimization, the situation simplifies to:

• Second-order necessary conditions for a minimum: ∇J(z*) = 0 and ∇²J(z*) is positive semi-definite.

• Second-order sufficient conditions for a minimum: ∇J(z*) = 0 and ∇²J(z*) is positive definite.

In the current implementation, the second-order conditions do not need to be enforced explicitly. Since the optimization is done through a line search method (see sections 3.3 and 3.4), the value of J is required to decrease at every iteration. The algorithm therefore cannot converge to a maximum. It is also highly unlikely that it terminates on a saddle point, because such a point is numerically unstable.

3.2 Gradient evaluation

The computation of the objective function's sensitivities with respect to the control variables plays a central role in gradient-based shape optimization algorithms [43, 37, 48]. An accurate knowledge of the gradient is required to eventually reach the desired minimum. Unfortunately, gradient calculation also constitutes the most challenging part of the process. Even though the steepest descent method (see section 3.3.1) has allowed the solution of optimization problems, as well as linear systems of equations, since the nineteenth century [14], efficient techniques for calculating gradients with respect to the shape only appeared in the mid-1980s [52].
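Before detailing how these sensitivities are computed, note that the optimality conditions of section 3.1 can be verified numerically for a simple analytic objective (an illustrative sketch; the quadratic test function and all names are assumptions, not part of the framework):

```python
import numpy as np

def fd_gradient(J, z, eps=1e-6):
    """Central-difference gradient of a scalar function J at z."""
    g = np.zeros_like(z)
    for i in range(z.size):
        e = np.zeros_like(z)
        e[i] = eps
        g[i] = (J(z + e) - J(z - e)) / (2.0 * eps)
    return g

def satisfies_optimality(J, z, tol=1e-4):
    """First-order condition (eq. 3.4): grad J(z*) = 0.
    Second-order sufficient condition (eq. 3.5): Hessian positive definite,
    here approximated by finite differences of the gradient."""
    g = fd_gradient(J, z)
    n = z.size
    H = np.zeros((n, n))
    eps = 1e-4
    for i in range(n):
        e = np.zeros(n)
        e[i] = eps
        H[:, i] = (fd_gradient(J, z + e) - fd_gradient(J, z - e)) / (2.0 * eps)
    H = 0.5 * (H + H.T)  # symmetrize before testing definiteness
    return np.linalg.norm(g) < tol and np.all(np.linalg.eigvalsh(H) > 0.0)

# Smooth convex test objective with minimizer z* = (0, 0).
J = lambda z: z[0] ** 2 + 2.0 * z[1] ** 2
```

At z* = (0, 0) both conditions hold, while at any non-stationary point the first-order test already fails.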
In the context of generic shape optimization, the gradient computation problem is further complicated because some methods, such as solving the adjoint equation at the continuous level, are problem specific. Either automatic differentiation or an adjoint method implemented at the discrete level could be used. However, the current implementation only uses finite differences. As will be shown in section 3.2.2, this choice can lead to some unavoidable problems.

This section first presents how the gradient of the objective function can be formulated based on the different components of the problem. Afterwards, a detailed review of gradient evaluation by finite differences is made. Even though they were not implemented, a brief introduction to adjoint methods and automatic differentiation has been deemed necessary, as this is where future work will need to be concentrated. Finally, a new idea allowing cheap computation of gradients is presented.

3.2.1 Gradient formulation

As mentioned earlier, the optimization problem's goal is to find the minimum of an objective function J. In the case of shape optimization, this objective function depends on a set of N_c points controlling the deformable geometry through parameterization. These control points, in turn, define the position of N_G grid locations on which the N_V state variables are calculated by solving the state equations. Formally, the objective function can be written as

J : Z_adm → ℝ,    z ↦ J(z) = J(X(z), U(z))

where Z_adm defines the admissible design space for the control points, z contains the set of control points located inside the geometric domain Ω, X is the grid function and U is the vector of state variables. The state variables are the solution to a non-linear system of equations obtained by spatial discretization,

R(X, U) = 0.

The objective function expresses dependencies on the grid and on the solution to the state equations, which both are functions of the control points.
Using the chain rule, the differentiation of the objective function with respect to the control points gives

dJ/dz = (∂J/∂X)(∂X/∂z) + (∂J/∂U)(∂U/∂z)    (3.6)

where ∂J/∂X is a [1 × N_G] row vector, ∂X/∂z a [N_G × N_c] matrix, ∂J/∂U a [1 × N_V] row vector and ∂U/∂z a [N_V × N_c] matrix. This ultimately leads to a gradient vector having N_c components. In eq. 3.6, the main difficulty resides in obtaining the state sensitivities ∂U/∂z. A popular way of dealing with this term is to introduce an adjoint state. Since future work should concentrate on obtaining accurate gradients efficiently, adjoint methods are described in section 3.2.3. For the moment, however, only an approximation by finite differences has been implemented.

3.2.2 Finite differences

Finite differences are the simplest approach to obtain a reasonably accurate estimate of the sensitivities with respect to control parameters. An approximation of the gradient vector dJ/dz can be obtained by evaluating J at N_c + 1 points located close, in the design space, to the actual solution. The one-sided difference formula is defined, in the present context, as

∂J/∂z_i ≈ [J(X(z + ε e_i), U(z + ε e_i)) − J(X(z), U(z))] / ε    (3.7)

with e_i a unit vector chosen to be perpendicular to the surface at the location of every control point. By applying eq. 3.7 for i = 1, 2, ..., N_c, an approximation of the gradient vector can be built. The evaluation procedure consists of perturbing the N_c control points one at a time, moving each by ε. Then, after obtaining the new locations of the internal grid points with the deformation law of section 2.4, the solution to the state equations is re-computed for the altered geometry. The objective function's value is then re-evaluated based on this new solution. Applying eq. 3.7, the gradient components can finally be obtained. The main advantage of this method is that the flow solver can be treated as a black box. This makes finite differences attractive to use in conjunction with commercial software.
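The procedure of eq. 3.7 can be sketched in a few lines; here objective(z) stands for the whole black-box chain (mesh deformation followed by a state solution and the evaluation of J), and all names are illustrative:

```python
import numpy as np

def finite_difference_gradient(objective, z, eps=1e-6):
    """One-sided differences (eq. 3.7): N_c + 1 objective
    evaluations, i.e. one extra state solution per control point."""
    J0 = objective(z)
    grad = np.zeros_like(z)
    for i in range(z.size):
        zp = z.copy()
        zp[i] += eps                       # perturb control point i along e_i
        grad[i] = (objective(zp) - J0) / eps
    return grad
```

With an analytic stand-in such as objective = lambda z: z[0]\*\*2 + 3.0\*z[1], the result approaches (2z₁, 3) as ε shrinks; as discussed next, however, ε cannot be made arbitrarily small in floating-point arithmetic.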
However, some clear difficulties can be associated with this method. Firstly, efficiency issues come from the need to compute the solution of the state equations for every gradient component. Thus, the number of control parameters must be limited to a small number. For two-dimensional problems, generally requiring few parameters to control Ω, the burden of computational work is still within reasonable tolerances. However, it quickly becomes impractical to use finite differences when the number of control variables increases, for example when dealing with complex three-dimensional geometries.

Furthermore, the choice of ε can be difficult. Recognizing that eq. 3.7 is based on Taylor's theorem [48] and assuming the objective function to be twice continuously differentiable, then

J(z + δ) = J(z) + ∇J(z)ᵀ δ + ½ δᵀ ∇²J(z + tδ) δ,   t ∈ (0, 1)    (3.8)

If a value L is chosen to bound the size of the Hessian ‖∇²J‖ in eq. 3.8 in the neighborhood of z, then the last term is bounded by (L/2)‖δ‖², leading to

|J(z + δ) − J(z) − ∇J(z)ᵀ δ| ≤ (L/2) ‖δ‖²    (3.9)

If the vector δ is chosen to be equal to ε e_i, then by rearranging eq. 3.9 and observing that ∇J(z)ᵀ δ = ε ∇J(z)ᵀ e_i, it is possible to rewrite eq. 3.7 as

∂J/∂z_i = [J(z + ε e_i) − J(z)] / ε + C,   where |C| ≤ (L/2) ε    (3.10)

and to remark that the error C decreases as ε approaches zero. This suggests that ε should be chosen as small as possible. Unfortunately, this expression does not take into account the roundoff errors that are introduced when J is evaluated on a computer using floating-point arithmetic. In double-precision arithmetic, the unit roundoff u is of the order of 10⁻¹⁶. If it is assumed that |J| is bounded by L_J in the neighborhood of z, then the differences between exact values J and computed values J_c must obey the relations

|J_c(z) − J(z)| ≤ u L_J,   |J_c(z + ε e_i) − J(z + ε e_i)| ≤ u L_J.

Using the computed values in eq. 3.10, instead of the exact values, leads to the error term

|C| ≤ (L/2) ε + 2 u L_J / ε    (3.11)

Naturally, the goal is to select the ε that gives the smallest error possible.
Minimizing the bound of eq. 3.11 with respect to ε gives

ε = 2 √(u L_J / L).

If it is assumed that the problem is well-scaled, meaning that the ratio of the objective function values to the second-derivative values remains modest, choosing the perturbation size equal to the square root of machine precision is close to optimal. This last equation gives an idea of the order of magnitude for ε. However, it is impossible to guarantee that the assumption leading to eq. 3.11 will hold. Furthermore, since the objective functions are usually highly non-linear, the estimated gradient value might depend on the choice of ε [37]. Typically, its value in practical applications should be in the range of 10⁻⁵ to 10⁻⁸. A large step size will introduce significant truncation errors, while a small one will cause subtractive cancellation errors.

The last limitation, and perhaps the most important one, is inherent in the form of eq. 3.7. The numerator is obtained from the difference of two nearly identical terms, which is then divided by a very small value. This leads to conditioning issues. The state equations must therefore be solved to the smallest possible residuals to ensure that the objective function evaluation contains as many significant figures as possible. This is why, in future investigations, it will be crucial to find a better approach to compute the objective function's gradient. The methods presented in sections 3.2.3 and 3.2.4 were not implemented for this research project. A detailed introduction is however necessary to guide the direction of future work.

3.2.3 Adjoint methods

As mentioned earlier, the state sensitivity term ∂U/∂z is the most difficult term to evaluate in eq. 3.6. To overcome this difficulty, the introduction of an adjoint state can be contemplated.
The idea is to consider the state equation T(X, U) = 0 as an additional constraint to the optimization problem and to introduce its Lagrange multipliers ψ [48]. The Lagrangian can be written as

L(z, U, ψ) = J(X(z), U(z)) + ψᵀ T(X(z), U(z)).    (3.12)

The adjoint state variables ψ associated with U are obtained by solving the linear adjoint state equation [37]

∂L/∂U = 0    (3.13)

which, after development, is written as

(∂T/∂U)ᵀ ψ = −(∂J/∂U)ᵀ.    (3.14)

The derivative term on the left-hand side comes from differentiating the state equations, while ∂J/∂U can either be computed by automatic differentiation or by using a centered difference formula. The expression on the left-hand side of eq. 3.14 can be obtained either by differentiating the state equations at the continuous level or by starting from the discretized version of this same state equation.

The first option is known as the continuous approach [32]. It aims to find the analytical expression of ∂T/∂U for a given state equation. Then, by discretizing the resulting expression, eq. 3.14 can be solved numerically to obtain the adjoint variables ψ. This technique has been used in aerodynamic shape optimization [33]. However, in the context of generic optimization, this method is not applicable since the differentiation of T is problem dependent. An approach better suited to the current research consists of constraining the optimization problem with the discretized version of the state equation, R(X, U) = 0. The adjoint equation is then written as

(∂R/∂U)ᵀ ψ = −(∂J/∂U)ᵀ.    (3.15)

The term ∂R/∂U in eq. 3.15 is the same Jacobian matrix as in eq. 2.18. The Newton-Krylov GMRES algorithm could therefore be used to efficiently solve the discrete adjoint state. Nemec and Zingg [47] present an implementation of a preconditioned Newton-Krylov GMRES algorithm computing the discrete adjoint state on structured grids.
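The discrete adjoint path can be illustrated on a toy linear state equation R(U, z) = AU − b(z) = 0 with objective J = gᵀU; the matrices and the dependence b(z) are invented purely for the example:

```python
import numpy as np

# Toy discrete state: R(U, z) = A U - b(z) = 0, objective J = g^T U.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
g = np.array([1.0, 2.0])
b     = lambda z: np.array([np.sin(z), z ** 2])    # invented dependence on the design variable
db_dz = lambda z: np.array([np.cos(z), 2.0 * z])

def dJ_dz_adjoint(z):
    # eq. 3.15: (dR/dU)^T psi = -(dJ/dU)^T, here A^T psi = -g
    psi = np.linalg.solve(A.T, -g)
    # chain rule: dJ/dz = psi^T dR/dz, with dR/dz = -db/dz here
    return psi @ (-db_dz(z))

def dJ_dz_fd(z, eps=1e-7):
    # reference value by one-sided finite differences (eq. 3.7)
    Jz = lambda zz: g @ np.linalg.solve(A, b(zz))
    return (Jz(z + eps) - Jz(z)) / eps
```

A single adjoint solve gives the whole gradient, regardless of how many design variables b depends on, which is precisely the property that makes the method attractive for large N_c.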
The solver presented in section 2.5.4 is a similar one, but is designed to work with unstructured grids. It is reasonable to expect that, in the future, it will be possible to obtain the solution to eq. 3.15 using this solver. Once the adjoint variables have been calculated, the gradient with respect to the control points can be computed directly using

dJ/dz = (∂J/∂X + ψᵀ ∂R/∂X) ∂X/∂z.    (3.16)

The approach leading to the expression of eq. 3.16 is presented in detail in [37]. The residual sensitivities ∂R/∂X can be computed by automatic differentiation or by the following centered difference formula:

∂R/∂X_i ≈ [R(X + ε e_i, U) − R(X − ε e_i, U)] / (2ε)    (3.17)

Since the state solution does not need to be recomputed, the use of eq. 3.17 is not expensive. The terms ∂J/∂X and ∂J/∂U can be obtained in the same way. By using an adjoint state, the cost of computing the objective function's gradient becomes almost independent of the number of control parameters, making this method very attractive for large-scale problems. Furthermore, compared to finite differences, the state solution does not necessarily need to be computed to tolerances as close to machine precision. This can drastically accelerate the optimization process since every flow solution will require less effort.

3.2.4 Automatic differentiation

Automatic differentiation is another technique for gradient evaluation. The principle behind it is that a function whose value is calculated by a computer program can be differentiated by differentiating every single line of this program. No matter how complicated its definition is, this function will always be formed from basic mathematical operations including, but not limited to, addition, subtraction, multiplication, division and trigonometric functions. There are various methods to perform automatic differentiation; they mostly differ in the way they implement this central idea. A very complete review is made in [24].
The various implementations can be divided into two families: the forward mode and the reverse (or adjoint) mode. The forward mode is simpler to implement, but can be computationally expensive. The adjoint mode, on the other hand, allows computation of the flow sensitivities at a cost independent of the number of control parameters. Improperly implemented, it might lead to the use of prohibitive amounts of memory because of the required storage. The adjoint mode is usually the preferred choice because of its higher efficiency.

To illustrate the principles behind automatic differentiation in reverse mode, assume a simple computer program performing the following operations to evaluate the function f:

u₁ = x
u₂ = x² + u₁²
f = u₁ + u₂

As was done with adjoint methods, a Lagrangian formulation is used, but now every line of the program is considered as a constraint:

L = u₁ + u₂ + ψ₁ (u₂ − x² − u₁²) + ψ₂ (u₁ − x)

where the ψᵢ are the Lagrange multipliers. The adjoint state system (see eq. 3.13) therefore takes the form

∂L/∂u₁ = 1 − 2ψ₁u₁ + ψ₂ = 0,   ∂L/∂u₂ = 1 + ψ₁ = 0    (3.18)

To find the values of the ψᵢ, the system of eq. 3.18 must be solved in reverse order. A great deal of data therefore has to be kept in memory, which is why an incorrect usage can become an issue; a careful implementation requires the process to be split into many parts. Once the ψᵢ are known, then

df/dx = ∂L/∂x = −2ψ₁x − ψ₂

More details concerning this technique can be found in [48, 43]. It is doubtful that, in its current form, ANSLib could be automatically differentiated. For instance, because they are not differentiable, boolean operations could not appear in the solver code. This is primarily why this technique is not explored further in this project. Some independent pieces of the code, such as the functions called to compute the mesh deformation or to parameterize the shape, could however be differentiated.
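The reverse sweep of the small example above can be written out explicitly and checked against direct differentiation (a sketch consistent with the adjoint system of eq. 3.18; f here is the toy program, not a solver routine):

```python
def f_direct(x):
    # forward program: u1 = x, u2 = x^2 + u1^2, f = u1 + u2
    u1 = x
    u2 = x ** 2 + u1 ** 2
    return u1 + u2

def df_dx_reverse(x):
    """Reverse-mode sweep: solve the adjoint system (eq. 3.18)
    in reverse order, then assemble dL/dx."""
    u1 = x                            # forward-sweep value kept in memory
    psi1 = -1.0                       # from dL/du2 = 1 + psi1 = 0
    psi2 = -1.0 + 2.0 * psi1 * u1     # from dL/du1 = 1 - 2 psi1 u1 + psi2 = 0
    return -2.0 * psi1 * x - psi2     # dL/dx = df/dx
```

For this program f(x) = x + 2x², so both routes give df/dx = 1 + 4x.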
The sensitivities ∂J/∂X and ∂J/∂U could then be obtained directly and used in eq. 3.16. Among all the differentiators available, ADOL-C [27] should be considered because it is designed to work in conjunction with computer codes written in the C/C++ language.

3.2.5 Incomplete sensitivities

Based on various numerical experiments using automatic differentiation, Mohammadi [41] observed that the variations of internal mesh nodes had no effect on an objective function evaluated through a boundary integral. In such problems, most of the contributions to the gradient come from the first few cells located close to the boundary. This observation has important implications. In fact, it implies that the sensitivity of the objective function with respect to the solution of the state equations can be neglected:

(∂J/∂U)(∂U/∂z) ≈ 0

Eq. 3.6 can therefore be reduced to

dJ/dz ≈ (∂J/∂X)(∂X/∂z)    (3.19)

This result applies when the objective function is defined as a contour integral over the geometry to be optimized (or at least some parts of it) and is of the form

J = ∮_{∂Ω} j(X, U) ds    (3.20)

where ∂Ω represents the contour of the shape to be optimized. This result is particularly important, as it allows a substantial reduction in the computational effort required for gradient evaluation. In fact, the state sensitivities ∂U/∂z no longer need to be evaluated. Both the objective function value and its gradient can therefore be obtained in virtually the same amount of time necessary to calculate the solution to the state equations. The terms in eq. 3.19 can be individually evaluated using automatic differentiation and finite differences. The whole gradient expression can also be approximated using

∂J/∂z_i ≈ [J(X(z + ε e_i), U(z)) − J(X(z), U(z))] / ε    (3.21)

The shape is modified through the control points and the mesh is then deformed accordingly. However, the solution does not need to be recomputed as is the case in section 3.2.2. It is simply reconstructed on the deformed mesh, assuming that the values at the cell centroids remain unchanged. By applying eq.
3.21, an approximation of the gradient can be obtained.

Aerodynamic shape optimization is the principal field of application of incomplete sensitivities. In fact, many problems, such as lift-constrained drag minimization, lift enhancement or the maximization of the lift-to-drag ratio, have objective functions that share the form of eq. 3.20. Using this method, Dadone et al. [16] were able to perform overnight optimization of three-dimensional wings in the transonic regime, using the Euler equations. Other applications are presented in [42, 39].

3.3 Descent direction

Once the gradient vector is obtained, a local approximation of the objective function can be made using a Taylor series expansion. This allows computation of a search direction for the current iterate k. Since the line search algorithms used in this work require progress towards the minimum at every iteration [48, 18], it is expected that q_k be a descent direction². Referring to eq. 3.1, if α is small enough, it is easy to see that q_k will be a descent direction if

∇J(z_k)ᵀ q_k < 0.    (3.22)

If the search direction is assumed to have the form

q_k = −B_k⁻¹ ∇J(z_k),    (3.23)

where B_k is a symmetric and nonsingular matrix, then it is easy to see that B_k must be positive definite to satisfy eq. 3.22:

∇J(z_k)ᵀ q_k = −∇J(z_k)ᵀ B_k⁻¹ ∇J(z_k) < 0

Once the search direction vector is known, the update of the design variables is made according to

z_{k+1} = z_k + α_k q_k    (3.24)

where α_k is the step length obtained through a line search procedure. To obtain global convergence, not only must the descent direction be chosen carefully, but the step length α taken in this direction must also obey a set of conditions. These are presented in detail in section 3.4. If the conditions on the line search are respected, then Zoutendijk's theorem holds³:

Σ_{k≥0} cos²θ_k ‖∇J(z_k)‖² < ∞    (3.25)

where θ_k is the angle between ∇J(z_k) and q_k.
Inequality 3.25 implies that

cos²θ_k ‖∇J(z_k)‖² → 0.    (3.26)

If the method for choosing the search direction q_k ensures that θ_k is bounded away from 90°, it follows that

lim_{k→∞} ‖∇J(z_k)‖ = 0.    (3.27)

²The term "descent direction" will be used to describe a search direction that provides a decrease in the objective function for a small enough step size.
³The proof of this theorem can be found in [48].

In other words, provided that the search directions are never too close to orthogonality with the gradient, the gradient norm converges to zero, satisfying the first-order necessary optimality condition. Two different ways of computing a descent direction (choosing B_k⁻¹) are implemented: the steepest descent method, presented in section 3.3.1, and the BFGS method⁴, which is outlined in section 3.3.2.

3.3.1 Steepest descent method

The steepest descent method chooses the simplest symmetric positive-definite matrix to obtain a descent direction. For B_k = I,

q_k = −∇J(z_k).    (3.28)

The convergence rate of this method is linear. Unfortunately, a problem arises when the Hessian matrix ∇²J(z_k) has widely varying eigenvalues, i.e. when its condition number is large. For a two-dimensional case, this results in very elongated level sets, as illustrated in figure 3.1. It can be observed that consecutive steepest descent directions tend to zig-zag, leading to extremely slow convergence. For an idealized method, with a quadratic objective function J(z) = ½ zᵀQz and an exact line search, it can be shown that

‖z_{k+1} − z*‖²_Q ≤ ((κ − 1)/(κ + 1))² ‖z_k − z*‖²_Q

where κ is the condition number of the Hessian and ‖·‖_Q denotes the norm induced by Q. If κ ≫ 1, it is easy to observe that the convergence to the minimum will be slow. This situation is very common in practical applications and can be observed when the sensitivity with respect to certain control variables is much greater than to others. To accelerate the optimization process, a better technique to compute the descent direction must be implemented.
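The effect of the Hessian's condition number can be reproduced with a small experiment on quadratic objectives J(z) = ½ zᵀQz (an illustrative sketch; the closed-form exact step used here is a convenient stand-in for a line search):

```python
import numpy as np

def steepest_descent(Q, z0, iters):
    """q_k = -grad J(z_k) (eq. 3.28) with the exact minimizing step
    along the search direction for J(z) = 1/2 z^T Q z."""
    z = z0.copy()
    for _ in range(iters):
        g = Q @ z                           # grad J = Q z
        if np.linalg.norm(g) == 0.0:
            break
        alpha = (g @ g) / (g @ (Q @ g))     # exact line search for a quadratic
        z = z - alpha * g
    return z

z0 = np.array([1.0, 0.01])
z_good = steepest_descent(np.diag([1.0, 2.0]),   z0, 20)   # condition number 2
z_bad  = steepest_descent(np.diag([1.0, 100.0]), z0, 20)   # condition number 100
```

After the same twenty iterations, the well-conditioned case is essentially converged while the ill-conditioned one has barely progressed, illustrating the slow, zig-zagging convergence described above.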
⁴Named after Broyden, Fletcher, Goldfarb and Shanno, who independently discovered this method.

[Figure 3.1: Typical convergence for the steepest descent algorithm]

3.3.2 BFGS method

To improve convergence rates, instead of using a linear approximation of J, as was the case for the steepest descent method, a quadratic model of the objective function can be utilized [48]:

m_k(q) = J(z_k) + qᵀ ∇J(z_k) + ½ qᵀ H_k q ≈ J(z_k + q)    (3.29)

where H_k = ∇²J(z_k) is the Hessian matrix. The minimum of m_k is found when

∇m_k = ∇J(z_k) + H_k q = 0.    (3.30)

Solving this system for q, the search direction of eq. 3.31, known as Newton's direction, is obtained:

q_k = −H_k⁻¹ ∇J(z_k).    (3.31)

It is easy to verify that eq. 3.31 only yields a descent direction if H_k is positive definite. From eq. 3.29, it is also possible to see that the step length α = 1 is a natural one for this method. However, there are many shortcomings to this method. Most importantly, it requires knowledge of the Hessian matrix, which is very costly to compute in shape optimization. Furthermore, the solution of the linear system of eq. 3.30 must be obtained at every iteration. Finally, the Hessian matrix is not guaranteed to be positive definite; the descent direction condition required for line search algorithms would therefore not be respected.

The BFGS method, which can be categorized among the quasi-Newton methods, aims to compute a descent direction respecting the model of eq. 3.29 without having to explicitly compute the Hessian. It looks for a matrix B_k that is symmetric positive definite, is easily invertible and somehow approximates the Hessian. If the notation ℋ_k = B_k⁻¹ is used, so that

q_k = −ℋ_k ∇J(z_k),    (3.32)

then, starting from ℋ₀ = I, the BFGS update formula can be written as

ℋ_{k+1} = (I − (s_k y_kᵀ)/(y_kᵀ s_k)) ℋ_k (I − (y_k s_kᵀ)/(y_kᵀ s_k)) + (s_k s_kᵀ)/(y_kᵀ s_k)    (3.33)

where

s_k = z_{k+1} − z_k,   y_k = ∇J(z_{k+1}) − ∇J(z_k).    (3.34)
The BFGS formula of eq. 3.33 yields a positive-definite ℋ_k matrix that approximates the inverse Hessian if the curvature condition

s_kᵀ y_k > 0    (3.35)

is respected. This condition imposes a restriction on the choice of z_{k+1} and therefore on the step length α_k. Equation 3.35 is guaranteed to hold if the Wolfe conditions on the line search, presented in the next section, are respected. The BFGS method allows for super-linear rates of convergence in the neighborhood of the minimum if a full Newton step (α = 1) is used. Further details on how this formula is obtained are given in [10, 18, 48].

3.4 Line search procedure

It has been mentioned in the previous section that global convergence to the minimum depends on the step length α_k. Ideally, α_k should be chosen to provide a substantial reduction of J. Choosing the global minimum of the one-dimensional function defined by

φ(α) = J(X(z + αq), U(z + αq)),   α > 0    (3.36)

would be ideal. Finding this minimum, which amounts to performing an exact line search, can be very expensive and is not necessary to guarantee global convergence. Practically, an inexact line search identifying a step length providing an adequate reduction of J is more appropriate.

For the algorithm to be globally convergent, the step lengths must be chosen to respect certain conditions. For example, it is not sufficient to require a simple decrease in the objective function, J(z_k + α_k q_k) < J(z_k). The strong Wolfe conditions [48, 10] are a popular set of conditions for performing an inexact line search. They can be written as

J(z_k + α_k q_k) ≤ J(z_k) + c₁ α_k ∇J(z_k)ᵀ q_k
|∇J(z_k + α_k q_k)ᵀ q_k| ≤ c₂ |∇J(z_k)ᵀ q_k|    (3.37)

where 0 < c₁ < c₂ < 1. These two equations are respectively known as the sufficient decrease condition and the curvature condition. They ensure a sufficient decrease in the objective function for a sufficiently large step length. Furthermore, if the BFGS method is used, respecting eq.
3.37 guarantees that the approximate inverse Hessian ℋ, obtained with the BFGS update, remains positive definite.

Given that q_k is a descent direction, the sufficient decrease condition can always be satisfied with a small step length. The curvature condition exists to prevent the selection of a step length so small that it would not make enough progress towards the minimum. Unfortunately, the curvature condition requires knowledge of the gradient at every tested step length. It would be too costly to enforce it explicitly when using finite-difference gradients. A backtracking approach allows one to dispense with the curvature condition and only use the sufficient decrease condition to terminate the line search. Backtracking proceeds by first selecting a long enough step size. If it satisfies the sufficient decrease condition, it is accepted; if not, it is multiplied by a contraction factor ρ (in this work, a value of ρ = 0.5 is used). The process is repeated until a satisfactory step length is found. It is important to note that the curvature condition can always be checked a posteriori. If it was not satisfied, the initial step size of the backtracking process is increased for the next iteration.

To limit the number of state solutions required and to prevent the solver from crashing because of badly shaped geometries, it is important to provide the backtracking routine with a good guess for the initial step size. For methods that do not produce well-scaled search directions, such as the steepest descent method, current information from the problem can be used. The guess can be made by assuming that the change in the objective function will be the same as the one obtained for the previous iteration:

α_k⁰ = α_{k−1} (∇J(z_{k−1})ᵀ q_{k−1}) / (∇J(z_k)ᵀ q_k)    (3.38)
V J ( z f c ) qk For quasi-Newton methods, a full Newton step a = 1, should always be tried and will eventually be accepted, thereby producing super-linear rates of convergence. As mentioned in section 3.3.2, the initial approximation for the inverse Hessian is set such that Ho = I-This choice, equivalent to doing a steepest descent iteration, is however not well-scaled to the problem and might prevent usage of (a = 1) in subsequent iterations. To scale Ho, eq. 3.39 is used after the first step has been computed, but before the first BFGS update is performed. Ho = ^ p / • (3.39) Vk Vk This equation approximates an eigenvalue of the exact initial inverse Hessian scaling Ho to the size of [ V 2 J " ( z 0 ) ] _ 1 - For more details, see [48]. Being able to scale the search direction vector makes the BFGS method even more attractive when compared to the steepest descent method. This generally allows to decrease the number of step lengths tried during the backtracking process, thus resulting in fewer state solutions. This leads to a noticeable reduction in the computational time. If the cost of computing the gradient is of the order of obtaining the solution to the state equation as would be the case when using an adjoint method, the line search procedure could be accelerated by building a cubic interpolant of eq. 3.36 [48]. This technique should be considered in future work. A l l the elements required to perform unconstrained optimization have now been described. The next section provides an insight on the details concerning the implementation of these strategies. 3.5 Implementation details This section presents some of the details concerning the implementation of the various ele-ments presented in chapter 2 and 3. As illustrated in figure 3.2, the optimization process can CHAPTER 3. 
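A minimal sketch of the backtracking procedure just described, enforcing only the sufficient decrease condition during the search (function and variable names are hypothetical, not the thesis implementation):

```python
# Backtracking line search with sufficient decrease only (contraction
# factor rho = 0.5, as used in this work). J is the objective callable,
# z the current control vector, q the descent direction, g the gradient
# at z. Returns an accepted step length, or None on failure.

def backtracking(J, z, q, g, alpha0=1.0, rho=0.5, c1=1e-4, max_tries=30):
    gq = sum(gi * qi for gi, qi in zip(g, q))   # directional derivative grad(J)^T q
    J0 = J(z)
    alpha = alpha0
    for _ in range(max_tries):
        z_trial = [zi + alpha * qi for zi, qi in zip(z, q)]
        if J(z_trial) <= J0 + c1 * alpha * gq:  # sufficient decrease condition
            return alpha
        alpha *= rho                            # contract and retry
    return None
```

For steepest descent, alpha0 would come from the heuristic of eq. 3.38; for the BFGS method the full Newton step alpha0 = 1 is always tried first.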
be subdivided into three parts: the preparation of the computational domain, which includes the definition of the grid function; the numerical solution of the partial differential equations describing the physics of the problem; and finally the optimization algorithm itself.

Figure 3.2: Overview of the optimization process

The operation of each of these parts is schematically illustrated in figures 3.3 to 3.5. Details about the grid function definition are given in section 3.5.1. The solver's operation is explained in section 3.5.2 and, finally, some additional details about the optimization algorithm are presented in section 3.5.3.

3.5.1 Grid function definition

Domain preparation is an important part of numerical simulation. When performing shape optimization, an additional difficulty comes from the fact that this domain is deformable. Defining how the boundaries are represented, how they move and how these displacements modify the computational mesh is an important part of the optimization process. Figure 3.3 presents, in detail, the interaction between the elements that are necessary to define the grid function X(z).

Figure 3.3: Details of the domain preparation and deformation

Boundary definition and control points

As mentioned in section 2.2.5, the control points are boundary points located on deformable boundaries. By proceeding this way, the information required for shape parameterization and deformation is contained in the boundary file read at run time.
For the moment, reference curves are defined with analytic functions. However, in the future, when the mesh generator and the solver are well integrated together, this way of defining control points will allow the mesher to automatically take charge of the shape parameterization. GRUMMP would be able to handle the boundary deformations by using the exact geometric representations it generates to construct guaranteed-quality meshes. The domain deformations would then be taken into account by the mesh generator itself.

Mesh refinement

The number of cells in the mesh can be critical to the accuracy of the solution of the state equations. Increasing the number of cells, especially in regions where the changes in the solution are important, usually results in improved accuracy. The computational grids are constructed based on the boundary file provided by the user. Refinement is made based on geometric characteristics of the boundaries. The user can also control the refinement and grading of the mesh by setting parameters at run time.

Reference curve

Since this optimization framework is still in its primary development phase, only reference curves defined analytically are used in this research. This implies that additional functions defining this curve need to be created for every studied problem. This requirement should disappear in the future.

Remeshing

After a few iterations, some cells in the mesh can become too deformed. This can result in a loss of accuracy when solving the state equations. It is sometimes better to create a new mesh based on the actual locations of the control (and boundary) points. The decision to re-create the mesh is made based on the minimal and maximal angles in the triangulation forming the current mesh. These two angles (θ_min and θ_max) are evaluated during the first iteration. Then, after every geometry modification, all the angles in the mesh are verified.
If one of them is smaller than 0.9 θ_min or larger than θ_max/0.9, the mesh is re-generated. Other quality measures are available from GRUMMP; testing more sophisticated remeshing criteria should be part of future research.

It is important to note that, since the objective function depends on the computational grid, the optimization problem needs to be reset when a new mesh is generated. Because the problem is posed discretely, the desired minimum will be modified. The extent of this modification depends on how well the mesh is adapted to the studied problem.

Mesh deformation

To follow the motion of the boundary, the internal grid points must also be displaced. In this research, an explicit deformation law is used. It is controlled by the parameter β, which can be set at run time by the user. The lower this parameter, the more localized the mesh deformations remain.

3.5.2 Numerical solution of the state

To calculate a numerical solution, a reconstruction-based finite-volume solver repeatedly performs the following operations (illustrated in figure 3.4), starting from an initial solution provided by the physics class or by the solution from the previous iteration:

1. If necessary, set the boundary constraints for the reconstruction.
2. Reconstruct the solution over all control volumes.
3. Evaluate the fluxes along control volume boundaries.
4. Accumulate the fluxes in the proper control volumes.
5. Evaluate the source term in every control volume.
6. Solve eq. 2.18 by minimizing the residuals over the Krylov subspace.
7. Perform the Newton update of the state variables.

Of all these operations, 1, 3 and 5 are specific to each problem. The remaining steps are strictly numerical methods and no knowledge of the problem is necessary to perform them. The idea behind the generic solver is to have all these numerical operations performed by a
finite-volume toolkit. This toolkit then makes calls to an external physics package to retrieve the information necessary to perform the problem-specific calculations.

Figure 3.4: Details of the numerical solver

By using object-oriented programming, these physics packages can all be built using the same standard interfaces. These interfaces are known as classes. This allows simple implementation of physics packages by the user, without having to worry about the numerical issues. A more thorough discussion of the implementation of the physics class and the generic solver is given in [5].

3.5.3 Optimization algorithm

Presented in figure 3.5 are the details of the interaction between the various operations used in the optimization algorithm. The main elements required by the gradient-based optimization algorithm are: the evaluation of the objective function, the computation of the gradient, the determination of the search direction and the line search.

Figure 3.5: Details of the optimization algorithm

Objective function and constraints

For every optimization problem, problem-specific functions have to be defined. They are needed to evaluate the value of the objective function, which includes penalty terms accounting for constraints. Once again, using object-oriented programming allows the creation of classes. New optimization problems can therefore be set up in a short amount of time.

Gradient computation and search direction

Currently, the gradient is computed using finite differences. This implies that, for every gradient component, a complete state solution has to be computed. This technique is adequate for the problems studied in this work, but it limits the application to simple problems that only require a limited number of control variables.
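The forward-difference gradient described above can be sketched as follows; each call to J stands for a complete state solution, and all names are illustrative assumptions rather than the thesis implementation:

```python
# One-sided (forward) finite-difference gradient of the objective with
# respect to the control variables. One extra state solution is needed
# per gradient component, which is why the technique only scales to a
# limited number of control variables.

def fd_gradient(J, z, eps=1e-7):
    J0 = J(z)                       # state solution at the current controls
    grad = []
    for i in range(len(z)):
        z_pert = list(z)
        z_pert[i] += eps            # perturb one control variable
        grad.append((J(z_pert) - J0) / eps)   # O(eps)-accurate component
    return grad
```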
Once the gradient is known, a search direction, which should yield a decrease in the objective function, can be obtained. Two different techniques are implemented: the steepest descent method and the BFGS method.

Wolfe conditions

The Wolfe conditions are used to determine whether a given step length respects the requirements leading to global convergence. If the sufficient decrease condition is satisfied for a long enough step length, the new geometry is accepted, resulting in an improved solution. Otherwise, the step length is reduced by a factor of 2 and a new solution of the state equations is calculated. Because it requires knowledge of the gradient of the objective function for every step length tried, the curvature condition can only be assessed a posteriori, once the sufficient decrease condition has been verified. The values c_1 = 10^-4 and c_2 = 0.9 are used in all the validation problems presented in chapter 4. Setting c_1 higher (e.g. 10^-2) can accelerate convergence, but can also cause the line search procedure to fail more often.

Stopping conditions

The stopping conditions are implemented to determine whether the optimization run has found the desired minimum, or whether it has reached a solution from which no further progress can be achieved. It was said in section 3.1 that the first-order necessary condition for a stationary point is that the gradient of the objective function be equal to zero. The goal of an unconstrained optimization algorithm is therefore to drive the gradient norm as close to zero as possible. In the validation problems presented in the next chapter, convergence is considered to be attained when the gradient L2-norm is reduced by four orders of magnitude.

It is possible that the algorithm reaches a solution where no further progress towards the minimum can be made.
Usually, this situation happens in the neighborhood of the minimum, where the gradient might not be precise enough to compute a feasible search direction. If the step length accepted by the line search is such that the maximum displacement among all the control points is less than ε (see section 3.2.2), then the optimization run is stopped. The value of the gradient at this point can be assessed by the user, who can decide whether the actual solution is satisfactory or, if not, reformulate the problem by using, for example, a finer mesh or more control points.

Chapter 4

Validation problems

In this chapter, the correct operation of the many elements composing the shape optimization framework is verified. To accomplish this, four test problems are used. The first one, presented in section 4.1, is a strictly geometric problem. The idea is to test components such as the deformation parameterization or the gradient computation without having to solve partial differential equations, obviously allowing for faster optimization runs. This problem is also used to compare two different methods of constraining with penalty terms. In section 4.2, a physics package corresponding to the Laplace equation is introduced for the second test case. It can be seen as a potential flow problem in which no circulation, and therefore no lift, is present. Since this is an inverse problem, the target to reach is already known. Section 4.3 presents another inverse problem, this time using the Poisson equation. A more complex geometry is tested, showing the versatility of the deformation parameterization. The final problem, presented in section 4.4, uses a solid mechanics physics package.

4.1 Constrained geometric problem

For this first test problem, no physics package is used. The objective function is strictly defined based on the domain's shape and grid function: J = J(X(z)).
The time required for every function evaluation is therefore relatively small since the numerical solver is not utilized. This allows for a quick assessment of the performance of the deformation parameterization, the mesh deformation, the gradient computation and the line search. Furthermore, because constraints are to be enforced, the use of penalty terms, included directly in the objective function, can also be tested.

The objective of the problem consists of minimizing the perimeter P of a given shape such that its area A is equal to an arbitrary constant A_0. More formally, this can be written as:

min P  subject to  A = A_0.    (4.1)

The shape resulting from the solution of eq. 4.1 should be a circle. The value of A_0 is set equal to π such that a unit circle should be obtained. The objective function can therefore be expressed as:

J = P + η (A - A_0)²    (4.2)

where η is a penalty coefficient. The solution of eq. 4.1 is not unique, in the sense that every circle satisfying the area constraint constitutes a local minimizer.¹ With gradient-based optimization, there is no guarantee that, for instance, the algorithm converges to the unit circle centered at the origin. The final geometry depends on the initial solution. The initial geometry, as well as the shape obtained after convergence, are shown in figure 4.1.

The objective function evaluation is made directly by GRUMMP. Even though no partial differential equation is solved inside the domain, it still needs to be triangulated in order to evaluate the objective function. By summing the areas of these triangles, the total area A is calculated. The perimeter P is measured by summing the lengths of all boundary edges.

To parameterize the deformation, a reference curve is required. For this problem, it is defined as a circle centered at the origin:

(x(t), y(t)) = (R cos(πt), R sin(πt)),  t ∈ [0,2[

where R = 0.5 is the radius of the circle and t is the curvilinear abscissa in the parametric space. The point t = 0 is located on the x-axis. The value of t increases following the reference curve counter-clockwise.

Figure 4.1: Initial and final geometries for the constrained geometric problem

A total of 12 control points are used to define the shape. They are approximately located every 30 degrees along the reference curve. From these control points, the locations of the boundary vertices can be determined with cubic spline interpolation. The interior mesh is deformed using the explicit law presented in section 2.4, with β = 2. Because the evaluation of the objective function can be made quickly, a centered-difference formula, which is second-order accurate, is used to compute the gradient vector. Once the gradient is known, the descent direction is calculated using either the steepest descent method or the BFGS method.

Two different methods are used to set the penalty coefficient η. The simplest way starts with a large enough η, keeping it constant throughout the optimization process. A slightly more complicated approach starts with a smaller value and eventually performs updates. The problem is first partially converged (in this case, until the gradient's L2-norm is reduced by three orders of magnitude). The coefficient η is then multiplied by a factor of 10. This is repeated until a tolerance of 0.005 is reached, at which point the problem is fully converged. The starting values of η are set such that η = r J(X_0), where r is a constant. For the simple method, r is initially set at 150, while it is set at 15 for the method with updates. Convergence histories of this test problem are presented in figure 4.2.

¹ On the continuous level, every unit circle would actually be a global minimizer of the objective function.
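For illustration, the penalized geometric objective can be evaluated for a polygonal boundary exactly as described above, with the perimeter taken as the sum of boundary-edge lengths and the area from the shoelace formula (equivalent to summing triangle areas). This is a sketch with hypothetical names, assuming the quadratic penalty form J = P + η(A - A_0)², not the GRUMMP code:

```python
# Penalized objective for a closed polygon given as a list of (x, y)
# vertices in counter-clockwise order: perimeter plus a quadratic
# penalty on the area constraint A = A0.
import math

def penalized_objective(pts, A0=math.pi, eta=150.0):
    n = len(pts)
    # perimeter: sum of boundary-edge lengths
    P = sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))
    # area: shoelace formula
    A = 0.5 * abs(sum(pts[i][0] * pts[(i + 1) % n][1]
                      - pts[(i + 1) % n][0] * pts[i][1] for i in range(n)))
    return P + eta * (A - A0) ** 2
```

For a fine polygonal approximation of the unit circle with A_0 = π, the returned value approaches the target perimeter 2π as the constraint violation vanishes.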
Figure 4.2: Convergence history of the geometric problem; (a) constant penalty coefficient, (b) updated penalty coefficient

These plots clearly indicate that the BFGS method performs better than the steepest descent method. This can be explained by the fact that, in the neighborhood of the minimum, the quadratic penalty term dominates the expression of the objective function. Since the BFGS method builds a local quadratic approximation and then chooses a step length such that the minimum of this approximation is obtained, the minimizer is likely to be reached more quickly. In the results obtained with the steepest descent method, oscillations can be observed in the gradient norm. These are the result of the impossibility of obtaining well-scaled search directions. It therefore becomes difficult to compute a step length that leads close to the minimum; the successive solution updates instead tend to jump from one side of the desired minimum to the other, which makes progress quite slow.

Since all the methods converge to roughly the same point, thereby demonstrating good global convergence behavior, only the results obtained with the BFGS method using constraint coefficient updates are presented in table 4.1. As can be seen, the geometry obtained after convergence is not centered at the origin. If it were, the values of z would all be equal to 0.5. It is, however, nearly a perfect circle.² Depending on the method used, the resulting perimeters vary from 6.28175 to 6.28177, while the areas lie between 3.1363 and 3.1370. Of course, the target perimeter is 2π, while the target area is π. This gives a maximum error of 0.023% for the perimeter and 0.168% for the area. The target solution could probably still be better approximated by increasing the value of the constraint coefficient η.
control point      t        z_init.   z_fin.
 1                 0        0.7130    0.5053
 2                 0.1667   0.8500    0.4808
 3                 0.3333   0.3120    0.4611
 4                 0.5      0.2500    0.4520
 5                 0.6667   0.4397    0.4547
 6                 0.8333   0.5378    0.4691
 7                 1        0.7655    0.4919
 8                 1.167    0.5378    0.5161
 9                 1.333    0.4397    0.5358
10                 1.5      0.7000    0.5472
11                 1.667    0.5000    0.5444
12                 1.833    0.3000    0.5291

Table 4.1: Locations of the control points in the parametric space for the geometric problem.

Updating the coefficient appears to yield better performance when using the steepest descent method. However, for the BFGS method, there is no clear advantage for either of the techniques. Fewer iterations are required to reach convergence when the coefficient is kept constant because the initial gradient norm is higher in this case; with updates, the problem must be converged further in order to reach the stopping criterion, explaining its increased number of iterations. Note that the moments at which the constraints are updated can easily be seen in figure 4.2. At these points, the gradient norms jump dramatically. This is a combined effect of the objective function suddenly changing and of the mesh being re-generated.

The framework needed for the two variants of the penalty method has been implemented and the choice is left to the user. It is, however, important to note that, according to Fiacco and McCormick [22], updating the penalty coefficient improves the likelihood of global convergence to the desired minimum. Because penalty methods can be affected by ill-conditioning, the domain of attraction may be prohibitively small if the value of η is chosen too large. This is especially the case if many constraints are present.

This problem verified the correct operation of various components of the optimization framework. The results are as expected.

² Obviously, a perfect circle cannot be obtained since the circumference is approximated using a collection of straight boundary edges.
The next step is to consider objective functions defined based on the solution of a state equation.

4.2 Inverse problem using the Laplace equation

Having verified the proper operation of various components of the optimization framework, the numerical solver can now be added into the process. The second test problem requires the solution of a partial differential equation, namely the Laplace equation, in order to evaluate the objective function value. This equation is relatively easy to solve numerically. Even if its study does not present great practical interest, it is nevertheless an excellent tool for validation purposes. This problem aims at finding the shape of a domain Ω such that the contour integral around its boundary ∂Ω matches a known potential function φ_0. The objective is therefore to find the shape that minimizes the following objective function:

J(X, U) = ∮_∂Ω (φ - φ_0)² ds.    (4.3)

The Laplace equation can be used to describe potential flows [60]. The state variable φ, representing the potential function, is obtained by solving:

∇²φ = 0.    (4.4)

To find the expression of the fluxes, eq. 4.4 can be written in the form of eq. 2.10 using:

U = (φ),  F = (∂φ/∂x),  G = (∂φ/∂y),  S = 0.

The fluxes can therefore be expressed as:

F = (∂φ/∂x, ∂φ/∂y)^T.    (4.5)

The target geometry for this problem is a circle. To create a potential flow around a circle, a uniform flow of velocity V_∞ and a doublet can be superposed.³ After simplification, the potential function on the boundary ∂Ω of a unit circle is given by:

φ_0 = 2 ||V_∞|| cos θ    (4.6)

where θ is the angle between the x-axis and a segment joining the origin and a point along the circle. The boundary conditions are set such that ||V_∞|| = 100 and that ∂φ/∂n = 0 on ∂Ω. They are imposed by constraining the reconstruction.

The reference curve is defined exactly the way it was for the problem of section 4.1. The control points, which are listed in table 4.2, are also defined in the same manner as in the geometric problem.
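As a quick consistency check (not part of the thesis code), the target potential of eq. 4.6 can be recovered by superposing the uniform-flow and doublet potentials on the unit circle; the value ||V_∞|| = 100 matches the quoted boundary conditions, and the function names are hypothetical:

```python
# Potential of a uniform flow plus a doublet sized for a unit circle,
# in polar coordinates (r, theta); on r = 1 it reduces to eq. 4.6.
import math

V_INF = 100.0  # ||V_inf|| as quoted in the boundary conditions

def phi_superposition(r, theta):
    # uniform flow: V_inf * r * cos(theta); doublet: V_inf * cos(theta) / r
    return V_INF * (r + 1.0 / r) * math.cos(theta)

def phi_target(theta):
    # eq. 4.6 evaluated on the boundary r = 1
    return 2.0 * V_INF * math.cos(theta)
```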
The initial geometry, shown in figure 4.3, is an ellipse centered at the origin having a major axis of 2.2 and a minor axis of 1.8. The Laplace equation is converged until the L2-norm of the residuals is 10^-13. Values of ε ranging from 10^-5 to 10^-7 were tested for the one-sided finite-difference formula used to compute the gradient. They all returned similar results. The smallest value of 10^-7 is retained for this inverse problem.

In the convergence histories presented in figure 4.4, it is again possible, as expected, to see that the BFGS method performs better than the steepest descent method. This is especially true when the solution approaches the minimum. It is also important to notice that a sharp increase in the gradient norm can be observed when the mesh is re-generated.

Figure 4.3: Initial and final geometries for the inverse Laplace problem

Since the objective function depends on the grid, the new mesh yields a high value for the grid-dependent term of eq. 3.6. The value of the objective function also gets modified. However, since the grid refinement is chosen such that the solution is grid-converged,⁴ the effect can hardly be noticed. The absolute minimum of the objective function is equal to 0. As can be seen in figure 4.4, this ideal value is not reached by either of the methods. It was observed that refining the mesh further helps to approach this ideal value, a sign that the remaining discrepancy is mostly due to discretization errors. The parametric locations for the results obtained with the BFGS method, given in table 4.2, are very close to the target z = 0.5. This demonstrates that the addition of the numerical solver into the optimization framework gives valid results. The next step is to test the framework with a more complicated shape.

³ For potential flows, the velocity vector is given by V = (u, v)^T = (∂φ/∂x, ∂φ/∂y)^T.
⁴ Grid convergence analysis is made by evaluating the objective function using the target geometry.

Figure 4.4: Convergence history of the inverse Laplace problem

control point      t        z_init.   z_fin.
 1                 0        0.6       0.4996
 2                 0.1667   0.5380    0.5007
 3                 0.3333   0.4397    0.5009
 4                 0.5      0.4       0.5011
 5                 0.6667   0.4397    0.5013
 6                 0.8333   0.5380    0.5010
 7                 1        0.6       0.5004
 8                 1.167    0.5380    0.5026
 9                 1.333    0.4397    0.5013
10                 1.5      0.4       0.5003
11                 1.667    0.4397    0.5015
12                 1.833    0.5380    0.5005

Table 4.2: Locations of the control points in the parametric space for the Laplace problem.

4.3 Inverse problem using the Poisson equation

In the previous problems, the optimization framework has been shown to work with simple geometries. However, one of the future objectives will be to solve design problems in two-dimensional aerodynamics. It is therefore fitting to verify the framework's operation when dealing with airfoil-like geometries. For this problem, instead of solving a PDE describing the flow around the airfoil, the Poisson equation characterizes a diffusion phenomenon inside the domain Ω. The inverse Poisson problem presented in this section is inspired by a test case discussed in [37].

The objective function for this problem is defined as:

J(X, U) = ∫_Ω (u - u_0)² dA    (4.7)

where u is the steady-state solution of the Poisson equation

∇²u = S in Ω,  u = 0 on ∂Ω.    (4.8)

This is virtually the same state equation as the one used in section 4.2, except that the source term S is no longer equal to 0. The expression for the fluxes is exactly the same.
The target solution u_0 is given by

u_0(x, y) = y² - 0.0169 x (x - 1)².    (4.9)

For this target solution to be achieved on Ω, S must be defined as

S(x, y) = -0.1014 x + 2.0676    (4.10)

and ∂Ω, corresponding to the target domain's boundary (and to the reference curve), is expressed as

(x(t), y(t)) = ((t - 1)², 0.13 t (t - 1) (t - 2)),  t ∈ [0,2[.    (4.11)

The boundary ∂Ω is interpolated with a total of 16 control points, of which 14 are allowed to move (see table 4.3). The points located at the leading (t = 1) and trailing (t = 0) edges are fixed and are assigned a parameter z = 0. The normal vector to the reference curve is obtained by rotating the unit tangent vector of eq. 4.11 by 90 degrees. The initial, final and target geometries are illustrated in figure 4.5. The initial meshed domain Ω_0 on which the state equation is solved is shown in figure 4.6.

Since the values of the largest components of the state variable vector are O(10^-3), the state equation can be converged to a residual L2-norm of O(10^-19). For the finite-difference gradient calculation, ε is set equal to 5·10^-11. Because the previous problems have clearly illustrated that the BFGS method offers better performance, the steepest descent search direction is not used for this test case. The convergence history of the inverse Poisson problem is shown in figure 4.7.

Figure 4.5: Initial and final geometries for the inverse Poisson problem

The first thing that can be noticed from figure 4.7 is that the gradient cannot be reduced by four orders of magnitude. The algorithm fails to compute a search direction providing a decrease in the objective function before this stopping condition is reached. The optimization process is therefore stopped.
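As an independent consistency check (not part of the thesis code), one can verify numerically that the target solution of eq. 4.9 satisfies eq. 4.8 with the source term of eq. 4.10. Because u_0 is cubic in x and quadratic in y, a second-order central difference reproduces its Laplacian essentially to rounding error:

```python
# Verify that the Laplacian of the target solution (eq. 4.9) equals the
# source term (eq. 4.10) using second-order central differences.

def u0(x, y):
    return y * y - 0.0169 * x * (x - 1.0) ** 2          # eq. 4.9

def S(x, y):
    return -0.1014 * x + 2.0676                         # eq. 4.10

def fd_laplacian(f, x, y, h=1e-3):
    # central second differences in x and y
    return ((f(x + h, y) - 2.0 * f(x, y) + f(x - h, y)) / h ** 2
            + (f(x, y + h) - 2.0 * f(x, y) + f(x, y - h)) / h ** 2)
```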
The fact that the gradient magnitude reaches a plateau around iteration 30 is a sign that the limit of application of finite-difference gradients has been attained. At this point, the error can be larger than the difference between two solutions separated by ε. This is largely due to the fact that, when the minimum is approached, the objective function value eventually becomes quite small. From eq. 3.18, it is possible to see that, when the algorithm stops, the numerator is O(ε·10^-8). This is a value approaching the residual L2-norm to which the state equation is solved. There is therefore no possibility of continuing the optimization process.

Figure 4.6: Meshed initial domain for the inverse Poisson problem

Figure 4.7: Convergence history of the inverse Poisson problem

Larger values of ε have been tried, leading to even worse results, probably because of the introduction of truncation errors in the solution. This is consistent with what was said about the theoretically optimal choice for ε described in section 3.2.2. To improve the performance of the optimization algorithm, a better technique to compute the gradient will need to be implemented.

It is also possible to see, in figure 4.7, that the objective function value increases when the mesh is re-created. The absolute change is however quite small, which is a good sign that the mesh used provides a grid-converged solution. As could be expected by looking at the control points' final locations in table 4.3, the problem is not fully converged. It is safe to say that, with the possibility of evaluating the gradient more accurately, the final results would be much closer to the desired values. This example shows that the algorithm performs well with more complex geometries.

control point      t         z_init.      z_fin.
 1                 0         0            0
 2                 0.07849   0.007619     -0.0004189
 3                 0.1940    0.002626     0.0001149
 4                 0.2921    -0.02331     0.001328
 5                 0.5524    0.005805     0.0003469
 6                 0.6812    0.01170      0.0008219
 7                 0.7729    0.006730     0.0005648
 8                 0.8566    0.001452     -0.0003425
 9                 1         0            0
10                 1.146     0.003330     0.002609
11                 1.226     0.004786     0.00006255
12                 1.319     0.01170      0.0008952
13                 1.448     0.005805     0.0001185
14                 1.708     -0.01732     0.001476
15                 1.806     -0.003350    0.0006108
16                 1.922     0.004011     0.001255

Table 4.3: Locations of the control points in the parametric space for the Poisson problem.

4.4 Optimization problem in solid mechanics

The final test problem is an attempt at verifying the optimization framework's capability of solving a direct problem (as opposed to the inverse problems of sections 4.2 and 4.3). To achieve this, a physics package designed to solve solid mechanics equations is used. These equations are most often solved with the finite element method. However, ANSLib allows the solution of plane-stress problems, given that the necessary description of the problem's physics is provided. The differential equations of motion of a deformable solid are [5]

∂σ_ij/∂x_j + B_i = 0    (4.13)

where B_i are the body forces in the i direction. Using small displacement theory, the strains for an isotropic material can be expressed using the following relationship:

ε_ij = (1/2) (∂u_i/∂x_j + ∂u_j/∂x_i)    (4.14)

where u is the displacement of the material. The plane stress assumption dictates that:

σ_xx = E/(1 - ν²) (ε_xx + ν ε_yy)
σ_yy = E/(1 - ν²) (ε_yy + ν ε_xx)
τ_xy = E/(2(1 + ν)) γ_xy    (4.15)

where E is the Young's modulus, ν the Poisson's ratio and γ_xy = ∂u/∂y + ∂v/∂x the engineering shear strain. By combining eqs 4.13, 4.14 and 4.15, the following can be obtained:

∂/∂x [E/(1 - ν²) (ε_xx + ν ε_yy)] + ∂/∂y [E/(2(1 + ν)) γ_xy] = -B_x
∂/∂x [E/(2(1 + ν)) γ_xy] + ∂/∂y [E/(1 - ν²) (ε_yy + ν ε_xx)] = -B_y    (4.16)

Respectively noting u and v the material displacements in x and y, the PDE of eq. 4.16 can be written in the form of eq. 2.10 using:

U = (u, v)^T
F = ( E/(1 - ν²) (∂u/∂x + ν ∂v/∂y),  E/(2(1 + ν)) (∂u/∂y + ∂v/∂x) )^T
G = ( E/(2(1 + ν)) (∂u/∂y + ∂v/∂x),  E/(1 - ν²) (∂v/∂y + ν ∂u/∂x) )^T
S = (B_x, B_y)^T.

The domain on which eq.
4.16 is applied can be seen in figure 4.8. For this problem, the Young's modulus is E — 2.1 • 10 6 while Poisson's ratio is v — 0.3. The normal and shear stress at the boundaries can be enforced using a boundary flux. The flux is expressed as: -ah -nx + Tb-ny -n -hx -ab-nx Fh = (4.17) The signs are chosen to agree with the sign convention for stresses (tensile stress is positive, compressive stress is negative). The boundary conditions for the current problem impose a tensile stress at the left and the right of the domain while the top and the bottom are left CHAPTER 4. VALIDATION PROBLEMS 73 Figure 4.8: Meshed domain for the solid mechanics test case free of stresses. The objective of the problem is to minimize the Von Mises stresses while keeping the area of the domain constant. The Von Mises stress is defined, at every point, as: where A is the area of the meshed domain. The constraint is enforced by adding a penalty term to the objective function as seen in eq. 4.19. The reference curve for this problem is defined as the in i t ia l circle that can be seen in figure 4.8. Twelve control points, located at every thir ty degrees along this circle are used. The design space was also l imited to insure that the hole remains defined wi th in the domain. Unfortunately, the L2-Norm of the residuals cannot be converged to less than O ( 1 0 - 1 0 ) wi th the solver in its current state. Since the components of the displacement vector are O ( 1 0 - 3 ) and the objective function is computed based on these displacements, there is no hope of obtaining a valid gradient using finite-differences. Th is test problem has identified certain shortcomings wi th the impl ici t solver. These are in the process of being corrected. However, at the moment, they l imit the application of the opt imizat ion framework to problems using (4.18) and therefore, the objective function takes the form (4.19) CHAPTER 4. 
the Laplace and Poisson equations, as exposed in sections 4.2 and 4.3. Furthermore, the shape optimization technique used in the current framework is not the most appropriate for practical structural optimization. Topology optimization [30] is surely a better choice than shape optimization when it comes to structural design. The fundamental concept of topology optimization consists of dividing the design space, where the structure should exist, into many smaller entities. Each of these entities can be removed from or added to the structure, allowing for a far greater range of possible designs. Deterministic and heuristic optimization schemes can be used to obtain the optimal design. These techniques might become useful in the future when dealing with multi-physics problems where the aerodynamic loads on a wing are considered. However, for the moment, efforts should be concentrated on making the current shape optimization framework more efficient.

Chapter 5  Conclusions and recommendations

A generic numerical shape optimization framework using triangular unstructured meshes has been developed. It integrates four distinct modules: a mesh generator, a generic numerical solver, a deformation parameterization routine coupled with a mesh deformation algorithm, and an optimizer. Starting from the computational domain's boundary definition, which includes a list of the control points, the mesh generator creates a computational grid composed of triangular elements. The quality of the triangulation is guaranteed by the meshing algorithm. The solutions to the partial differential equations describing the physical aspects of a given problem are obtained with a Newton-Krylov GMRES implicit solver. Inside this solver, the physical and numerical aspects of the problem are completely decoupled, allowing for generic applications.
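To make the solid mechanics test case of section 4.4 concrete, here is a small numerical sketch of the plane-stress law (eq. 4.15), the pointwise Von Mises stress (eq. 4.18) and a penalized objective of the form of eq. 4.19. This is an illustration only: in the framework the strains would come from the ANSLib displacement solution, while here they are supplied directly, and the penalty weight `omega` and target area `A0` are illustrative names, not identifiers from the thesis.

```python
import math

E, NU = 2.1e6, 0.3  # material constants from section 4.4

def plane_stress(eps_xx, eps_yy, gamma_xy):
    """Plane-stress constitutive law (eq. 4.15)."""
    c = E / (1.0 - NU ** 2)
    s_xx = c * (eps_xx + NU * eps_yy)
    s_yy = c * (eps_yy + NU * eps_xx)
    s_xy = E / (2.0 * (1.0 + NU)) * gamma_xy
    return s_xx, s_yy, s_xy

def von_mises(s_xx, s_yy, s_xy):
    """Pointwise plane-stress Von Mises stress (eq. 4.18)."""
    return math.sqrt(s_xx ** 2 - s_xx * s_yy + s_yy ** 2 + 3.0 * s_xy ** 2)

def objective(cell_stresses, cell_areas, omega, A0):
    """Integrated Von Mises stress plus a quadratic area penalty (eq. 4.19 form)."""
    J = sum(von_mises(*s) * a for s, a in zip(cell_stresses, cell_areas))
    A = sum(cell_areas)
    return J + omega * (A - A0) ** 2

# sanity check: the strains of a uniaxial stress state recover (s, 0, 0)
s = 100.0
stresses = plane_stress(s / E, -NU * s / E, 0.0)
```

For a uniaxial stress s, the strains ε_xx = s/E and ε_yy = −νs/E reproduce σ_xx = s and σ_yy = σ_xy = 0, and the Von Mises stress equals s, which is a quick consistency check on the constitutive relations.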
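The finite-difference accuracy limit observed in the validation problems can be reproduced in isolation on a scalar function: below some step size, subtractive cancellation in the one-sided difference (eq. 3.7) dominates, and once the step falls below machine precision relative to x the difference vanishes entirely. A minimal sketch (the function `f` is a stand-in; in the framework each evaluation requires a full state solution):

```python
import math

def f(x):
    # stand-in objective; in the framework each evaluation is a full state solution
    return math.sin(x)

def forward_diff(f, x, eps):
    """One-sided difference of eq. 3.7.

    Truncation error is O(eps); cancellation error is O(machine_eps / eps),
    so the total error is minimized near eps ~ sqrt(machine_eps).
    """
    return (f(x + eps) - f(x)) / eps

exact = math.cos(1.0)
err_coarse = abs(forward_diff(f, 1.0, 1e-2) - exact)   # truncation dominates
err_good = abs(forward_diff(f, 1.0, 1e-8) - exact)     # near-optimal step size
err_dead = abs(forward_diff(f, 1.0, 1e-17) - exact)    # x + eps rounds to x: difference is exactly zero
```

For intermediate tiny steps (around 10⁻¹²) the numerator is dominated by rounding noise in the two function values, which is precisely the situation the optimizer runs into once objective differences fall to the level of the solver's residual tolerance.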
The gradient of the objective function with respect to the shape is computed with finite differences. Based on this gradient, the geometry is deformed such that the objective function's value decreases. The deformation is defined through a set of control points. The position of every boundary vertex is then calculated, based on the locations of these control points, by cubic spline interpolation. With the new locations of these boundary vertices, the interior mesh is modified with an explicit deformation law. The constrained optimization problem is cast as an unconstrained problem by adding quadratic penalty terms to the objective function. Both the steepest descent method and the BFGS quasi-Newton method are used to solve this unconstrained problem.

On the basis of the validation problems presented in chapter 4, several conclusions concerning the performance and accuracy of the optimization framework can be drawn:

• An efficient gradient-based optimization method has been implemented. It allows us, at least in principle, to tackle any optimization problem whose physics is described by partial differential equations solvable in ANSLib and whose design space can be described by continuous variables. Some validation will still need to be performed to confirm the performance and accuracy of the algorithm on different problems.

• The creation of a computational grid is the initial step in the solution of the shape optimization problem. In this research, the mesh generator GRUMMP is used to create an unstructured triangular mesh of the domain. Because it utilizes algorithms guaranteeing the quality of the triangulation, it lends itself very well to automatic optimization applications. If mesh elements become too distorted, new grids can be created on the fly, without any intervention from the user.
Unstructured triangular meshes are an excellent choice for applications where moving boundaries are involved. The algorithms to deform unstructured meshes are, in general, simpler, faster and easier to implement than their structured counterparts. It should also be noted that the process of creating an unstructured grid is fully automated, resulting in an overall time gain for domain preparation.

• Even though generic applications could not be explored in this work, ANSLib currently allows the solution of many partial differential equations. The implicit Newton-Krylov GMRES solver cannot, at the present moment, converge the solution's residuals enough for the objective function evaluation to be sufficiently accurate. This, in turn, also causes the finite-difference gradient approximation to be inaccurate. The convergence issues are probably the result of an ill-conditioned Jacobian matrix. Preconditioning is currently being implemented and should lead to more accurate and efficient solutions. Since many state equation solutions are needed to solve a shape optimization problem, the improved efficiency brought by an implicit solver is essential. The current research cannot make further progress until the solver's development is complete.

• Being able to correctly simulate the physics involved in an optimization problem only constitutes a fraction of the task at hand. Therefore, having the possibility of using a generic solver does not automatically make the optimization framework applicable to every conceivable problem. Many problems may require special assumptions. For example, in structural design, many holes may be needed in a given beam to reach optimality. Design variables other than control points defining the contours of a shape must then be considered. This is one of the reasons why creating a truly generic shape optimization framework is impossible.
The current implementation has been developed with applications in aerodynamics in mind. It is believed that future work should concentrate on possible improvements in this area. Design variables such as the angle of attack or the spacing between the elements of an airfoil in high-lift configuration could then be considered. The versatility of the generic solver would still be exploited. It would allow, for example, quickly switching from the Euler to the Navier-Stokes equations depending on the desired type of analysis.

• Gradient evaluation through finite differences does not provide sufficient accuracy. This issue is inherent to the form of the one-sided difference equation (eq. 3.7) used to approximate the gradient. Because the numerator is composed of the difference of two values that become very similar as the minimum is approached, a point is eventually reached where this difference is no longer significant. When this situation occurs, as with the Poisson problem of section 4.3, a search direction providing a decrease in the objective function can no longer be computed. The optimization process is therefore brought to a halt.

• The cubic spline deformation parameterization works particularly well and is also quite versatile. Typical contours only require 12 to 20 control points for accurate representation, while still providing sufficient shape flexibility. The design space is therefore relatively compact, leading to better convergence rates. On the other hand, since this feature is implemented outside of the mesh generator, crucial information such as the local curvature is not accessible. Curved boundaries are therefore only approximated by a collection of straight segments. This has the consequence of preventing the use of the high-order capabilities of ANSLib. Currently, only second-order accurate solutions can be obtained.

• Once the locations of the boundary vertices are known, the internal mesh is deformed.
The implicit law appears to be an appropriate choice. Even though the required number of operations is O(n√n), where n is the number of mesh nodes, deforming the mesh is far less time consuming than evaluating the objective function. Until applications in three dimensions are developed, the current technique should be satisfactory.

• The search direction computation is done with both the steepest descent method and the BFGS method. As expected, the performance of the latter is better. Not only does this method require fewer iterations to converge (and therefore fewer state evaluations), but the fact that it produces well-scaled descent directions also accelerates the line search procedure. As a matter of fact, because a full Newton step (a step length of 1) will eventually satisfy the Wolfe conditions, it should always be tried as the initial guess of the Armijo backtracking process. This step length is most often accepted. Only one state solution is then required by the line search.

• The line search procedure using Armijo backtracking allowed global convergence for the validation problems presented in chapter 4. However, since Wolfe's curvature condition is not enforced explicitly, there is always a chance that it could be violated. In the worst case, this can cause the BFGS search direction to fail to provide a decrease in the objective function. Because computing the objective function's gradient is currently very expensive, the Armijo backtracking procedure was the only choice available.

• The penalty formulation, which is currently used to take constraints into account, appears to be performing well. In the problem of section 4.1, convergence to the desired minimum was observed while the constraint was respected. Of the two variants implemented (constant penalty coefficients and updated penalty coefficients), no important differences in performance were observed.
However, in the studied problem, only one constraint was present. As the number of constraints increases, it might become more difficult to set constant penalty coefficients while preserving global convergence properties. The method with penalty updates should therefore be utilized to ensure that the domain of attraction of the local minimum does not shrink and to ensure better convergence behavior.

• Convergence to local minima is a well-known limitation of gradient-based algorithms. This is well illustrated by the geometric optimization problem, where all circles having the correct area and located inside the design space are local minima. For the two inverse problems, the optimal solution is unique within the design space and convergence to the target geometry was expected. Since algorithms converging to a global minimum, such as stochastic methods, require a very large number of state solutions, it is often more efficient to obtain many gradient-based optimal solutions, all starting from different geometries, and then to pick the best design among these. These optimization runs can be done concurrently, on many processors, without having to parallelize the solver.

In summary, the interactions between the various elements of the optimization framework were verified. The results of the studied constrained and unconstrained optimization problems illustrate the validity of these elements, both as individual entities and as a whole. There are still some unsolved difficulties that will need to be addressed in order to make further progress. Future work should initially be focused on correcting these problems. Then, improvements of the framework's performance and accuracy will need to be considered. The next section discusses some possible paths that could be explored in the future.

5.1 Future work

Even though the optimization framework has shown promising results when dealing with simple test cases, major improvements will have to be made to tackle practical design problems. These improvements can be divided into two categories: the immediate requirements, without which no further progress can be made, and the desirable elements that will improve the framework's accuracy and range of application.

Immediate requirements

By looking at the test problems presented in chapter 4, it is easy to see that the current range of application is quite limited. This is mainly because this research was conducted concurrently with the development of the implicit Newton-Krylov GMRES solver. A broader spectrum of possible problems will be available once this solver is complete.

Gradient calculation is another area where modifications will have to be made. Computing gradients using finite differences was a good choice for the development phase of the optimization algorithm. However, it has many shortcomings, especially when it comes to efficiency. It is believed that discrete adjoint methods would be the best choice for future applications. The adjoint equation would be solved using the same GMRES algorithm that is utilized to obtain the state solution. Nemec [46] uses this technique on structured grids and
Another efficient gradient computation technique is already implemented, but st i l l requires val idation. Once the impl ici t solver works wi th addit ional physics package, it wi l l be easy to compare incomplete sensitivities (section 3.2.5) wi th classic finite-differences. Th is technique is however bound to introduce accuracy errors in the gradient which could have an adverse effect on the convergence behavior. W i t h the abil i ty to compute gradients quickly, some modifications to the line search proce-dure are in order. As mentioned earlier, the curvature condit ion cannot be enforced because it requires the knowledge of the gradient for every step length tested. However, wi th rapid gradient computation comes the possibil i ty of expl ici t ly enforcing this condit ion, making the optimizer more robust. Even though backtracking should implici tely enforce the curvature condit ion, there is no guarantee that the accepted step length wi l l be long enough. Th is might cause the B F G S update to fai l to obtain a descent direction or, even worse, to cause the whole opt imizat ion process to converge to a value that is not the desired min imum. No-cedal and Wright [48] propose a line search algorithm that builds a cubic interpolant of the objective function in the direction in which the line search is performed. Th is interpolant only requires the knowledge of the objective function's value and gradient at two points along this direction. The step length should then be chosen to coincide wi th the min imum of this cubic function. Th is procedure should accelerate the line search procedure while strict ly enforcing the strong Wolfe condit ion. Another shortcoming of the current implementation is the inabi l i ty to use the high order capabilit ies of the solver. Th is is due to the choice of the deformation parameterization that is implemented outside the mesh generator. Even though it works very well wi th the test cases, replacing it would be advisable. 
Two reasons motivated the current choice of deformation parameterization: firstly, it suited the planned inverse validation problems well and, secondly, if automatic differentiation were to be used for gradient calculation, it would have eliminated the need to differentiate the mesh generator. As adjoint methods now appear to be the method of choice for future gradient calculation, active use of the mesh generator can be made. Since GRUMMP utilizes exact geometric representations of the domain's boundaries, it could easily be used for shape parameterization. As a matter of fact, all the necessary information is already available in the mesh generator. The only task remaining would be its integration into the optimization loop. This choice presents many advantages over the current parameterization. Firstly, it is more versatile, since it can take into account every possible geometry that GRUMMP can produce. It is also a fully automated process that does not require any intervention from the user.

Desirable features

Adaptive meshing is a feature that could improve both the accuracy of solutions and their evaluation time. Currently, GRUMMP refines meshes based on the geometric characteristics of the domain's boundaries. Usually, this implies that high-curvature areas will contain more cells. This, of course, is adequate for geometric representation. However, the physical aspect of the problem might require a high cell concentration in sections where no particular geometric features are present. For example, to be accurately represented, shocks appearing in transonic regimes require a high mesh density on the top of the airfoil. Typically, this area does not have a high curvature compared to the leading edge. There is therefore a relatively low number of cells at this location (see figure 2.12).
Modifying the global refinement and grading factors when creating the mesh can add cells in the critical areas and improve accuracy. The drawback is that it also refines the mesh in unnecessary zones, slowing the solution process considerably. Solution-adaptive refinement increases the number of cells only in critical areas of the domain, based on error estimators. The mesh is usually changed until no more variation in the solution can be observed. This results in an optimal spatial discretization for a given problem, greatly improving the accuracy and certainly improving the performance of the optimization process.

Taking into account multiple-objective and multiple-point design problems are other features that will be worth exploring in the future. As indicated by their name, multiple-objective problems use more than one competing objective function. For example, one might want to maximize the lift coefficient of an airfoil while minimizing the drag coefficient. The geometries achieving the optimal performance for each of these objectives can obviously be different. Many techniques can be used to solve such problems. A common approach is the weighted-sum method: a weight is accorded to each objective, and the weighted objectives are then summed together. The same approach can be used to solve multiple-point design problems. This type of problem is encountered when the performance of a design needs to be optimized over a range of operating conditions. Designing a transonic airfoil required to fly between a minimum and a maximum Mach number is a good example of such a problem.

Other desirable elements will surely be identified as the framework's development continues. Future work will likely be driven by the needs of particular application problems. Until then, efforts should be concentrated on developing reliable applications in two-dimensional aerodynamics.
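Solution-adaptive refinement as discussed above can be illustrated with a deliberately simple 1D toy (this is not GRUMMP's mechanism): intervals are split only where a crude error indicator, here the jump in the discrete slope, is large, so points concentrate around a sharp solution feature while smooth regions keep their coarse spacing.

```python
import math

def refine(xs, u, tol):
    """Insert a midpoint wherever the jump in the discrete slope exceeds tol."""
    out = [xs[0]]
    for i in range(1, len(xs) - 1):
        slope_left = (u(xs[i]) - u(xs[i - 1])) / (xs[i] - xs[i - 1])
        slope_right = (u(xs[i + 1]) - u(xs[i])) / (xs[i + 1] - xs[i])
        if abs(slope_right - slope_left) > tol:  # crude error indicator
            out.append(0.5 * (xs[i - 1] + xs[i]))
        out.append(xs[i])
    out.append(xs[-1])
    return out

u = lambda x: math.tanh(20.0 * (x - 0.5))  # sharp layer near x = 0.5, like a shock profile
xs = [i / 10.0 for i in range(11)]         # uniform initial grid on [0, 1]
for _ in range(3):
    xs = refine(xs, u, tol=1.0)
```

After three passes, every added point lies near the layer at x = 0.5 and the far field is untouched, which is the behavior wanted from an adapted mesh: resolution only where the error indicator demands it.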
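The weighted-sum scalarization mentioned above can be shown on a two-objective toy problem with a known answer; the objectives f1 and f2 are purely illustrative stand-ins for competing quantities such as drag and negative lift.

```python
def f1(x):
    return (x - 1.0) ** 2  # first objective, minimized at x = 1 (illustrative)

def f2(x):
    return (x + 1.0) ** 2  # competing objective, minimized at x = -1

def weighted_sum_argmin(w1, w2, lo=-2.0, hi=2.0, n=40001):
    """Brute-force minimizer of the scalarized objective w1*f1 + w2*f2.

    Setting d/dx [w1*(x-1)^2 + w2*(x+1)^2] = 0 gives the exact answer
    x* = (w1 - w2) / (w1 + w2), which the scan should reproduce.
    """
    best_x, best_v = lo, float("inf")
    for i in range(n):
        x = lo + (hi - lo) * i / (n - 1)
        v = w1 * f1(x) + w2 * f2(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x
```

Sweeping the weights traces out compromise designs between the two single-objective optima, which is exactly how the weighted-sum method would explore the trade-off between, say, lift and drag.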
Bibliography

[1] FEMLAB: An Introductory Course. Available from the FEMLAB website, http://www.femlab.com, 2002.
[2] W.F. AMES. Numerical Methods for Partial Differential Equations, Second Edition. Academic Press, New York, 1977.
[3] W.K. ANDERSON, J.C. NEWMAN III, D.L. WHITFIELD AND E.J. NIELSEN. Sensitivity analysis for the Navier-Stokes equations on unstructured grids using complex variables. AIAA Paper 99-3294, June 1999.
[4] W.K. ANDERSON AND V. VENKATAKRISHNAN. Aerodynamic design optimization on unstructured grids with a continuous adjoint formulation. AIAA Paper 97-0643, January 1997.
[5] C. BOIVIN. Infrastructure for Solving Generic Multiphysics Problems. Ph.D. thesis, Department of Mechanical Engineering, University of British Columbia, 2003.
[6] C. BOIVIN AND C.F. OLLIVIER-GOOCH. Guaranteed-quality triangular mesh generation for domains with curved boundaries. Int. J. Num. Meth. Eng., 55:1183-1213, 2002.
[7] C. BOIVIN AND C.F. OLLIVIER-GOOCH. A generic finite-volume solver for multiphysics problems I: Field coupling. In Proceedings of the Tenth Annual Conference of the Computational Fluid Dynamics Society of Canada, pp. 49-54, June 2002.
[8] C. BOIVIN AND C.F. OLLIVIER-GOOCH. A generic finite-volume solver for multiphysics problems II: Interface coupling. In Proceedings of the Tenth Annual Conference of the Computational Fluid Dynamics Society of Canada, pp. 55-60, June 2002.
[9] C. BOIVIN AND C.F. OLLIVIER-GOOCH. A toolkit for numerical simulations of PDEs: II. Generic solution of multiphysics problems. To appear in Comput. Methods Appl. Mech. Engrg.
[10] J.F. BONNANS, J.C. GILBERT, C. LEMARECHAL AND C. SAGASTIZABAL. Optimisation numerique. Springer-Verlag, Paris, 1998.
[11] C. DE BOOR. A Practical Guide to Splines. Springer-Verlag, New York, 1978.
[12] A.M. BRUASET AND H.P. LANGTANGEN. A comprehensive set of tools for solving partial differential equations: Diffpack. In M. Daehlen and A. Tveito, editors, Numerical Methods and Software Tools in Industrial Mathematics, pp. 63-92, Birkhauser, 1997.
[13] M. CASTRO-DIAZ, F. HECHT AND B. MOHAMMADI. Anisotropic grid adaptation for inviscid and viscous flows simulations. Int. J. Num. Meth. Fluids, 25:475-493, 1995.
[14] A. CAUCHY. Methode generale pour la resolution des systemes d'equations simultanees. C. R. Acad. Sci. Paris, 25:535-538, 1847.
[15] Y. CRISPIN. Aircraft conceptual optimization using simulated evolution. AIAA Paper 94-0092, January 1994.
[16] A. DADONE, B. MOHAMMADI AND N. PETRUZZELLI. Incomplete sensitivities and BFGS methods for 3D aerodynamic shape design. R.R. INRIA 3633, 1999.
[17] P.J. DAVIS AND P. RABINOWITZ. Methods of Numerical Integration. Academic Press, New York, 1975.
[18] J.E. DENNIS JR. AND R.B. SCHNABEL. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Englewood Cliffs, 1983.
[19] R. DUVIGNEAU AND M. VISONNEAU. Shape optimization of incompressible and turbulent flows using the simplex method. AIAA Paper 2001-2533, 2001.
[20] J. ELLIOTT AND J. PERAIRE. Aerodynamic optimization on unstructured meshes with viscous effects. AIAA Journal, 35:1479-1485, 1997.
[21] G. FARIN. Curves and Surfaces for Computer Aided Geometric Design. Academic Press, New York, 1990.
[22] A.V. FIACCO AND G.P. MCCORMICK. Nonlinear Programming: Sequential Unconstrained Minimization Techniques. John Wiley & Sons, New York, 1968.
[23] A. FORTIN. Analyse numerique pour ingenieurs. Editions de l'Ecole Polytechnique de Montreal, Montreal, 1995.
[24] J.C. GILBERT, G. LE VEY AND J. MASSE.
La differentiation automatique de fonctions representees par des programmes. R.R. INRIA 1557, November 1991.
[25] M.B. GILES. Aerospace design: A complex task. In Inverse Design and Optimization Methods 1997-05, von Karman Institute for Fluid Dynamics, Brussels, April 1997.
[26] P.E. GILL, W. MURRAY AND M.H. WRIGHT. Practical Optimization. Academic Press, London, 1981.
[27] A. GRIEWANK, D. JUEDES, H. MITEV, J. UTKE, O. VOGEL AND A. WALTHER. ADOL-C: A package for the automatic differentiation of algorithms written in C/C++. Version 1.8.2, March 1999.
[28] A. HABBAL. Nonsmooth shape optimization applied to linear acoustics. SIAM Journal on Optimization, 8(4):989-1006, 1998.
[29] B.T. HELENBROOK. Mesh deformation using the biharmonic operator. Int. J. Num. Meth. Eng., 56(7):1007-1021, 2003.
[30] V.B. HAMMER AND N. OLHOFF. Topology optimization of continuum structures subjected to pressure loading. Struct. Multidisc. Optim., 19:85-92, 2000.
[31] R.M. HICKS, E.M. MURMAN AND G.N. VANDERPLAATS. An assessment of airfoil design by numerical optimization. NASA-TM-X-3092, 1974.
[32] A. JAMESON. Aerodynamic design via control theory. Journal of Scientific Computing, 3:233-260, 1988.
[33] A. JAMESON, N.A. PIERCE AND L. MARTINELLI. Optimum aerodynamic design using the Navier-Stokes equations. Theoretical and Computational Fluid Dynamics, 10:213-237, 1998.
[34] A. JAMESON AND J.C. VASSBERG. Studies of alternative numerical optimization methods applied to the Brachistochrone problem. Computational Fluid Dynamics Journal, 9:281-296, 2000.
[35] W.H. JOU, W.P. HUFFMAN, D.P. YOUNG, R.G. MELVIN, M.B. BIETERMAN, C.L. HILMES AND F.T. JOHNSON. Practical considerations in aerodynamic design optimization. AIAA Paper 95-1730, June 1995.
[36] A.
KURUVILA, S. TA'ASAN AND M.D. SALAS. Airfoil design and optimization by the one-shot method. AIAA Paper 95-0478, 1995.
[37] E. LAPORTE AND P. LE TALLEC. Numerical Methods in Sensitivity Analysis and Shape Optimization. Birkhauser, Boston, 2003.
[38] A. MARROCCO. Simulations numeriques dans la fabrication des circuits a semiconducteurs. R.R. INRIA 0305, 1984.
[39] A.L. MARSDEN, M. WANG AND B. MOHAMMADI. Shape optimization for aerodynamic noise control. Center for Turbulence Research Annual Briefs, pp. 241-247, 2001.
[40] G. MEDIC, B. MOHAMMADI, N. PETRUZZELLI AND M. STANCIU. 3D optimal shape design for complex flow: application to turbomachinery. Amer. Inst. of Aeron. Astro. 99-0833:1-8, 1999.
[41] B. MOHAMMADI. A new optimal shape design procedure for inviscid and viscous turbulent flows. International Journal for Numerical Methods in Fluids, 25:183-203, 1997.
[42] B. MOHAMMADI, R. BHARADWAJ, J.I. MOLHO AND J.G. SANTIAGO. Incomplete sensitivities in design and control of fluidic channels. Center for Turbulence Research Annual Briefs, pp. 249-258, 2002.
[43] B. MOHAMMADI AND O. PIRONNEAU. Applied Shape Optimization for Fluids. Oxford University Press, Oxford, 2001.
[44] S.K. NADARAJAH AND A. JAMESON. A comparison of the continuous and discrete adjoint approach to automatic aerodynamic optimization. AIAA Paper 2000-0667, 2000.
[45] A. NEJAT AND C. OLLIVIER-GOOCH. A high-order accurate unstructured GMRES solver for Poisson's equation. In Proceedings of the Eleventh Annual Conference of the Computational Fluid Dynamics Society of Canada, pp. 344-349, 2003.
[46] M. NEMEC. Optimal Shape Design of Aerodynamic Configurations: A Newton-Krylov Approach. Ph.D. thesis, University of Toronto, 2003.
[47] M. NEMEC AND D.W. ZINGG.
Towards efficient aerodynamic shape optimization based on the Navier-Stokes equations. AIAA Paper 2001-2532, 2001.
[48] J. NOCEDAL AND S.J. WRIGHT. Numerical Optimization. Springer-Verlag, New York, 1999.
[49] C.F. OLLIVIER-GOOCH. ANSLib: A scientific toolkit supporting rapid numerical solution of practically any PDE. In Proceedings of the Eighth Annual Conference of the Computational Fluid Dynamics Society of Canada, pp. 21-28, 2000.
[50] C.F. OLLIVIER-GOOCH AND M. VAN ALTENA. A high-order accurate unstructured mesh finite-volume scheme for the advection-diffusion equation. Journal of Computational Physics, 181(2):729-752, 2002.
[51] C.F. OLLIVIER-GOOCH. A toolkit for numerical simulation of PDEs: I. Fundamentals of generic finite-volume simulation. Comput. Methods Appl. Mech. Engrg., 192:1147-1175, 2003.
[52] O. PIRONNEAU. Optimal Shape Design for Elliptic Systems. Springer-Verlag, New York, 1984.
[53] Y. SAAD AND M.H. SCHULTZ. A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM Journal on Scientific and Statistical Computing, 7:856-869, 1986.
[54] J.A. SAMAREH. A survey of shape parameterization techniques. NASA/CP-1999-209136/PT1, pp. 333-343, 1999.
[55] W. SQUIRE AND G. TRAPP. Using complex variables to estimate derivatives of real functions. SIAM Review, 40(1):110-112, 1998.
[56] W.H. SWANN. Direct search methods. In Numerical Methods for Unconstrained Optimization, W. Murray, editor, Academic Press, pp. 13-28, London and New York, 1972.
[57] M. VAN ALTENA. High-Order Finite-Volume Discretisations for Solving a Modified Advection-Diffusion Problem on Unstructured Triangular Meshes. M.A.Sc. thesis, Department of Mechanical Engineering, University of British Columbia, 1999.
[58] G.N. VANDERPLAATS. Numerical Optimization Techniques for Engineering Design.
McGraw-Hill, New York, 1984.
[59] X. WANG AND M. DAMODARAN. Aerodynamic shape optimization using computational fluid dynamics and parallel simulated annealing algorithms. AIAA Journal, 39(8):1500-1508, 2001.
[60] D.C. WILCOX. Basic Fluid Mechanics. DCW Industries, Anaheim, 2000.

