UBC Theses and Dissertations
Synthesis of optimal controllers for a class of aerodynamical systems, and the numerical solution of nonlinear optimal control problems
Sutherland, James William
In Part I, a method is developed for determining the optimal control laws for a class of aerodynamical systems whose dynamics are linear in the thrust and nonlinear in the lift and thrust angle. Because of the linear thrust control, a singular subarc exists along which it is often possible to eliminate the Lagrange multipliers from the control equations. Conditions under which this elimination is possible are derived, and expressions for the thrust and for the rates of change of lift and thrust angle are obtained that depend only on the state variables and a small number of time-invariant parameters. The optimal values of the unknown parameters are determined by a direct search in parameter space for the set that minimizes the system performance function. As a result, the proposed method is considerably simpler than standard numerical techniques, which require a separate search in function space for each component of the control vector. Furthermore, since the control vector is generated by the direct solution of differential equations, the method appears suitable for use with in-flight guidance computers. Several numerical examples are presented, covering one-, two-, and three-dimensional control. In each case it is shown that the search in multi-dimensional function space can be replaced by an equivalent search in the parameter space of initial conditions.

In Part II, a three-stage numerical algorithm is developed for a general class of optimal control problems. The technique is essentially a combination of the direct and indirect approaches. As in the indirect approach, the control-law equations are used to eliminate the control vector from the system and adjoint equations. However, instead of attempting to solve the resulting two-point boundary-value problem directly, the augmented performance function is first regarded as a function of the unknown initial conditions and is minimized by a gradient search in the initial-condition space.
It is shown that it is sufficient to search over the surface of any sphere for its intersection with the line μλ₀*, where λ₀* is the classical solution for the initial values. As a result, this search is not dependent on a good initial estimate of the optimal trajectory, and it is therefore used in the first two stages of the proposed algorithm to provide rapid initial convergence. Rapid final convergence is obtained by employing either a modified method of matching end points or a method of determining the optimal step size for the gradient method of the first two stages. Either combination results in a three-stage numerical algorithm that has good initial convergence, good final convergence, and requires storage at terminal points only. Several examples are presented, involving both bounded and unbounded control.
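The parameter-space idea of Part I can be sketched on a toy problem. This is a hypothetical scalar system, not the thesis's aircraft model: if the control is generated directly from the state and a single unknown parameter `p`, the search in function space collapses to a direct search over `p`. The function names and the grid-search routine here are illustrative choices.

```python
# Hypothetical sketch of Part I's idea: the control u = -p * x is generated
# from the state and one time-invariant parameter p, so optimizing the
# control function reduces to a direct search over the scalar p.

def simulate(p, x0=1.0, dt=0.01, T=2.0):
    """Integrate xdot = u with the assumed control law u = -p * x (forward
    Euler) and return the performance J = integral of (x^2 + u^2) dt."""
    x, J = x0, 0.0
    steps = int(round(T / dt))
    for _ in range(steps):
        u = -p * x               # control generated from state + parameter
        J += (x * x + u * u) * dt
        x += u * dt              # Euler step of xdot = u
    return J

def direct_search(lo=0.0, hi=5.0, n=200):
    """Coarse direct search in the one-dimensional parameter space."""
    best_p, best_J = lo, simulate(lo)
    for i in range(1, n + 1):
        p = lo + (hi - lo) * i / n
        J = simulate(p)
        if J < best_J:
            best_p, best_J = p, J
    return best_p, best_J

p_opt, J_opt = direct_search()
```

For this quadratic toy problem the search settles near the well-known infinite-horizon gain p = 1; the point is only that no function-space iteration is needed once the control is parameterized.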
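The first-stage search of Part II — sweeping directions on the surface of a sphere and then scaling along the ray μλ₀* — can likewise be sketched on a toy two-point boundary-value problem. The double-integrator model, horizon, and grid sizes below are illustrative assumptions, not the thesis's examples.

```python
import math

def shoot(lam0, T=1.0, dt=0.01):
    """Forward-Euler shooting for the toy system x1' = x2, x2' = u with
    u = -lam2 (minimizing the integral of u^2 / 2), x(0) = (1, 0), and the
    adjoint equations lam1' = 0, lam2' = -lam1 started from the guess lam0.
    Returns the terminal error ||x(T)|| for the target x(T) = (0, 0)."""
    x1, x2 = 1.0, 0.0
    l1, l2 = lam0
    steps = int(round(T / dt))
    for _ in range(steps):
        u = -l2
        x1, x2 = x1 + x2 * dt, x2 + u * dt
        l2 -= l1 * dt            # lam1 stays constant
    return math.hypot(x1, x2)

def sphere_ray_search(n_dir=120, n_mu=100, mu_max=30.0):
    """Sweep unit directions d on the sphere (here a circle) and scale each
    by mu, evaluating the shooting error along the ray mu * d."""
    best_err, best_lam = float("inf"), (0.0, 0.0)
    for i in range(n_dir):
        th = 2.0 * math.pi * i / n_dir
        d = (math.cos(th), math.sin(th))
        for j in range(1, n_mu + 1):
            mu = mu_max * j / n_mu
            err = shoot((mu * d[0], mu * d[1]))
            if err < best_err:
                best_err, best_lam = err, (mu * d[0], mu * d[1])
    return best_err, best_lam
```

For this toy problem the exact adjoint initial values are λ₀* = (12, 6), and the coarse sphere-plus-ray grid lands near that ray with no prior estimate of the trajectory, which is the rapid-initial-convergence property the abstract describes.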