UBC Theses and Dissertations

Multiple objective control with applications to teleoperation. Hu, Zhongzhi, 1996.


Multiple Objective Control with Applications to Teleoperation

By Zhongzhi Hu

B.Sc., Beijing Institute of Technology, Beijing, China, 1983
M.Sc., Beijing Institute of Technology, Beijing, China, 1986
M.A.Sc., University of British Columbia, Vancouver, Canada, 1992

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF ELECTRICAL ENGINEERING

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
November 1996
(c) Zhongzhi Hu, 1996

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering
The University of British Columbia
Vancouver, Canada

Abstract

Control system design inevitably involves tradeoffs among different and even conflicting performance and robustness specifications. This thesis deals with some multiple objective control system design problems and with applications to teleoperation systems.

First, the multiple objective linear-quadratic optimal control problem is solved. By using duality theory, this minimax problem is transformed into a convex optimization problem. In particular, the infinite-time problem is shown to be an optimization problem involving linear matrix inequalities.

Second, the multiple objective H∞ control problem for SISO systems is studied. Nonsmooth analysis is used to characterize optimality conditions for this problem.
Under these conditions, either all-pass properties or the optimal performance values are obtained.

Third, numerical solutions of the general multiple objective control system design problem by either convex or non-convex optimization are presented. First, the convex optimization design procedure is described, the effectiveness of the cutting-plane based solver is demonstrated, and some computational issues are discussed. Then a non-convex optimization design procedure is proposed, in which an approximation to the free transfer function in the Q-parametrization is introduced. It is shown, by design examples, that it has the advantage of directly producing low-order controllers.

Last, the robust controller design problem for teleoperation systems is investigated. First, a two-port ideal teleoperation model is proposed. It is shown that the model scales both positions and forces, yet is stable when terminated by any strictly passive hand and environment impedances. A transparency measure is then defined as the H∞-distance to the ideal teleoperator model. Using a four-channel control structure, the controller design problem is formulated as a multiobjective optimization problem maximizing transparency subject to robust stability for all passive environments. This problem is shown to be convex in the design parameters for a fixed hand impedance. To demonstrate the design procedure, this thesis treats the design of a controller for a simple one degree-of-freedom model of a motion-scaling teleoperation system. Both simulations and experiments have been carried out to show the effectiveness of the proposed controller design methodology.

Table of Contents

Abstract
List of Tables
List of Figures
Notation
Acknowledgments
Chapter 1  Introduction
  1.1  Historical Overview
  1.2  Literature Review
  1.3  Thesis Outline
Chapter 2  Multiple Objective Control Problems
  2.1  Introduction
  2.2  Problem Formulation
    2.2.1  Parametrization of stabilizing controllers
    2.2.2  Problem statement
  2.3  Analytical Solutions
  2.4  Numerical Solutions
  2.5  Discussion
Chapter 3  Multiple Objective LQ Control Problem
  3.1  Introduction
  3.2  Formulation of the Problem
  3.3  Linear Quadratic Problem
  3.4  Solution via Convex Optimization
  3.5  Solution via Linear Matrix Inequalities
  3.6  Concluding Remarks
Chapter 4  Multiple Objective H∞ Control Problem
  4.1  Introduction
  4.2  Analytical Results via Nonsmooth Analysis
    4.2.1  Some preliminaries
    4.2.2  Main results
  4.3  Numerical Solution via LMI
  4.4  Concluding Remarks
Chapter 5  Numerical Solutions of Multiple Objective Control Problems
  5.1  Introduction
  5.2  Solution via Convex Optimization
    5.2.1  Linear approximation of Q
    5.2.2  Convex optimization via sequential quadratic programming methods
    5.2.3  Convex optimization via cutting-plane algorithms
    5.2.4  Design examples
    5.2.5  Computational aspects
  5.3  Solution via Non-convex Optimization
    5.3.1  Nonlinear approximation of Q
    5.3.2  Computational aspects
    5.3.3  Numerical examples
  5.4  Concluding remarks
Chapter 6  Teleoperation Controller Design Problem
  6.1  Introduction
  6.2  Background on Stability and Passivity
  6.3  An Ideal Teleoperator
  6.4  Robust Controller Design
    6.4.1  Teleoperation controller structure
    6.4.2  Performance measures and stability constraints
    6.4.3  Controller design problem formulations
    6.4.4  Numerical solutions
  6.5  Design Example
    6.5.1  Identification of the hand impedance
    6.5.2  Tradeoff between performance and stability robustness
    6.5.3  Simulation results
    6.5.4  Experimental results
  6.6  Concluding Remarks
Chapter 7  Conclusions
  7.1  Contributions
  7.2  Future Work
References
Appendix A  Cutting-plane Algorithms
  A.1  Elements of Convex Analysis
  A.2  KCP Algorithm
  A.3  EMCP Algorithm
Appendix B  Linear Matrix Inequalities
  B.1  Linear Matrix Inequalities
  B.2  Software for Solving LMI Problems

List of Tables

Table 3.1  Example 3.1: Solution of a multiple objective LQ control problem
Table 4.1  Example 4.2: Solution results
Table 5.1  Example 5.1: Results via constr, KCP and EMCP
Table 5.2  Example 5.2: Results via minimax, KCP and EMCP
Table 5.3  Example 5.4: Solution properties
Table 5.4  Example 5.4: Results via convex and non-convex optimization
Table 5.5  Example 5.5: Solution properties
Table 5.6  Example 5.5: Results via convex and non-convex optimization
Table 6.1  Simulation results: transmitted impedances to the hand
Table 6.2  Experimental results: transmitted impedances to the hand

List of Figures

Figure 2.1  Example 2.1: A feedback system with multiplicative plant perturbation
Figure 2.2  General feedback systems
Figure 3.1  Example 3.2: The level curves of the overall objective: (a) the mesh surface, and (b) the contour
Figure 3.2  Example 3.2: The level curves of the multiplier λ1: (a) the mesh surface, and (b) the contour
Figure 3.3  Example 3.2: The level curves of the multiplier λ2: (a) the mesh surface, and (b) the contour
Figure 4.1  Example 4.2: Weighted objective functions |T1(jω)| (dotted line) and |T2(jω)| (solid line) when N = 14
Figure 5.1  Example 5.1: A feedback system with additive plant perturbation
Figure 5.2  Example 5.1: Robust stability constraint
Figure 5.3  Example 5.2: A unity feedback control system
Figure 5.4  Example 5.2: Weighted objective functions |T1(jω)| (dotted line) and |T2(jω)| (solid line) associated with Q
Figure 5.5  Example 5.2: Sensitivity function Ter (solid line) and its bounding function h (dotted line) associated with Q
Figure 5.6  Example 5.2: Robustness function Tur (solid line) and its bounding function h (dotted line) associated with Q
Figure 5.7  Example 5.2: Weighted objective functions |T1(jω)| (dotted line) and |T2(jω)| (solid line) associated with Q
Figure 5.8  Example 5.2: Number of constraints in KCP and EMCP
Figure 5.9  Example 5.5: Weighted objective functions |T1(jω)| (dotted line) and |T2(jω)| (solid line) using parameter Q from (5.45)
Figure 6.1  General teleoperation system
Figure 6.2  2n-port representation of a teleoperation system
Figure 6.3  An n-port network
Figure 6.4  A representation of an LTI one-port network, Y, coupled to Z
Figure 6.5  Two-port representation of ideal scaled teleoperation
Figure 6.6  Another representation of ideal scaled teleoperation
Figure 6.7  A four-channel control structure
Figure 6.8  General feedback system
Figure 6.9  Performance vs distance to passivity for three values of force and motion scaling
Figure 6.10  Maximum singular value frequency responses of Y (solid line) and Y (circle)
Figure 6.11  Nyquist plot of Y
Figure 6.12  Design example: simulation diagram of a motion-scaling system
Figure 6.13  Simulation results: motion scaling and force scaling with a soft environment E = 2s + 200
Figure 6.14  Simulation results: motion scaling and force scaling with a stiff environment E = 5s + 500
Figure 6.15  Simulation results: motion scaling and force scaling with a time-varying environment, which switches at 3, 6, 9, and 12 seconds
Figure 6.16  Simulation results: transmitted impedances to the hand, Z (solid line) and Z (dash-dotted line), with a soft environment E = 2s + 200
Figure 6.17  Simulation results: transmitted impedance to the hand, Z (solid line) and Z (dash-dotted line), with a stiff environment E = 5s + 500
Figure 6.18  Design example: experiment diagram of a motion-scaling system
Figure 6.19  Design example: experimental setup
Figure 6.20  Experimental results: motion scaling and force scaling with a soft environment E = 2s + 200
Figure 6.21  Experimental results: motion scaling and force scaling with a stiff environment E = 5s + 500
Figure 6.22  Experimental results: motion scaling and force scaling with a time-varying environment, which switches at 3, 6, 9, and 12 seconds
Figure 6.23  Experimental results: transmitted impedances to the hand, Z (solid line) and Z (dash-dotted line), with a soft environment E = 2s + 200
Figure 6.24  Experimental results: transmitted impedance to the hand, Z (solid line) and Z (dash-dotted line), with a stiff environment E = 5s + 500

Notation

f : X → Y    A function from the set X into the set Y.
≜    Equals by definition.
□    End of proof.
∂φ(x)    The subdifferential of the function φ at the point x.
m̄    The index set {1, 2, ..., m}.
R    The real numbers.
R+    The nonnegative real numbers.
R^m    The vector space of m-component real vectors.
C    The complex numbers.
C+    The open right half plane, {s ∈ C : Re s > 0}.
C^m    The vector space of m-component complex vectors.
C^(m x n)    The vector space of m x n complex matrices.
D    The open unit disc, {z ∈ C : |z| < 1}.
Re{(.)}    The real part of (.).
L1    Lebesgue space of integrable functions.
L2    Lebesgue space of square-integrable functions.
L∞    Lebesgue space of essentially bounded functions.
H2    Hardy space of square-integrable functions.
H∞    Hardy space of essentially bounded functions.
prefix R    Real rational subset of, e.g., RH∞.
||.||2    Norm on L2.
||.||∞    Norm on L∞.
S⊥    Orthogonal complement of the subspace S.
Sm    The unit simplex in R^m, Sm = {μ ∈ R^m : μi ≥ 0, Σ(i=1..m) μi = 1}.
A ≥ 0    The n x n complex symmetric matrix A is positive semidefinite, i.e., z*Az ≥ 0 for all z ∈ C^n.
A > 0    The n x n complex symmetric matrix A is positive definite, i.e., z*Az > 0 for all nonzero z ∈ C^n.
A^T    Transpose of the matrix A.
A*    Complex-conjugate transpose of the matrix A.
A^(1/2)    A symmetric square root of a matrix A = A* ≥ 0, i.e., A^(1/2) A^(1/2) = A.
A~(s)    A(-s)^T.
inf    The infimum of a real-valued function on a set.
sup    The supremum of a real-valued function on a set.
σ̄(A)    The maximum singular value of a matrix A, equal to the square root of the largest eigenvalue of A*A.
[A, B, C, D]    The transfer matrix corresponding to the state-space equations x' = Ax + Bu, y = Cx + Du, i.e., [A, B, C, D] = D + C(sI - A)^(-1) B.

Acknowledgments

I would like to express my deep gratitude to my supervisor, Professor Tim Salcudean, for suggesting the subject of this thesis, and for his invaluable support and inspiring guidance throughout this research.
I would also like to extend my appreciation to my co-supervisor, Professor Phillip Loewen, for his many insightful suggestions and helpful discussions. I am grateful to many past and present colleagues for their technical help and discussions, and for their encouragement. Last, but by no means least, I would like to thank my wife Lei Jin for her love, support and encouragement.

This thesis is dedicated to my sons, Jinwei and Youja, to whom I owe the most.

Chapter 1
Introduction

In this chapter we first give a brief historical overview of control system design, describing why we have to deal with multiple objective control problems; we then survey the literature on some particular multiple objective control problems; and finally we present the thesis outline.

§1.1 Historical Overview

From the ancient water clock to today's space probes and automated manufacturing plants, control systems have played a key role in technological and scientific development. The ultimate goal in control system design is to achieve optimal performance while maintaining stability, under perhaps substantial system uncertainty and under design constraints imposed by the technology. Quite often, performance and stability are two different and even conflicting features. Therefore most practical controller design problems are multiple objective in nature.

Control systems are designed and analyzed using mathematical models, arrived at from physical laws or perhaps from measured data. Mathematical models inevitably give an imperfect description of physical systems, and, in any case, the parameters involved in such a description are often subject to variation and uncertainty. Feedback can be used to reduce the effect of this uncertainty on the system behavior. Indeed, the ability of feedback to counteract the effects of uncertainty is generally the primary motivation for its use. However, feedback comes with several potential disadvantages.
In particular, large gains in a feedback system usually cause the system to become unstable. Therefore determining just how much feedback can safely be applied is a key issue in control system design.

Early efforts to use feedback ran into exactly this difficulty, and it motivated much of the early development of so-called classical control theory. Classical design methodologies dealt with linear time-invariant scalar systems and proceeded by shaping the gain and phase of the open-loop transfer function to modify the feedback properties of the closed-loop system. Several techniques for synthesizing feedback control systems that meet explicit specifications on disturbance attenuation and that have low sensitivity to large parameter variations are presented in Horowitz's book [1]. These techniques are based on root-locus and frequency response shaping, but are limited to single-input single-output systems. Furthermore, they are highly dependent on the specific structure of the problem and demand a large amount of the designer's time because they are not easily implemented on computers.

The linear-quadratic-Gaussian (LQG) approach dominated the field in the 1960s and early 1970s [2]. If the dynamic model is assumed to be exact and the disturbances are assumed to be white noise Gaussian processes, then the control law that minimizes the expected value of a quadratic form in the errors is called an LQG controller. However, there is no guarantee of robustness for LQG controllers [3]. For example, a small perturbation of the plant dynamics or the occurrence of control disturbances may produce an unstable closed-loop system. This is because the LQG design method takes no account either of uncertainties in the dynamic model or of departures from uniformity in the power spectrum of the disturbance. Both factors are, of course, inevitable in any practical problem.
A good control system design procedure should therefore take account of the uncertainty inherent in a mathematical model, and quantify the effect of feedback on uncertainty. This motivated the search for quantitative measures of the size of uncertainty. A suitable measure that can incorporate both signal gain and robustness to uncertainty is the H∞-norm.

The synthesis of controllers using H∞ optimization methods was initiated in 1981 by Zames [4]. It represents one of the major recent advances in control system design. Put simply, the problem is to design a controller that achieves internal stability and minimizes the H∞-norm of a closed-loop transfer matrix. Zames's work dealt with some of the basic questions of classical control theory, and immediately attracted a great deal of attention. It was soon extended to a wide range of sensible robust controller synthesis problems, in particular when it was recognized that the approach allows robustness to be incorporated more directly than other methods. The intensive research activity that followed has produced valuable results in both theory and practice [5, 6, 7, 8, 9, 10].

Despite its success and consequent popularity, H∞ control theory has some disadvantages, which were quickly recognized in the control community. One serious disadvantage is that the theory can only cope with the problem of minimizing the H∞-norm of a single transfer matrix. Therefore any problem with several, perhaps competing, performance and robustness specifications must be restated in terms of just one objective in the frequency domain through the use of weighting functions. Selecting weighting functions that capture the essence of each specification, in the absence of a systematic procedure, may turn out to be quite hard and time consuming, and worse still, the conservatism introduced by this approximation may render the final results useless.
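As an aside, the H∞-norm that serves as this measure can be computed directly from a state-space model. The sketch below uses the standard Hamiltonian bisection test (a textbook algorithm, not a method from this thesis), and the lightly damped second-order system is an assumed example: for a stable G(s) = C(sI - A)^(-1)B, a level gamma exceeds ||G||∞ exactly when the associated Hamiltonian matrix has no purely imaginary eigenvalues.

```python
import numpy as np

# H-infinity norm of G(s) = C (sI - A)^{-1} B by bisection on the Hamiltonian
# eigenvalue test (D = 0 case): gamma > ||G||_inf  iff  H(gamma) below has no
# purely imaginary eigenvalues.  System data is assumed for illustration.
A = np.array([[0.0, 1.0], [-1.0, -0.2]])    # lightly damped oscillator
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def has_imag_eig(gamma):
    H = np.block([[A, (B @ B.T) / gamma**2],
                  [-C.T @ C, -A.T]])
    return np.any(np.abs(np.linalg.eigvals(H).real) < 1e-6)

lo, hi = 1e-6, 1e3                          # bracket for the norm
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if has_imag_eig(mid) else (lo, mid)

hinf_norm = hi
print(hinf_norm)                            # ~5.03 for this example
```

For this system the peak gain agrees with the resonance formula 1/(2*zeta*sqrt(1 - zeta^2)) with zeta = 0.1, which is a useful sanity check on the bisection.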
It should be noted that the same problem also arises in the LQG design method discussed above, in the selection of the weighting matrices.

In practice, control system design almost inevitably involves tradeoffs among competing objectives. It is often the case that the controller is required to enhance several different performance and robustness criteria, and not all of these can be improved simultaneously. For example, it is intuitively clear that to obtain a greater stability margin, the performance of the control system will likely need to be compromised. Therefore it is desirable to formulate the controller synthesis problem using multiple design objectives rather than to cast the problem in terms of a single cost function.

§1.2 Literature Review

The research on multiple objective control problems is very rich and broad. This literature review will centre only on:

• the multiple objective linear quadratic (LQ) control problem,
• the multiple objective H∞ control problem,
• numerical solutions of multiple objective control problems, and
• controller design for teleoperation systems.

Multiple objective LQ control problem

Much attention has been paid to LQ control in control theory, largely due to its wide applications and its mathematical elegance and tractability. The use of multiple objectives is well justified by many real-world control problems, for example, in aircraft control system design [11, 12], in the control of space structures [13], and in industrial process control [14, 15, 16]. Various multiobjective LQ control formulations and schemes have thus been motivated and investigated. There are mainly two research directions: 1) characterization of the set of noninferior solutions (e.g., [14, 17, 18]), and 2) search for a specific noninferior solution, for example, the minimax solution [19, 20], the ideal point [21], and the hierarchical ordering [22]. For the first direction, the selection of the final implemented control is left unsolved. For the second direction, technical difficulties arise in the search process due to the involved dependency of the matrix Riccati differential equations (RDE) or algebraic Riccati equations (ARE) on a weighting coefficient vector. Recently, however, the convex duality and optimization method has emerged as a powerful tool to solve the multiobjective LQG problem [23, 24] and the non-convex multiobjective linear quadratic regulator problem for the infinite time horizon case [24].

Multiple objective H∞ control problem

As discussed in the last section, H∞ control theory plays an essential role in designing robustly stable control systems (see, e.g., [4, 6, 7, 25] and references therein). However, control system design is constrained not only by stability robustness but by other performance criteria as well. As shown in [26, 23], many design specifications can be quantified using closed-loop frequency responses. Therefore, the control system designer needs to find a controller such that several closed-loop frequency responses are simultaneously small in an appropriate sense. This problem may be cast as a multiple objective H∞ control problem, also referred to as a multi-disc H∞ problem. In complete generality, the multiple objective H∞ control problem has only been solved numerically by convex optimization [26, 23], and general analytical solutions are not available. However, some qualitative properties of the optimum for related problems have been reported in [27, 28, 29, 30, 31]. Holohan and Safonov [29, 30] studied the nominal loop shaping problem (NLSP) for SISO systems, which they transformed into a two-disc H∞ problem, and then used function space duality theory to obtain bounds and an all-pass property. For the case where the plant is both stable and minimum phase, they determined the optimal achievable performance.
Zames and Owen [28, 31] studied another form of the two-disc H∞ problem for both SISO and MIMO systems, which arose from the optimal robust disturbance attenuation problem (ORDAP). Their solutions have also been shown to satisfy a flatness, or all-pass, condition.

Numerical solutions of multiple objective control problems

Despite great progress in control theory and enormous advances in computing power, the design of an LTI controller which meets multiple and conflicting performance and robustness specifications can still be quite a challenge. Only for a few very special cases are there analytic methods for finding the exact form of the trade-offs among different specifications [32, 29]. In the last decade, however, optimization-based design methods [33, 34, 35, 36, 26, 23, 24] have emerged as powerful tools in multiple objective control system design. Most numerical methods proceed by solving approximate problems. These can be categorized as non-convex optimization approaches, such as approximate scalarization [37], the U-parametrization [38, 39], iterative H∞ optimization [40], and semi-infinite optimization [41], and convex optimization approaches, such as the Q-parameter design [36, 26, 23, 24]. The advantage of the non-convex approach is that it normally yields low-order controllers when it succeeds. Its major drawback is that it is not guaranteed to find a solution if one exists, nor is it guaranteed to find the global optimum of the objective function. The advantage of the convex optimization approach is that it always finds a solution if one exists. However, the parameter space is usually very large, which may produce two problems in the design process. First, it easily runs into computational problems, and therefore an efficient optimization solver is needed in the design. Second, it generally yields high-order controllers, which must then be judiciously reduced in order to be feasible in practice.
Transparency performance of these teleoperation systems are not presented.  Some  experimental testing of the passivity concept in [46] reveals that the stability guarantee comes at the expense of the reduced stiffness, resulting in poor transparency [52]. The problem of achieving performance is also difficult, in large part because performance specifications are also likely to change with the environment. However, various control schemes have been concentrated on performance. Objectives based on specifying network theory hybrid parameters are discussed in [53, 54]; the network hybrid parameter design problem in [55] is formulated in terms of a transparency objective, and suggests a 7  Chapter 1. Introduction  position-position approach; a four-channel control structure has been suggested to achieve transparency in [45] and ideal response of teleoperation in [48]; Hoo-optimization theory has been used to best shape the interested closed-loop responses in [56, 44] and to shape the relationships between forces and positions at both ends of the teleoperator in [57]. However none of the above work has explicitly incorporated robust stability to large changes in hand and environment variation into the controller design. Both robust stability and performance are treated in [50] and [49] respectively. In [50], a combined Hoo-optimization and /j-synthesis framework are used to design a teleoperator which is stable for a pre-specified time delay and fixed operator and environment impedances while optimizing performance specifications.  Concepts of  passivity, impedance control and Hoo-optimization theory are used in [49] to formulate the controller design as a semi-infinite optimization problem. Two criteria, "transparency distance", and "passivity distance" [58] are employed. 
Unfortunately, both resulting designs are not convex in design parameters, and therefore the limit of the achievable performance and the exact trade-offs between stability and performance can not be obtained.  § 1.3 Thesis Outline This thesis is concerned with control system design with multiple objectives for linear time-invariant (LTI) systems only. We first study two particular multiple objective control problems, then we develop numerical solutions to a general multiple objective control system design, and finally, in a practical problem of multiple objective control system design, we develop robust controller methodologies for teleoperation systems. The thesis consists of seven chapters and two appendices. An outline of the thesis is given as follows. 8  Chapter 1. Introduction  Chapter 2: Multiple Objective Control Problems.  We formulate the multiple objec-  tive control problem as an optimization problem, which in most cases is convex but nondifferentiable, and review previous analytical and numerical work on this problem. Chapter 3: Multiple Objective L Q Control Problem.  We solve the multiple objec-  tive L Q optimal control problem, where the functional to be minimized is the maximum of several quadratic performance indices. By using duality theory, the minimax problem is transformed into a convex optimization problem. Particularly for the infinite time horizon case, it can be reduced into a convex optimization problem only involving a linear matrix inequality (LMI). These two optimization problems can be efficiently solved numerically by cutting-plane based solvers or L M I based solvers. The solution to the multiple objective L Q control problem shows that it depends on the initial states and is an open-loop control optimal only for the particular initial conditions. Chapter 4: Multiple Objective Hoo Control Problem. 
We study the multiple objective H∞ control problem for SISO systems, in which the functional to be minimized is the maximum of several H∞-norm performance indices. The problem is convex but nondifferentiable. Nonsmooth analysis is used to derive optimality conditions for this problem. Under these conditions, either all-pass properties or the optimal performance values can be obtained. Since the optimal solution to this problem may be of infinite order in some situations, to obtain a realizable controller, i.e., a real-rational, proper and stable controller, it is suggested that only suboptimal solutions be pursued. One approximate solution approach based on LMIs is presented, in which H∞-norm constraints are transformed into LMIs using the Bounded Real Lemma.

Chapter 5: Numerical Solutions of Multiple Objective Control Problems. We present convex and non-convex optimization based solutions to the general multiple objective control system design problem. We first describe the convex optimization design procedure, demonstrate the effectiveness of a cutting-plane based solver, and discuss some computational issues arising in the design process. Then we propose a non-convex optimization design procedure, in which an approximation for the free transfer function in the Q-parametrization is proposed. Since the starting guess is vital for non-convex optimization, three schemes for updating the starting guess are proposed to improve computational efficiency. We show by several design examples that this approach has the advantage of directly producing low-order controllers.

Chapter 6: Teleoperation Controller Design Problem. We investigate a practical multiple objective control system design problem: the construction of robust controllers for teleoperation systems. These are inherently MIMO systems with large uncertainties.
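The finite-dimensional approximation of the free Q-parameter that Chapter 5 relies on can be sketched numerically. In the sketch below, all transfer functions and the basis are assumed for illustration, and a least-squares fit on a frequency grid stands in for the thesis's minimax optimization: once Q is restricted to the span of fixed stable basis functions, a model-matching error T1 - T2*Q becomes linear in the coefficients.

```python
import numpy as np

# Ritz-type approximation of the free parameter: Q(s) = sum_k x_k q_k(s)
# with stable basis q_k(s) = 1/(s+3)^k.  Fitting T2*Q to T1 on a frequency
# grid is then a linear least-squares problem in the coefficients x.
w = np.logspace(-2, 2, 400)
s = 1j * w
T1 = 1.0 / (s + 1.0)                    # target closed-loop response (assumed)
T2 = 1.0 / (s + 2.0)                    # fixed factor multiplying Q (assumed)

def fit_error(n):
    basis = np.stack([(1.0 / (s + 3.0))**k for k in range(n)], axis=1)
    Phi = T2[:, None] * basis           # columns: T2(jw) q_k(jw)
    A = np.vstack([Phi.real, Phi.imag]) # stack real/imag parts -> real LS
    b = np.concatenate([T1.real, T1.imag])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.max(np.abs(T1 - Phi @ x)) # worst-case mismatch on the grid

errors = [fit_error(n) for n in (2, 4, 8)]
print(errors)                           # mismatch shrinks as the basis grows
```

The shrinking error with a growing basis illustrates the trade-off noted above: a richer Q-parameter space improves the achievable objective but enlarges the optimization problem and, ultimately, the controller order.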
A controller design approach for teleoperation systems that optimizes performance subject to robust stability for all passive environments is proposed. With the four-channel control structure, the controller design problem can be formulated as a multiple objective optimization problem, which is shown to be convex using the Youla parametrization. The robust controller is obtained by solving the optimization problem with the cutting-plane based solver described in Chapter 5. To demonstrate the design procedure, we design a controller for a simple one degree-of-freedom (DOF) system model of a force-reflecting and motion-scaling teleoperation system. Simulations and experiments were carried out to show the effectiveness of the proposed design methodology.

Chapter 7: Conclusions. In the last chapter we summarize the contributions of this thesis and suggest some future research.

Appendix A: Cutting-plane Algorithms. In this appendix, we first describe some of the basic tools of convex analysis, and then summarize Kelley's cutting-plane algorithm for solving convex optimization problems, and its refinements by Elzinga and Moore.

Appendix B: Linear Matrix Inequalities. In this appendix, we give a short introduction to linear matrix inequalities, and some software for solving linear matrix inequality problems.

Chapter 2
Multiple Objective Control Problems

We first define multiple objective control problems, then briefly review some existing analytical and numerical solutions, and finally offer some discussion and remarks.

§ 2.1 Introduction

Control system design is most often a study of tradeoffs between different and even conflicting performance and robustness specifications. Most of these specifications can be quantified using closed loop frequency responses. For instance, questions such as how large the controlled signals (e.g., tracking errors, weighted control inputs, etc.)
can get in response to a given class of exogenous inputs (e.g., load disturbances, command inputs, sensor noises, etc.) and how much modeling uncertainty can be tolerated before the closed loop system becomes unstable may be precisely quantified using a closed loop frequency response. Therefore the control problem becomes that of finding a controller such that some desired measures of closed loop transfer matrices are below specified levels. This is the class of multiobjective control problems that we shall consider in this thesis. Before we formally define this problem, we shall give an example illustrated in Figure 2.1 to show that multiple objective control problems arise naturally in control systems design.

Figure 2.1. Example 2.1: A feedback system with multiplicative plant perturbation

Example 2.1: Robust Control Problem. The controlled plant is modeled as (I + ΔW1)P, where P denotes the nominal plant transfer matrix, Δ the modeling uncertainty, and W1 a known weighting function that reflects size and directional properties of the uncertainty at each frequency. The objective is to design a controller K such that the closed loop system is internally stable for every plant in P = {(I + ΔW1)P : ||Δ||∞ ≤ 1} and the power of the nominal (i.e., Δ = 0) plant output y_p is below some pre-specified level in response to an exogenous signal d which is either white noise or an unknown signal with bounded power. More specifically, after introducing a suitable frequency dependent scaling W2 of the controlled output y_p, the control problem is to find a controller K that

(i). internally stabilizes every plant in P, or equivalently [59, 60], ||W1 P K (I + PK)^{-1}||∞ < 1; and

(ii). keeps either the H2-norm or the H∞-norm of the weighted closed loop transfer matrix from d to y_p below some pre-specified level, i.e.,

    ||W2 (I + PK)^{-1}||_2 < 1    or    ||W2 (I + PK)^{-1}||∞ < 1.
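For a scalar instance, both conditions can be estimated numerically by scanning the closed loop frequency responses on a grid. The sketch below does this for the H∞ form of (i) and (ii); the plant P, the weights W1 and W2, and the static candidate controller K are illustrative choices, not data from the thesis.

```python
# Frequency-grid check of conditions (i) and (ii) (H-infinity form) for
# a scalar instance of Example 2.1.  P, W1, W2 and the candidate
# controller K below are illustrative choices, not thesis data.

def peak_gain(tf, ws):
    """Crude H-infinity estimate: peak magnitude over a frequency grid."""
    return max(abs(tf(1j * w)) for w in ws)

P  = lambda s: 1.0 / (s + 1.0)        # stable nominal plant
W1 = lambda s: 0.5 * s / (s + 10.0)   # uncertainty weight (high-pass)
W2 = lambda s: 5.0 / (s + 5.0)        # performance weight (low-pass)
K  = lambda s: 2.0                    # candidate (static) controller

ws = [10.0 ** (k / 50.0) for k in range(-150, 151)]   # 1e-3 .. 1e3 rad/s

rob  = peak_gain(lambda s: W1(s) * P(s) * K(s) / (1 + P(s) * K(s)), ws)
perf = peak_gain(lambda s: W2(s) / (1 + P(s) * K(s)), ws)
print(rob, perf)   # both below 1 for this instance, so (i) and (ii) hold
```

A grid estimate of this kind lower-bounds the true H∞ norm, so it is a sanity check rather than a certificate; a fine, logarithmically spaced grid is usually adequate for smooth scalar responses.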
The combination of (i) and (ii) is a multiobjective control problem.

§ 2.2 Problem Formulation

For general control design problems, we use the basic configuration of feedback systems shown in Figure 2.2, where G is the generalized plant, which has absorbed all weighting functions, with two sets of inputs: the exogenous inputs w = (w1^T w2^T ... w_{n_w}^T)^T, which may consist of disturbances and commands, and the control inputs u. The plant G also has two sets of outputs: the measured outputs y and the controlled outputs z = (z1^T z2^T ... z_{n_z}^T)^T. Note that the input channels w_i and output channels z_j may be vector-valued. K is the controller to be designed. It is assumed that G and K are real-rational and proper. Under the decomposition

    G = [ G11  G12
          G21  G22 ]                                                 (2.1)

the equations corresponding to Figure 2.2 take the form

    z = G11 w + G12 u,
    y = G21 w + G22 u,                                               (2.2)
    u = K y.

We will see later that this setup is very convenient for analyzing some specific properties of the closed loop system or designing the controller K such that the closed loop system is stable and the output signal z is specified, i.e., some performance is satisfied.

Figure 2.2. General feedback systems

§ 2.2.1 Parametrization of stabilizing controllers

The parametrization of all internally stabilizing controllers was first introduced by Youla et al. [61], and is often referred to as the Youla-parametrization or the Q-parametrization. The central idea is to parametrize all controllers in terms of a free stable transfer matrix Q such that closed loop stability is guaranteed for any choice of Q ∈ RH∞ and the closed loop transfer matrices of interest are affine in the function Q ∈ RH∞. A more recent version of the parametrization using coprime factorization due to Desoer et al. [62] is summarized in the following theorem.
Theorem 2.1 [62]: Let

    G22 = N2 M2^{-1} = M̃2^{-1} Ñ2                                   (2.3)

be a right coprime factorization (rcf) and a left coprime factorization (lcf) of G22 over RH∞, respectively. Then

(a). The set of all (proper real-rational) transfer matrices K stabilizing G is parametrized by

    K = (Y2 - M2 Q)(X2 - N2 Q)^{-1},     det(X2 - N2 Q)(j∞) ≠ 0,
      = (X̃2 - Q Ñ2)^{-1}(Ỹ2 - Q M̃2),   det(X̃2 - Q Ñ2)(j∞) ≠ 0,   (2.4)

for Q ∈ RH∞, where X2, Y2, X̃2, Ỹ2 ∈ RH∞ satisfy

    [ X̃2  -Ỹ2 ] [ M2  Y2 ]   [ I  0 ]
    [ -Ñ2  M̃2 ] [ N2  X2 ] = [ 0  I ].                              (2.5)

(b). With K given by (2.4) the transfer matrix T_zw from w to z equals T1 + T2 Q T3, where

    T1 = G11 + G12 M2 Ỹ2 G21,
    T2 = -G12 M2,                                                    (2.6)
    T3 = M̃2 G21.

Under the hypotheses above, Ti ∈ RH∞ (i = 1, 2, 3).

Remarks:

(i). For each proper real-rational matrix G22, eight RH∞-matrices satisfying the equations in (2.3) and (2.5) can be explicitly constructed by using state-space formulas as shown in [63, 64, 59, 7]. Note that there are many ways of finding the rcf and lcf of G22.

(ii). As a special case, suppose G22 is already stable, i.e., G22 ∈ RH∞. Then in (2.5) we may take

    N2 = Ñ2 = G22,  X2 = M2 = I,  X̃2 = M̃2 = I,  Y2 = 0,  Ỹ2 = 0,   (2.7)

in which case the formulas in (2.4) and (2.6) become simply

    K = -Q(I - G22 Q)^{-1} = -(I - Q G22)^{-1} Q,                    (2.8)

and

    T_zw = G11 - G12 Q G21,  Q ∈ RH∞.                                (2.9)

(iii). An important point to note is that the closed-loop transfer matrix is simply an affine function of the controller parameter Q ∈ RH∞.
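The stable-plant special case can be checked numerically: with K as in (2.8), closing the loop equations (2.2) around a scalar example reproduces the affine form (2.9) pointwise on the imaginary axis. All transfer functions below are illustrative scalar choices, not taken from the thesis.

```python
# Pointwise check of the stable-plant special case (2.8)-(2.9): the
# closed loop equals the affine form G11 - G12*Q*G21 on the imaginary
# axis.  All transfer functions below are illustrative scalar choices.

def check(s):
    G11 = 1.0 / (s + 2.0)
    G12 = 1.0 / (s + 1.0)
    G21 = 2.0 / (s + 3.0)
    G22 = 1.0 / (s + 1.0)       # stable G22, so (2.7) applies
    Q   = 0.5 / (s + 4.0)       # any Q in RH-infinity

    K = -Q / (1.0 - G22 * Q)    # controller from (2.8)
    # loop equations (2.2): u = K*y, y = G21*w + G22*u, hence
    # z = (G11 + G12 * K * (1 - K*G22)^{-1} * G21) w
    T_loop = G11 + G12 * K / (1.0 - K * G22) * G21
    T_affine = G11 - G12 * Q * G21              # affine form (2.9)
    return abs(T_loop - T_affine)

errs = [check(1j * w) for w in (0.0, 0.1, 1.0, 10.0, 100.0)]
print(max(errs))   # agreement to machine precision
```

The identity holds because (I - K G22)^{-1} K collapses to -Q for this choice of K, which is exactly what makes the closed loop affine, hence convex, in Q.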
§ 2.2.2 Problem statement

The multiple objective control problem is defined as follows: given m closed-loop transfer matrices T_k (k ∈ m), each relating some input w_i (i ∈ n_w) to some output z_j (j ∈ n_z) in Figure 2.2, find a controller K such that K internally stabilizes G and minimizes the maximum of the various norms of the transfer matrices T_k (k ∈ m); that is, solve the following optimization problem:

    (P2.1):   min_{K stabilizing G}  max_{k ∈ m}  ||T_k||_{norm_k},  (2.10)

where, for each k, ||·||_{norm_k} denotes one kind of norm in ||·||_1, ||·||_2, ||·||_∞, etc.

Applying the Q-parametrization of stabilizing controllers shown in Theorem 2.1 reveals that the T_k (k ∈ m) are affine matrix functions of Q ∈ RH∞ [7, 65, 23], i.e.,

    T_k = T_{1,k} + T_{2,k} Q T_{3,k},  k ∈ m,                       (2.11)

where T_{1,k}, T_{2,k} and T_{3,k}, k ∈ m, are known RH∞-matrices, which can be obtained directly from the plant data G. So problem (P2.1) is equivalent to the following convex optimization problem:

    (P2.2):   min_{Q ∈ RH∞}  max_{k ∈ m}  ||T_{1,k} + T_{2,k} Q T_{3,k}||_{norm_k}.   (2.12)

More generally, we can formulate the problem directly from the design specifications. As shown in [65, 23], a controller design problem can be cast as a set of functional inequalities that must be satisfied. A flexible approach is to divide the performance specifications into two classes:

    φ_i(Q) ≤ a_i,  i ∈ n,                                            (2.13)

    ψ_j(Q) ≤ b_j,  j ∈ m,                                            (2.14)

where a_i ∈ R+ (i ∈ n), b_j ∈ R+ (j ∈ m), and φ_i(·) : RH∞ → R, i ∈ n, and ψ_j(·) : RH∞ → R, j ∈ m, are convex but perhaps nondifferentiable. The inequalities (2.13) are soft constraints or specifications about which the designer is flexible; and the inequalities (2.14) are hard constraints or specifications that must be met.
Thus the design of control systems can also be carried out by solving the following convex optimization problem:

    (P2.3):   min_Q { φ(Q) | ψ(Q) ≤ 0 },                             (2.15)

where φ(Q) = max{φ_i(Q) - a_i, i ∈ n} and ψ(Q) = max{ψ_j(Q) - b_j, j ∈ m}.

Example 2.2: We consider the same problem as that in Example 2.1 using the framework in Figure 2.2. To get the generalized plant G we first define w = (w1^T w2^T)^T, where w1 denotes the output of the perturbation Δ and w2 = d; z = (z1^T z2^T)^T; y = e; and the control input u; then

    G = [ 0    0    W1 P
          W2   W2   W2 P
          -I   -I   -P  ].                                           (2.16)

Suppose, for simplicity, that the nominal plant P is stable, real-rational and proper. Using the Q-parametrization in (2.8), we have

    T_{z1 w1} = W1 P K (I + PK)^{-1} = W1 P Q,   and                 (2.17)

    T_{z2 w2} = W2 (I + PK)^{-1} = W2 (I + PQ).                      (2.18)

Therefore the robust control problem discussed in Example 2.1 can be formulated as convex optimization problems in the form of (P2.2):

    (P2.4):   γ = min_{Q ∈ RH∞} max { ||W1 P Q||∞, ||W2(I + PQ)||_2 },   (2.19)

or

    (P2.5):   γ = min_{Q ∈ RH∞} max { ||W1 P Q||∞, ||W2(I + PQ)||∞ }.    (2.20)

Clearly, if we have γ < 1, then we get a solution to the robust control problem.

§ 2.3 Analytical Solutions

There are no analytical solutions to the multiple objective control problems formulated in (2.12) and (2.15). Only a few have been obtained for some simpler related problems. For example, control problems with multiple objectives in the H2 sense have been handled by assigning a cost functional to each specific design objective and by merging all the objectives into a "global" cost functional by a weighted sum [17]. The resulting optimal control law, which is seldom optimal from any single design objective point of view, brings the global functional to a minimum.
A well known example of this merging of objectives is the standard linear quadratic regulator problem [66], where the weighted sum of the L2-norms of the controlled output and the control input is optimized. The advantage of the weighted sum approach is two-fold: first, the control law that results is Pareto-optimal [17]. Second, the weighted sum approach enhances the tractability of the design problem since it reduces to the intensively researched optimization problem of a single cost functional. The major drawback of this approach is that there is no apparent way of determining the summation weights in advance. These weights control the trade-offs that are made between conflicting objectives, and their choice involves a judicious intuition in a "cut and try" sequence. This intuition is lost quickly when the number of weights to be determined increases.

Holohan and Safonov [29, 30] studied the two-objective control problem (P2.5) as in (2.20) for SISO systems. By using the concept of the Cartesian product of vector spaces, they converted this problem into a geometrical one: find the vector belonging to a certain subspace lying closest to another given vector. Function space duality theory is used to develop the duality relations for the problem. Bounds are established for the general case. For the case where the plant is both stable and minimum phase, the optimal achievable performance (i.e., the numerical value of the minimum in (2.20)) is explicitly determined. However, the optimal controller has not been obtained in closed form. Recently, Zames and Owen [31] [67] studied the dual form of this problem, but for MIMO systems, using Banach space duality theory. Again, only a flatness or all-pass condition is obtained. To get the controller, numerical methods are still required.
Several analytical results [32, 68, 69, 70] have also been obtained for solving the following so-called mixed H2/H∞ problem:

    min_{stabilizing K} { ||T_{z1 w1}||_2 : ||T_{z2 w2}||∞ ≤ 1 },    (2.21)

where T_{zi wi}, i = 1, 2, denote the closed loop transfer matrices from the exogenous input w_i to the controlled output z_i. Bernstein and Haddad [32] studied a special case of (2.21). With the restriction w = w1 = w2, they obtained necessary conditions for optimality of controllers of a pre-specified (full or reduced) order. Doyle et al. [68] and Zhou et al. [71] considered the dual of the Bernstein and Haddad type problem. They obtained necessary and sufficient conditions for the existence of an optimal controller (see also [72]). These conditions were given in terms of coupled nonlinear matrix equations of the Riccati type. Khargonekar and Rotea [70] have shown that in the state feedback case one can come arbitrarily close to the optimal mixed H2/H∞ performance measure using constant gain state feedback, and that the output feedback problem can be reduced to a state feedback problem; moreover, the state feedback problem can be converted into a convex optimization problem over a bounded subset of real matrices. Unfortunately, as we can see, all these results still demand numerical methods.

§ 2.4 Numerical Solutions

As discussed before, the Q-parametrization of stabilizing controllers allows the multiple objective design problem to be formulated as a convex optimization problem in H∞ such as (2.12) or (2.15). This problem is infinite-dimensional in the sense that Q cannot in general be specified by a finite number of parameters. However it can be approximated by a finite-dimensional convex optimization problem in the following manner: fix N basis functions Q_i (i ∈ N) in RH∞, and then choose

    Q(s) = Σ_{i=1}^{N} x_i Q_i(s),                                   (2.22)

where x = [x1, ..., x_N]^T is a vector of new design parameters. This approximation of Q(s) is known as the "Ritz approximation" [23].
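Once Q is restricted to a finite Ritz expansion, the design becomes a finite-dimensional nonsmooth convex program, and cutting-plane solvers of the kind summarized in Appendix A need only a value/subgradient oracle. The sketch below runs Kelley's method on a toy piecewise-linear convex objective; the data are made up purely for illustration, and scipy's LP solver handles each relaxation.

```python
# Kelley's cutting-plane method on a toy nonsmooth convex objective
# f(x) = max_i (a_i . x + b_i); the data are made up purely to
# illustrate the value/subgradient oracle that such solvers need.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0], [-1.0, 0.5], [0.0, -1.0], [2.0, -1.0]])
b = np.array([0.0, 1.0, 0.5, -0.3])

def oracle(x):
    """Return f(x) and one subgradient (a row achieving the max)."""
    vals = A @ x + b
    k = int(np.argmax(vals))
    return vals[k], A[k]

box = [(-5.0, 5.0), (-5.0, 5.0)]
x, cuts, ub = np.zeros(2), [], np.inf
for _ in range(50):
    fx, g = oracle(x)
    ub = min(ub, fx)                       # best value found so far
    cuts.append((g, fx - g @ x))           # cut: g.x + c <= t
    A_ub = np.array([[g0, g1, -1.0] for (g0, g1), _ in cuts])
    b_ub = np.array([-c for _, c in cuts])
    # LP relaxation in (x, t): minimize t subject to all cuts and the box
    res = linprog([0.0, 0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=box + [(None, None)])
    lb, x = res.x[2], res.x[:2]            # lower bound and next iterate
    if ub - lb < 1e-9:
        break
print(ub, lb)
```

The converging upper bound (best evaluated value) and lower bound (LP relaxation value) are exactly the guaranteed-accuracy feature noted below for this class of algorithms.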
It can be shown, by using Weierstrass' Approximation Theorem [73] as done in [65, 23], that any Q ∈ RH∞ can be uniformly approximated arbitrarily closely by the Ritz approximation in the form of (2.22). However, since there is no systematic way available to choose the basis functions, a large number of parameters is normally needed to obtain a satisfactory approximation.

By using the Ritz approximation, problem (P2.3) in (2.15) reduces to a convex optimization problem in the finite-dimensional vector space R^N:

    min_{x ∈ R^N} { f(x) | g(x) ≤ 0 },                               (2.23)

where f(·) : R^N → R and g(·) : R^N → R are convex but perhaps nondifferentiable. For example, (2.23) approximates (2.15) when

    f(x) = max_{k ∈ n} { φ_k(Q(s)) : Q(s) = Σ_{i=1}^{N} x_i Q_i(s) },   and

    g(x) = max_{k ∈ m} { ψ_k(Q(s)) : Q(s) = Σ_{i=1}^{N} x_i Q_i(s) }.

There are several simple but powerful algorithms specifically designed for the finite-dimensional convex optimization problem (2.23), e.g., the cutting-plane method [74, 75] and the ellipsoid method [76, 77]. These algorithms require only the ability to compute the function value and a single subgradient for f and g at any given point x. Therefore they are easy to implement. A key feature of these algorithms is that they maintain converging upper and lower bounds on the desired minimum value, and thus can compute this quantity to a guaranteed accuracy. There are also some more efficient methods, such as interior-point methods [78]. They have been successfully applied to general convex optimization problems, especially problems involving linear matrix inequalities (LMIs) [24].

§ 2.5 Discussion

Despite some progress on multiple objective control problems, they have turned out to be very difficult and seem unlikely to have a complete analytical solution. Even
though there exist some analytical solutions to some particular problems, they still demand efficient numerical algorithms.

The convex optimization approach offers interesting possibilities in the design of multiple objective control systems. It enables us to solve control problems with complex specifications, including hard constraints on various loop response parameters, in the frequency domain, the time domain and various operator norms. The main limitation of this approach is that it generates high order controllers. However, the advent of cheap, high performance digital processors has substantially reduced the relevance of controller order. Also, even if a controller designed by this approach is not implemented, knowledge of whether particular loop shaping specifications can or cannot be achieved is very valuable information to the designer. The designer then knows exactly how much performance is given up by using a lower order controller obtained by some other design method.

Chapter 3
Multiple Objective LQ Control Problem

A minimax approach, based on convex duality and optimization, is presented for solving the multiple objective LQ optimal control problem. For the infinite time horizon case, the minimax solution of this problem can also be obtained by solving linear matrix inequalities (LMIs).

§ 3.1 Introduction

In this chapter, we shall pursue the minimax solution to the multiple objective linear-quadratic optimal control problem, where the functional to be minimized is the maximum of several quadratic performance indices. In the previous work done in [19, 20, 79], technical difficulties arise in the search process due to the involved dependency of the matrix Riccati differential equations (RDE) or algebraic Riccati equations (ARE) on a weighting coefficient vector. We will show that by using convex duality and optimization methods the search process can be efficiently completed.
Although the method used here is in essence the same as in [23, 24], the problems to be solved are in different formulations. More importantly, the minimax solution for the finite time horizon problem has not been obtained before by convex optimization.

The organization of this chapter is structured as follows. In the following section, the formulation of the problem is presented. In Section 3, basic results of linear quadratic theory are reviewed; they are instrumental in determining the solution of the problem considered. In Section 4, the multiobjective linear quadratic optimal control problem is solved by convex duality and optimization. In Section 5, the minimax solution for the infinite time horizon case is reduced to solving linear matrix inequalities. Some concluding remarks are given in the final section.

§ 3.2 Formulation of the Problem

Consider the LTI system described by

    ẋ = Ax + Bu,  x(0) = x0,                                         (3.1)

    z_i = [ Q_i^{1/2}  0        ] [ x ]
          [ 0          R_i^{1/2} ] [ u ],  i ∈ m,                    (3.2)

where x is an n-dimensional state vector, u is a p-dimensional control input, and z_i, i ∈ m, are the exogenous outputs. For any u, the m performance indices are defined by

    J_i(u) = (1/2) ||F_i^{1/2} x(t_f)||^2 + (1/2) ∫_0^{t_f} ||z_i||^2 dt,  i ∈ m,   (3.3)

where t_f > 0 is the time horizon. Assume that (A, B) is controllable; that (Q_i^{1/2}, A), i ∈ m, are observable; and that Q_i ≥ 0, R_i > 0, and F_i ≥ 0. These assumptions ensure that each of the objectives J_i, i ∈ m, has a finite minimum value, and indeed this minimum is attained by a linear feedback control law that stabilizes the system [80]. The convexity of each objective J_i, i ∈ m, is also assured under these assumptions.

This model may also be used for determining the feedback strategy in a control problem where m distinct situations may arise, each case being managed by selecting the control minimizing a corresponding performance index.
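Before formulating the minimax problem, note that the competing indices of the form (3.3) can be evaluated numerically for any candidate linear feedback by integrating the closed loop together with the running costs. The sketch below does this for two hypothetical gains on illustrative two-state data (system, weights, horizon and gains are all invented for the demonstration, not thesis data); the two gains yield different index vectors, which is what makes the maximum of the indices a meaningful overall objective.

```python
# Evaluating the competing indices (3.3) for two candidate feedback
# laws u = -K x by numerical integration; all data here (system, Q_i,
# R_i, F_i, gains, horizon) are illustrative, not thesis data.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
x0, tf = [1.0, 0.0], 3.0
Qs = [np.diag([4.0, 0.0]), np.diag([0.0, 4.0])]   # m = 2 objectives
Rs = [1.0, 1.0]
Fs = [np.zeros((2, 2)), np.zeros((2, 2))]

def costs(K):
    def ode(t, s):
        x = s[:2]
        u = -(K @ x)[0]
        dx = A @ x + B[:, 0] * u
        # running terms (1/2)(||Qi^{1/2} x||^2 + ||Ri^{1/2} u||^2)
        dJ = [0.5 * (x @ Q @ x + Rs[i] * u * u) for i, Q in enumerate(Qs)]
        return [*dx, *dJ]
    s = solve_ivp(ode, (0.0, tf), [*x0, 0.0, 0.0], rtol=1e-8).y[:, -1]
    xf = s[:2]
    return [s[2 + i] + 0.5 * xf @ Fs[i] @ xf for i in range(len(Qs))]

J_soft  = costs(np.array([[1.0, 1.0]]))   # low-gain feedback
J_stiff = costs(np.array([[9.0, 6.0]]))   # high-gain feedback
print(J_soft, J_stiff)   # different index vectors, so different max J_i
```

Appending the running costs as extra states is a standard trick that lets one adaptive integrator produce both the trajectory and all m indices in a single pass.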
If it is not known beforehand which of the equally likely situations will arise, a reasonable choice for the overall performance index, J(u), in the control problem is the maximum of the m performance indices. Therefore the multiple objective linear-quadratic control problem (MOLQP) is formulated as a minimax problem:

    min_u  max_{i ∈ m}  J_i(u)
    subject to  ẋ = Ax + Bu,  x(0) = x0.                             (3.4)

§ 3.3 Linear Quadratic Problem

Before proceeding with the solution of the minimax problem MOLQP described above, we shall recall briefly the solution to the linear quadratic problem. The finite time horizon problem is to choose u ∈ L2[0, t_f) so as to minimize a single performance index

    J^{t_f}(u) = (1/2) ||F^{1/2} x(t_f)||^2 + (1/2) ∫_0^{t_f} ( ||Q^{1/2} x||^2 + ||R^{1/2} u||^2 ) dt,   (3.5)

subject to the dynamic constraint (3.1), whereas the infinite time horizon problem is to find u ∈ L2[0, ∞) which minimizes

    J^∞(u) = (1/2) ∫_0^∞ ( ||Q^{1/2} x||^2 + ||R^{1/2} u||^2 ) dt.   (3.6)

We summarize the well-known results as follows.

Lemma 3.1 [66, 80, 81]: Let R > 0, Q ≥ 0, and F ≥ 0. Then the minimum value of J^{t_f} in (3.5) subject to (3.1) is (1/2) x0^T P(0) x0, where P(0) is the solution to the matrix Riccati differential equation (RDE)

    Ṗ(t) + P(t)A + A^T P(t) - P(t) B R^{-1} B^T P(t) + Q = 0,  P(t_f) = F.   (3.7)

The matrix P(t) exists for all t ≤ t_f. The associated optimal control is given by the linear feedback law

    u*(t) = -R^{-1} B^T P(t) x(t),  t ∈ [0, t_f].                    (3.8)

Lemma 3.2 [66, 80, 81]: Let R > 0 and Q ≥ 0. Suppose that (A, B) is controllable and (Q^{1/2}, A) is observable. Then the minimum value of J^∞ in (3.6) subject to (3.1) equals (1/2) x0^T P x0, and is achieved by choosing

    u*(t) = -R^{-1} B^T P x(t),  t ≥ 0,                              (3.9)

where P is the unique positive definite solution of the algebraic Riccati equation (ARE)

    PA + A^T P - P B R^{-1} B^T P + Q = 0.                           (3.10)

Lemma 3.3 [82, 24]: Assume that (A, B) is controllable.
Let

    J* = inf_{u ∈ L2[0, ∞)} { J^∞(u) :  ẋ = Ax + Bu,  x(0) = x0 }.   (3.11)

J* is bounded if and only if there exists a real symmetric matrix P = P^T such that

    [ A^T P + PA + Q   PB ]
    [ B^T P            R  ]  ≥ 0.                                    (3.12)

When J* is bounded, its numerical value can be obtained by solving the following optimization problem with choice of variable P = P^T:

    max { (1/2) x0^T P x0 :  [ A^T P + PA + Q   PB ]
                             [ B^T P            R  ]  ≥ 0 },         (3.13)

and the corresponding optimal control is

    u*(t) = -R^{-1} B^T P* x(t),  t ≥ 0,                             (3.14)

where P* is the solution in (3.13).

Remarks:

(i). The positive semidefiniteness condition in (3.12) is called a linear matrix inequality (LMI) in the parameter P. LMIs can be formulated as convex optimization problems that are amenable to computer solution. Recently the efficient solution of LMIs has attracted considerable interest among researchers [24]. A short introduction to LMIs and the software for solving LMI problems is given in Appendix B.

(ii). The LMI formulation of the LQ problem in Lemma 3.3 can be derived from Theorem 3 of [82] by Willems. The details of the connection between Lemma 3.2 and Lemma 3.3 can be found in [24].

§ 3.4 Solution via Convex Optimization

Before we go further, note that MOLQP is not in a form that can be handled by the usual methods for LQ design. However, MOLQP can be transformed into a convex optimization problem, which can be easily solved by some numerical methods, such as the cutting-plane algorithm [23, 75] in Appendix A.

Define J_λ(u) = Σ_{i=1}^{m} λ_i J_i(u), where λ ∈ Σ_m = { λ ∈ R^m : λ_i ≥ 0, Σ_{i=1}^{m} λ_i = 1 }, the unit simplex in R^m. Then, as proven in [19, 83],

    min_u max_{i ∈ m} J_i(u) = min_u max_{λ ∈ Σ_m} J_λ(u) = max_{λ ∈ Σ_m} min_u J_λ(u).   (3.15)
The first equality comes from the elementary scalarization result for finding the maximum element of an m-vector [19, 83]; and the second equality is obtained by interchanging the order of the min and max operations, which can be justified because J_λ(u) is linear in λ and convex in u [84].

Define

    V(λ) = min_u { J_λ(u) :  ẋ = Ax + Bu,  x(0) = x0 }.              (3.16)

Since V is the minimum of a family of linear functions of λ, it is concave. We can thus pose the MOLQP as a convex optimization problem, which maximizes the concave function V over the compact, convex, finite-dimensional set Σ_m, i.e.,

    max_{λ ∈ Σ_m} V(λ),                                              (3.17)

and it can be solved by using the cutting-plane algorithm in Appendix A. To use the cutting-plane algorithm, we handle the equality constraint

    λ1 + λ2 + ... + λ_m = 1                                          (3.18)

by letting

    (λ1, λ2, ..., λ_{m-1})                                           (3.19)

be our optimization variables and setting

    λ_m = 1 - λ1 - ... - λ_{m-1}.                                    (3.20)

The constraints on the optimization variables are then

    λ1 ≥ 0, ..., λ_{m-1} ≥ 0,  1 - λ1 - ... - λ_{m-1} ≥ 0.           (3.21)

From (3.16), evaluation of the objective function V(λ) at a given vector λ ∈ Σ_m requires the solution of a linear quadratic problem. According to Lemma 3.1,

    V(λ) = (1/2) x0^T P_λ(0) x0,                                     (3.22)

where P_λ(0) is the solution of a single Riccati differential equation

    Ṗ_λ(t) + P_λ(t)A + A^T P_λ(t) - P_λ(t) B R_λ^{-1} B^T P_λ(t) + Q_λ = 0,  P_λ(t_f) = F_λ,   (3.23)

where

    F_λ = Σ_{i=1}^{m} λ_i F_i ≥ 0,  Q_λ = Σ_{i=1}^{m} λ_i Q_i ≥ 0,  R_λ = Σ_{i=1}^{m} λ_i R_i > 0.   (3.24)

The minimizer in the definition of V(λ) for any evaluation point λ is given by

    u_λ(t) = -R_λ^{-1} B^T P_λ(t) x(t),  t ∈ [0, t_f].               (3.25)

A subgradient g_obj ∈ R^{m-1} for the objective V(λ) is easily obtained:

    g_obj = [ J_1(u_λ) - J_m(u_λ)
              ...
              J_{m-1}(u_λ) - J_m(u_λ) ].                             (3.26)

Before evaluating g_obj, we first present the following lemma.
Lemma 3.4: With the control u_λ in (3.25), where K_λ = -R_λ^{-1} B^T P_λ, for each i ∈ m, J_i is given by

    J_i = (1/2) x0^T W_i(0) x0,                                      (3.27)

where W_i = W_i^T satisfies

    Ẇ_i(t) + (A + B K_λ)^T W_i(t) + W_i(t)(A + B K_λ) + K_λ^T R_i K_λ + Q_i = 0,  W_i(t_f) = F_i.   (3.28)

Proof: Define

    H_i(t) = (1/2) x^T(t) W_i(t) x(t) = (1/2) ||F_i^{1/2} x(t_f)||^2 + (1/2) ∫_t^{t_f} ( ||Q_i^{1/2} x||^2 + ||R_i^{1/2} u_λ||^2 ) dτ;   (3.29)

then we have J_i = H_i(0) = (1/2) x0^T W_i(0) x0 and W_i(t_f) = F_i. Differentiating (3.29) with respect to t and using the dynamic constraint (3.1) and the control (3.25), we obtain (3.28). ∎

From Lemma 3.4, J_i (i ∈ m) can be obtained by solving the matrix differential equation (3.28) for each i ∈ m. With the obtained objective values J_i (i ∈ m), the subgradient g_obj can be easily evaluated according to (3.26).

Example 3.1: We solve the following problem taken from [20]:

    J* = min_u max { J_i, i ∈ 3 },                                   (3.30)

subject to ẋ = Ax + Bu, x(0) = x0, where, for each i ∈ 3,

    J_i(u) = (1/2) ||F_i^{1/2} x(t_f)||^2 + (1/2) ∫_0^{t_f} ( ||Q_i^{1/2} x||^2 + ||R_i^{1/2} u||^2 ) dt,   (3.31)
For each given value of A, at each iteration of the cutting-plane algorithm, we first solved the 6th-order differential equation given in (3.23) for P A to evaluate the objective function V(A), and then we solved three 6th-order differential equations given in (3.28) for W , (i = 1,2,3) to evaluate the subgradient of V(X). A l l these differential equations were solved by using the 4th and 5th order RungeKutta formulas. There are only two optimization variables Ai and X in this convex 2  optimization problem. To start the cutting-plane algorithm, the initial values for A were chosen as Ai = 0.3 and X = 0.5; and the objective tolerance e bj and constraint tolerance 0  2  e n were e bj = 1 0 CO  Q  - 1 0  and e  con  = 1 0 . After 32 iterations with 146.6000 seconds of - 6  Sparc 5 cpu time, the algorithm terminated and yielded the optimal objective value V(A*) = 31.3989 and the corresponding maximizers A^ = 0.2839 and A^ = 0.4627. The minimax control u* = u\* can be computed by using (3:25). Table 3.1 presents the iteration process with only first four steps and last four steps. We can see that 30  Chapter 3. Multiple Objective LQ Control Problem  J\, J and J 2  3  are conflicting objectives and approximately approach the same value at  the last iteration.  MO  V(X)  29.1578  38.1917  31.1785  309.0146  18.8556  83193  8.3193  0.0000  46.1859  49.5584  14.7746  17.8111  0.4740  114.5509  21.7482  14.5768  24.6394  i  j  Iteration Ai  A  1  0.3000  0.5000  29.8709  2  0.0000  0.0000  3  0.0967  4  0.0667  2  «M«A)  Step  •  29  0.2838  0.4633  31.4319  31.3847  31.3878  31.3974  30  0.2834  0.4630  31.3990  31.3985  31.3985  31.3988  31  0.2838  0.4628  31.3994  31.3988  31.3985  31.3989  32  0.2839  0.4627  31.3989  31.3989  31.3989  31.3989  Table 3.1. 
Example 3.1: Solution of a multiple objective LQ control problem

§ 3.5 Solution via Linear Matrix Inequalities

When t_f → ∞ and x(t_f) → 0, MOLQP becomes

    min_u  max_{i ∈ m}  (1/2) ∫_0^∞ ( ||Q_i^{1/2} x||^2 + ||R_i^{1/2} u||^2 ) dt
    subject to  ẋ = Ax + Bu,  x(0) = x0.                             (3.34)

Since the infinite time horizon problem in single-objective LQ design is a special H2-norm minimization problem, as shown in [85], the problem considered here is a special case of (P2.1) defined in the previous chapter. As done for the finite time horizon case in the last section, this problem can be solved by the cutting-plane algorithm. The only difference here is that the objective function V(λ) is evaluated by using Lemma 3.2 instead of Lemma 3.1, and the subgradient of V(λ) is computed by solving AREs in (3.10) instead of integrating RDEs in (3.28) at each iteration.

In this section, we present an alternative solution approach based on Lemma 3.3. This is done by avoiding the Riccati equation approach altogether and reformulating the infinite time horizon problem as an LMI-based convex optimization problem, which is summarized in the following theorem.

Theorem 3.4: Let R_i > 0 and Q_i ≥ 0 for i ∈ m. For any λ ∈ Σ_m, define Q_λ = Σ_{i=1}^{m} λ_i Q_i and R_λ = Σ_{i=1}^{m} λ_i R_i. Suppose also that (A, B) is controllable and (Q_i^{1/2}, A), i ∈ m, is observable. The solution to the infinite time horizon problem in (3.34) can be obtained by solving the following LMI-based optimization problem with choice of variables P = P^T and λ:

    max  (1/2) x0^T P x0

    subject to  [ A^T P + PA + Q_λ   PB  ]
                [ B^T P              R_λ ]  ≥ 0,                     (3.35)

                λ1 ≥ 0, ..., λ_{m-1} ≥ 0,  1 - λ1 - ... - λ_{m-1} ≥ 0.

The corresponding minimax control is computed by

    u* = -R_λ^{-1} B^T P* x,                                         (3.36)

where P* is the solution to (3.35).

Proof: The solution of the infinite time problem, by using the scalarization [19, 83] and invoking the saddle-point theorem [84], is given by

    max_{λ ∈ Σ_m}  min_u  (1/2) ∫_0^∞ ( ||Q_λ^{1/2} x||^2 + ||R_λ^{1/2} u||^2 ) dt
    subject to  ẋ = Ax + Bu,  x(0) = x0.                             (3.37)
Note that for any fixed λ ∈ Σ_m, since Q_λ ≥ 0 and R_λ > 0, the integral inside the min is always bounded below by zero. Therefore the minimum over u of the weighted linear quadratic problem, according to Lemma 3.3, is computed by solving the LMI in P = P^T in the form of (3.13). ∎

Example 3.2: We consider the following minimax control problem:

    J* = min_u max { J_i, i ∈ 3 },                                   (3.38)

subject to ẋ = Ax + Bu, x(0) = x0, where, for each i ∈ 3,

    J_i(u) = (1/2) ∫_0^∞ ( ||Q_i^{1/2} x||^2 + ||R_i^{1/2} u||^2 ) dt,   (3.39)

and

    Q_1 = [ 2  1     Q_2 = [  1  -1     Q_3 = [ 1    1.5
            1  1 ],          -1   3 ],          1.5  3   ],

    R_1 = 2,  R_2 = 1,  R_3 = 1,

    A = [ 0  1     B = [ 0
          0  0 ],        1 ].

According to Theorem 3.4, we can easily formulate this problem as an LMI-based optimization problem in the form of (3.35). In total there are 5 optimization variables: the entries p11, p12, p22 of

    P = [ p11  p12
          p12  p22 ],

and the multipliers λ1, λ2. An LMI-based solver, sdpsol, developed at Stanford [86], was used for the optimization. The relative tolerance was chosen as ε_rel = 10^{-6}. The solutions for two different initial states were obtained as follows:

For the initial state x0 = [5  2]^T, sdpsol terminated after 11 iterations. It generated the following results: J* = 55.0208,

    P = [ 3.0806  1.1680
          1.1680  2.4166 ],

λ1 = 0.1680, and λ2 = 0.8320. The minimax control, given by (3.36), is u* = [-1.0000  -2.0690] x. With this control, we can compute the three individual objectives: J_1 = 55.0208, J_2 = 55.0208, and J_3 = 23.7708. Clearly, J_1 and J_2 are conflicting objectives and J_3 is not active in the optimization.
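These figures can be cross-checked without an SDP solver: since for each fixed λ the inner problem is a weighted LQ problem, one can scan the simplex, evaluate V(λ) = (1/2) x0^T P_λ x0 through the weighted ARE, and recover the per-objective costs from Lyapunov equations at the best weights found. The sketch below does this for the x0 = [5, 2]^T case using SciPy; the grid-search itself is only a verification device, not the method of the thesis.

```python
# Cross-check of Example 3.2 (x0 = [5, 2]) without an SDP solver: scan
# the simplex, evaluate V(lambda) = (1/2) x0' P_lam x0 via the weighted
# ARE, then recover per-objective costs from Lyapunov equations.
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

A  = np.array([[0.0, 1.0], [0.0, 0.0]])
B  = np.array([[0.0], [1.0]])
x0 = np.array([5.0, 2.0])
Qs = [np.array([[2.0, 1.0], [1.0, 1.0]]),
      np.array([[1.0, -1.0], [-1.0, 3.0]]),
      np.array([[1.0, 1.5], [1.5, 3.0]])]
Rs = [2.0, 1.0, 1.0]

best_V, best_lam, best_P = -np.inf, None, None
grid = np.arange(0.0, 1.0001, 0.02)
for l1 in grid:
    for l2 in grid:
        l3 = 1.0 - l1 - l2
        if l3 < -1e-12:
            continue
        lam = (l1, l2, max(l3, 0.0))
        Ql = sum(l * Q for l, Q in zip(lam, Qs))
        Rl = np.array([[sum(l * r for l, r in zip(lam, Rs))]])
        P = solve_continuous_are(A, B, Ql, Rl)
        V = 0.5 * x0 @ P @ x0
        if V > best_V:
            best_V, best_lam, best_P = V, lam, P

# per-objective costs J_i = (1/2) x0' W_i x0 under u = -R_lam^{-1} B'P x,
# with W_i solving (A+BK)' W_i + W_i (A+BK) + Q_i + K' R_i K = 0
Rl = np.array([[sum(l * r for l, r in zip(best_lam, Rs))]])
K = -np.linalg.solve(Rl, B.T @ best_P)
Acl = A + B @ K
J = [0.5 * x0 @ solve_continuous_lyapunov(Acl.T, -(Qi + K.T * Ri @ K)) @ x0
     for Qi, Ri in zip(Qs, Rs)]
print(best_V, best_lam, J)   # roughly 55.02 at lam ~ (0.17, 0.83, 0)
```

By linearity of the Lyapunov equation, the weighted sum of the recovered W_i equals P_λ, so Σ λ_i J_i reproduces V(λ) exactly; this is a useful internal consistency check on the computation.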
Again, J1 and J2 are identified as active and J3 is not active in the optimization.

Clearly, the solution to the MOLQP depends on the initial condition. To examine this dependence, we used sdpsol to solve Example 3.2 for a set of sampled initial conditions satisfying −10 ≤ x0 ≤ 10. Fig. 3.1 shows the level curves of the overall performance with respect to the initial conditions. Also in this example, J3 was automatically identified as non-active, and J1 and J2 as active. From the level curves of the multipliers with respect to the initial conditions, shown in Fig. 3.2 and Fig. 3.3, J1 and J2 conflict with each other at those initial conditions for which the multipliers are positive but less than unity.

Figure 3.1. Example 3.2: The level curves of the overall objective: (a) the mesh surface, and (b) the contour

Figure 3.2. Example 3.2: The level curves of the multiplier λ1: (a) the mesh surface, and (b) the contour

§ 3.6 Concluding Remarks

In this chapter, we used convex optimization to obtain the minimax solution of MOLQP. The minimax solution is identified in a two-step iterative scheme. In the first step, the weighted linear quadratic problem is solved and the set of noninferior solutions is generated. In the second step, the weighting coefficients are updated to approach the minimax solution. These two steps are repeated under the control of the cutting-plane method until the desired accuracy is attained. Two examples show that the cutting-plane-based solver is quite effective and can automatically identify the active objectives without prior knowledge of the conflicting situations. For the infinite time horizon case, the minimax solution can also be obtained in a single step by solving an LMI-based convex optimization problem.
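To make the two-step scheme concrete, the following sketch (a minimal illustration, not part of the thesis computations) carries it out for a scalar system, where the weighted algebraic Riccati equation has a closed-form stabilizing root. The system and weighting data below are assumed purely for illustration, and a simple grid search over the weight simplex stands in for the cutting-plane update of the weighting coefficients.

```python
import math

def weighted_lq_value(lam, a=-1.0, b=1.0, x0=2.0, q=(2.0, 1.0), r=(2.0, 1.0)):
    """V(lam): optimal cost of the lam-weighted scalar LQ problem.

    For xdot = a*x + b*u with cost integral of (Q_lam*x^2 + R_lam*u^2),
    where Q_lam = lam*q1 + (1-lam)*q2 and R_lam = lam*r1 + (1-lam)*r2,
    the scalar ARE  2*a*p + Q_lam - (b^2/R_lam)*p^2 = 0  has the stabilizing
    root p = R_lam*(a + sqrt(a^2 + b^2*Q_lam/R_lam))/b^2, and V = p*x0^2.
    """
    Q = lam * q[0] + (1.0 - lam) * q[1]
    R = lam * r[0] + (1.0 - lam) * r[1]
    p = R * (a + math.sqrt(a * a + b * b * Q / R)) / (b * b)
    return p * x0 * x0

# Outer step: V(lam) is concave in the weights, so the minimax value is
# its maximum over the simplex (an interval for m = 2 objectives); a fine
# grid stands in for the cutting-plane update of the weighting coefficients.
lams = [i / 1000.0 for i in range(1001)]
values = [weighted_lq_value(l) for l in lams]
v_star = max(values)
lam_star = lams[values.index(v_star)]
```

For m = 2 the weight simplex is one-dimensional, so the outer maximization of the concave function V(λ) is trivial here; for larger m, the cutting-plane update described above replaces the grid search.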
Since the L M I approach adjusts the matrix P and the weighting coefficients A simultaneously instead of alternating between the two, it is more direct than the discussed two-step scheme. From the numerical examples and from (3.35), the solution to M O L Q P depends on the initial states and is open loop control optimal only for the particular initial conditions. This is different from the solution of the single-objective L Q optimal control problem, where one can always get a feedback control law. Since a number of researchers have solved the single-objective Hoo control problem by using L Q optimal game theory [87, 88, 89], one possible direction for future research is to explore the connections between MOLQP and the multiple objective Hoo control problem and to investigate the possibility of finally solving the multiple objective H^ control problem.  36  Chapter 4 Multiple Objective Control Problem In this Chapter, the solution to the multiple objective Hoo control problem for SISO systems is pursued. Optimality conditions, and, in special cases, either all-pass properties or optimal performance values are obtained via nonsmooth analysis.  One numerical  solution approach based on linear matrix inequality (LMI) formulation is also presented.  § 4.1 Introduction In this chapter, we study the multi-disc Hoo problem only for SISO systems via nonsmooth analysis [90]. This is a special case of (2.10) presented in Chapter 2 and can be rewritten as the following optimization problem (P - ): 4  min max {||a - 6 ^ } ,  1  l<?  (4.1)  where, for convenience, a,, 6, and q are used, respectively, to denote T ^ T i ^ , —T^i, and Q in (2.10). The problems N L S P [29, 30] and O R D A P [28, 31] mentioned in Chapter 2 could be transformed into some special forms of the general problem considered here. 
For example, if we only consider specifications on robust stability and on nominal disturbance attenuation, then the problem (P4.1) in (4.1) reduces to NLSP in the form of (P2.5) in (2.20); if we only consider specifications on robust stability and on robust disturbance attenuation, then the problem (P4.1) in (4.1) reduces to ORDAP. While the results obtained here are similar to the results of Holohan and Safonov [29, 30], our problem is more general and, as we can see from the following sections, nonsmooth analysis also provides a more straightforward solution.

This chapter is structured as follows. In the following section, using nonsmooth analysis [90], optimality conditions, and, in special cases, either all-pass properties or optimal performance values are obtained. In Section 3, an LMI-based solution approach is presented and illustrated by a numerical example. We conclude with some discussions in Section 4.

§ 4.2 Analytical Results via Nonsmooth Analysis

Note that problem (P4.1) in (4.1) is convex and nondifferentiable. Nonsmooth analysis [90] provides a powerful tool for problems with this structure. In this section, we first give some preliminaries about nonsmooth and functional analysis, and then present the main results.

§ 4.2.1 Some preliminaries

We start with a characterization of subgradients for finite max-functions.

Lemma 4.1 [90]: Let Y be a Banach space, and U ⊂ Y be an open convex set. Suppose f_i : U → R, i ∈ N, is a finite collection of functions, each of which is finite-valued and convex on U. Define

f(y) = max{ f_i(y) : i ∈ N },   (4.2)

and denote the set of "active" indices by I(y) = {i ∈ N : f_i(y) = f(y)}. Then the subdifferential of the function f obeys

∂f(y) = co{ ∂f_i(y), i ∈ I(y) } = { Σ_{i∈I(y)} λ_i ξ_i : ξ_i ∈ ∂f_i(y), λ_i ≥ 0, Σ_{i∈I(y)} λ_i = 1 }.   (4.3)

In particular, at any point y where each f_i is differentiable, we may take ξ_i = ∇f_i(y) in (4.3).
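As a small numerical illustration of Lemma 4.1 (the max-function below is assumed purely for this sketch): the subdifferential at a point is the convex hull of the gradients of the active pieces, and 0 ∈ ∂f(y) certifies a minimizer even where f itself is not differentiable.

```python
def max_func_subdifferential(y, funcs, grads, tol=1e-9):
    """Return f(y) = max_i f_i(y) and the gradients of the active pieces;
    by the finite-max rule, df(y) is the convex hull of the returned gradients."""
    vals = [f(y) for f in funcs]
    fy = max(vals)
    active_grads = [g(y) for v, g in zip(vals, grads) if fy - v <= tol]
    return fy, active_grads

# f(y) = max(y^2, (y - 2)^2): at y = 1 both pieces are active, so
# df(1) = co{2, -2} = [-2, 2].  Zero lies in this interval, so y = 1
# minimizes f even though neither active piece has zero gradient there.
funcs = [lambda y: y * y, lambda y: (y - 2.0) ** 2]
grads = [lambda y: 2.0 * y, lambda y: 2.0 * (y - 2.0)]
fy, active_grads = max_func_subdifferential(1.0, funcs, grads)
zero_in_subdiff = min(active_grads) <= 0.0 <= max(active_grads)
```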
Here is an extension of the simpler case above to a max-function with an infinite index set.

Lemma 4.2 [90]: Let Y be a Banach space and f_θ be a family of functions on Y parametrized by θ ∈ T, where T is a compact topological space. Assume that:

(i) for each y ∈ Y, f_θ(y) is continuous as a function of θ, and {f_θ(y) : θ ∈ T} is bounded;

(ii) for each θ ∈ T, f_θ(y) is convex as a function of y, and is Lipschitz in y.

Define a function f : Y → R via

f(y) = max_{θ∈T} f_θ(y).   (4.4)

Then the subdifferential of f is given by

∂f(y) = { ∫_T ∂f_θ(y) μ(dθ) : μ ∈ P[S(y)] }.   (4.5)

Here, S(y) = {θ ∈ T : f_θ(y) = f(y)} denotes the "active set", and for any subset S of T, P[S] signifies the collection of probability Radon measures supported on S.

The following two lemmas in functional analysis are essential to the derivation of the all-pass property in the solution of the multi-disc problem.

Lemma 4.3 [91]: If ν is a Borel measure on T, where T = [0, 2π], such that ∫_T e^{jnθ} dν = 0 for n > 0, then ν is absolutely continuous with respect to Lebesgue measure and there exists f in H1 such that dν = f dθ. If f is a nonzero function in H1, then the set {θ ∈ T : f(e^{jθ}) = 0} has measure zero.

Lemma 4.4 [73]: Suppose μ is a positive measure on a σ-algebra in a Banach space, and g ∈ L1(μ). If dλ = g dμ, then d|λ| = |g| dμ.

§ 4.2.2 Main results

To use nonsmooth analysis, we reduce the space RH∞ over which the minimization is performed in (4.1) to the Banach algebra A0 of RH∞ functions that are continuous on the unit circle [92]. Therefore we shall next solve the following problem:

(P4.2):  min_{q∈A0} max_{i∈m} { ||a_i q − b_i||_∞ }.   (4.6)

In this thesis, the systems are normally represented by their transfer functions, which are functions of the Laplace transform variable s ∈ C+ (open right half plane) or functions of the Z-transform variable z ∈ D (open unit disc).
Here the solution to ( P ' ) will be 4  2  derived using functions defined on the unit disc. Any function F G Ao defined on C  +  can be represented in terms of function / G A o ( D ) , and vice versa, e.g., f(z) = P ( y ^ f ) and F(s) = /(f^y). The conformal map between C  and D preserves all the important  +  properties of F(s) as a bounded analytic function, e.g., f(z) is a bounded analytic function on D and  I i n » = sup | J W I =  sup  weR  0G[O,27r]  | / ( ^ ) | = ||/||oo1  v  (4.7)  n  Let /(•) : A o —> R be given by  f(q) = ma,x{\\a q-b \\ } i  max  i  00  max{|a -& |(e^)}| i9  t  (4.8)  naax j\d{q — b{ \ ( ^ = max max e-?  }  = max fo{q),  where f$ is a family of functions on A o (a Banach space) parametrized by 9 G T = [0,27r] (a compact space), and is given by  f {q) = max {|a e  t9  40  - 6 | (e^) }. t  (4.9)  Chapter 4. Multiple Objective H  T O  Control Problem  Clearly ( P - ) in (4.6) is equivalent to 4  2  (P - ): 4  min/(g), ?€A  3  (4.10)  0  and the optimality condition for ( P  4 3  ) is  (4.11)  Oedf(q).  The trivial case in which some g 6 Ao obeys aiq — 6 — 0 for all i G rahave a 8  solution in A o is excluded in the discussion below. To use the optimality condition (4.11), we need to compute the subdifferential of / by using Lemma 4.1. Before doing this, we check each assumption in Lemma 4.1. First, the continuity of function f$(q) and the boundedness of {fe(q) : 6 G T } for each q G Ao are obvious since a,g — bi (i G ra) are proper and stable transfer functions. Second, the convexity of /<?(<?) can be easily established as follows: Vgi,<?2 G A o , A G [0,1], f (Xqi e  + (1 -  X)q ) 2  = max{|a,(Agi + (1 - X)q ) - bi\} 2  < m&x{X\aiqx - 6,-| + (1 - A)|a^ ~ k\}  (4.12)  2  = X\a q\-h \-\-(\ M  M  -  X)\a q -b \ M  2  M  < Amax{|aigi - 6,1} + (1 - A)max{|a,g 2  = A/*((?i) + (1 -  k\}  X)f (q ), e  2  here, M is the index corresponding to the maximum of m functions X\aiq\ — bl\ + (1 - X)\aiq — bi\,i G m . 
To verify that fo(q) is Lipschitz in q, we first show that for 2  41  Chapter 4. Multiple Objective H  each i G m , and 9 G T , gf(q) =  |atg — 6i|(eje)  w  Control Problem  is Lipschitz:  Vr/i, <72 € A , 0  rf(?l)"rf(92)|  =  k ? l - &i|(eJ*) - |a;g2 - M ( e J < ? ) |  (e '*)  < Kdiqx - bi) - (a,iq -  J  2  = Since each m a x {gf,i  \a \(e>')\q -q \(e> ). 9  i  1  is Lipschitz with constant  G in} is Lipschitz with constant  Lemma 4.5 If 'g\, ...,g  m  constants Ki,...,K  m>  (4.13)  3  HaiU^,  the next result shows that f$ =  m a x { H o i H ^ , i € m}.  are functions on a linear space with respective Lipschitz  then the max-function / — m a x { < ? i , z G ra} is Lipschitz with  constant K = m a x {if,, i £ m}. Proof: For any points p, q, • f(q) =  max{gi(q)}  < max {gi(p) + Ki\q - p\} (4.14) < m a x {gi(p)} + m a x {K{}\q - p\ = f(p) + K\q - p\.  Thus f(q) - f(p) < K\q - p\. But since q and p are arbitrary, we can exchange them to obtain \f(q) — f(p)\ < K\q — p\ as required.  •  Now we proceed to compute the subdifferential of /. By using Lemma 4.2, for any direction h € A o , we have < df(q),h>=  lj<  df ,h>du(9) 9  42  : pL G P[S(q)}V  (4.15)  Chapter 4. Multiple Objective  where S(q) = { O e T : f(q) = From  (4.15),  Control Problem  f (q)}. e  we need to compute the subdifferential of fe(q)- Before doing this by  Lemma 4 . 2 , we notice that fe(q) = max {\aiq — 6t|(e '*)}, and clearly , for each fixed JI  i G m, \aiq — 6,-|(e '*) is finite-valued and convex in q G A o . Therefore the assumptions J  in Lemma 4 . 1 are satisfied. Next we compute the directional directive of g\ for each i G rn.i & G T , as follows: V/i G Ao,t G R, 9i(q'+th)-g i(q) e  <Vg° (q),h> i  = ]im  i |«i/t| + 2*Re{(oig - &i)*a,-ft} 2  2  (4.16)  t^o *(|a,-(g + ifc) - 6i| + |a,-g - 6,-|) Re{(a,'9 - fej)*a,7i} Finally, from Lemma 4 . 1 , the subdifferential of f$(q), for any direction h G A o , is given by <  df (q),h>=  ( 4 . 
1 7 )  0  Therefore, from the optimality condition f$(q), respectively, in  ( 4 . 1 5 )  solves  q  ( 4 . 1 1 )  and the subdifferentials of f(q) and  and ( 4 . 1 7 ) , we have (P - ) 4  3  => o G om such that V / i G A o ,  3p(6) eP[S(q)]  0  -Hi  ai(aiq — bi)*ai Wiq-  Since for a fixed h, (—jh) also satisfies 0  ( 4 . 1 8 ) ,  4  18  am-  we have  cti(a,iq — bi)*ai - &i|  (e *) j  bi\  ( - )  (e*)  (4.19)  The following theorem gives the all-pass properties of the solution, and, in special cases, the optimal performance value. 43  Chapter 4. Multiple Objective  Theorem 4.1: Assume that the simultaneous equations  Control Problem  — b ; = 0 (i G m) have no t  solution q in Ao- If the multi-disc Hoo problem (-P ' ) (m > 2) in (4.6) has a solution 4  2  q G A o , then either (i)  max { \aiq — 6,-| (e^), i G 221} — const. ,0 G T , or  (ii) some 0 G S(if) obeys condition (4.28) below. In this case, if a,-(e '*) 7^ 0 for all J  i G 221, V0 G T , then there exists  £ S  such that  m  TO  ,  i'=i  Proof:  We show first that the following hypothesis implies conclusion (i): cti(aiq — bi)*a,i \a{q-  i=i  bi\  (f)  ± 0,ye  G s(q).  (4.21)  Choose h = e> ,n > 0. Then (4.19) becomes ne  tit,  /  a>i(a,iq — &i)*a,-  (>) <W)  K<7 - &i|  0,n > 0.  (4.22)  From Lemma 4.3, 3/(e ) G H i such that je  'ai(aiq- UfaA  (  -\ e  (4.23)  Since // is a positive measure, and by the assumption, ^  tt;(aig—&i)*a. |a,g—6;|  (e>*) / 0, V0 G  T , / is a nonzero function in H i . Therefore the support of / is supp(/) = supp(|/|) = T.  (4.24)  From Lemma 4.4, since du > 0 and d0 > 0, (4.23) becomes  E  cti(aiq-  bi)*ai  44  du = l/l <*0.  (4.25)  Chapter 4. Multiple Objective Hoo Control Problem  Therefore (4.26)  supp(/x) = supp(|/|) = T . But we have (j, <E P[S(q)] in (4.18), so T = S(5) = {0 € T : f(q) =  (4.27)  f (q)}. e  Hence conclusion (i) of Theorem 4.1 holds. In this case, the "all-pass" property from single-objective Hoo control theory extends to the multiobjective case. 
It remains to treat the possibility that (4.21) fails, i.e., that some 0 6 S(q) obeys  E  ~ai(aiq - 6i)*a,'  0.  o-iq — b{  i=i  (4.28)  Under this condition we prove conclusion (ii). Indeed, since each ai(eJ ) ^ 0, V0 e T , e  by hypothesis, and V0 G S(<f), |a,<7- 6,-|^e^ = \a q — bk\(e^ k  for alH,fc e m, from  (4.28),  g[ rf( -^)](^ o, a  ?  =  (4.29)  which immediately gives the desired result in (4.20) by choosing |2  (4.30) i2  i=i  When conclusion (i) of Theorem 4.1 holds, we can get the infinite-order property for the optimal solution summarized in Corollary 4.1. This corollary suggests that in some situations, the optimal controller may not be realizable, so that only approximate solutions can be pursued. This justifies the use of approximate numerical methods, such as the LMI-based optimization presented in the following section. To obtain Corollary 4.1, we first prove the following lemma. 45  Chapter 4. Multiple Objective Hoo Control Problem  Lemma 4.6: Let = —2  T(s)  ™ ^ — - (n > m > 0 , + A,-!*'*- + ... + D  (4.31)  1  Ds  n  1  n  0  be a proper real-rational transfer function. If \T(joj)\ = const., V w £ U c [0, oo), where the measure of the set U is not zero, then indeed |T(ju;)| = const., Vu; e [0, oo). Proof: Let h (w) = N (joj) .+ N ^(ju) m  N  m  m  and h (w) = D (jto)  + D _\(ju)  n  D  n  n  (4.32)  + ... + No  1  m  1  n  + ... + D  0  (4.33)  These can be rewritten as, respectively, 2m  h (u) =  (4.34)  Y N,iu\ h  N  i=0  2n  and h (u>) =  hoju , 3  D  (4.35)  j=0  where the real coefficients h^^ (i e 0 U 2m) and hpj (j G 0 U 2n) can be obtained from (4.32) and (4.33).  We denote c as a constant and positive real number.  Then, by  assumption, we have 2m  l  r  -HI  = TTTA  = Tn  = c, Vu; e U ,  (4.36)  i'=0  i.e.,  2n c  Y  2m  D,i  h  u3  ~ Y  i=0  ^,i  h  ujl  = 0, Vu> G U ,  (4.37)  i=0 .  2n  or y^c u  k  k  - 0, Vu; £ U ,  it=o 46  (4.38)  Chapter 4. 
Multiple Objective  Control Problem  where, cjfc(fc£0U2n) are the real coefficients, which can be expressed by c, h (ieOU Nii  2m), and h j  (j £ 0 U 2n).  D  Taking any 2n + 1 distinct points {u-i,W2, •••,W2n+i} C U , we have 2n  = ( U £ 2n + 1.  Y^c uf it=0 k  (4.39)  If we took c/t (fe G OU 2n) as unknown variables, solving (4.39) would yield c = 0,k £ 0 U 2 n . k  (4.40)  This implies that 2n  ^ c ^ it=o  f  = 0,Vcu£[0,oo),  c  (4.41)  i.e., 2n  2m  cY,h J  - £ fcjv.iw'" = 0, Vu; £ [0, oo),.  Dti  i=0  (4.42)  t'=0  or |T(ju>)| = c,Vu>£[0,oo). 2  (4.43)  Corollary 4.1: Assume that the multi-disc Hoo problem (m > 2) has a solution g £ A o . If max{|a,<f —  £ m} = 7 ,V0 £ T , and 3i E m such that the measure of the set  {6 £ T : 7 = |a,-9 — bi\(e? )} C T is not zero, then q is of infinite order. e  Proof: First notice that ai(i £ m) and bi(i £ m) are given real-rational proper transfer functions of finite order. Then the corollary can be proven by contradiction using Lemma 4.6.  47  Chapter 4. Multiple Objective H  m  Control Problem  From (ii) of Theorem 4.1, the explicit expression of the optimal performance value, denoted by 7 , can be obtained by solving the following equations in 7 and k\ = 7 , ( J £ m),  Wiq-  (i e m): (4.44)  i—l m  X)ft  = l,#>0,tem.  (4.46)  t'=l  This could be done by using symbolic math software, such as Maple [93]. A n example follows Example 4.1:  Under the assumption of (4.28), obtain the optimal performance value  for two-disc problem: ( P - ) : 7 = min max {\\a - b^}. 4  (4.47)  4  iq  Solution: To solve this problem, we use Maple command solve(eqns,vars), where eqns := {equations in (4.42 — 4.44)}, and vars : =  B , 7 } . Executing the com2  mand yielded the following results: .  |ai «2|'  M + 7  Ctl&2  «2|'  (4.48)  - a &i  M +  2  |«21  Combining (i) of Theorem 4.1 and (4.48), the results for two-disc problem are summarized as the following corollary. 48  Chapter 4. 
Multiple Objective  Control Problem  Corollary 4.2: Assume that all of a,g — bi = 0, i-= 1,2, have no solution in A o . If the two-disc HQO problem ( P  4 4  ) has a solution q G A o , then either  (i) max{|aiif— 6,-|(e '*),i € 2} = const. ,0 £ T , or JI  (ii) some 0 G S(if) obeys condition (4.28).  In this case, if a i ( e ) ^ 0 for all je  i G 772, V0 G T , then the optimal performance is given by ax 62 — G2&1  (4.49)  We can use this corollary to obtain the optimal performance value for N L S P as in (2.20), studied by Holohan and Safonov in [29, 30]. To do this, we rewrite N L S P here as follows:  (  p 4  ' ) 5  :  7  ^{\\ i Q\LA\W2(i  =  2  w p  + PQ)\\  Clearly, ( P - ) is a special form of two-disc problem ( P 4  WiP,  5  4 4  (4.50)  }.  OQ  ) in (4.47) with a\ =  h — 0, and a = W P, fe = — W 2 , where W\ and Wi are two known outer 2  2  2  functions. Using (4.49), we obtain the same results as in [29, 30]: 72 =  ww 1  2  |Wi|+|W | a  § 4.3 Numerical Solution via LMI Note that the solution to problem ( P ( P - ) : ' min 4  6  4 1  ) in (4.1) can be obtained by solving  7  subject to \\aiq — biW^ < 7 , i G m.  1  The Hoo-norm constraints in (4.51) can be transformed into LMIs by using the following version of the Bounded Real Lemma, which may be derived via a bilinear transformation of the generalized Positive Real Lemma in [94, 95]. 49  Chapter 4. Multiple Objective  Control Problem  Lemma 4.7 [24]: Consider a transfer function T(s) with a state-space realization T(s) = [A, B,C, D]. The following statements are equivalent: (1) The LQO—norm of T(s) satisfies the constraint \\T(s)\\  Lao  (2) There exists an symmetric matrix P = P  T  \A P T  + PA BP C T  (4.52)  < 7;  such that the following L M I holds:  PB  C ' - 7 / D < 0. 
D - 7 / T  T  (4.53)  To transform the Hoo-norm constraints in (4.51) into LMIs, we first approximate q(s) in (4.51) as a linear combination of known basis functions, qi(s), as in (2.22) N  where x = (xi, • • •, x^)  is the vector of new design parameters, and the basis functions,  qi(s), can be chosen as proper and stable transfer functions, for example, the all-pass functions, s—p s +p  i-1 ,p>Q.  (4.55)  With N chosen sufficiently large, the Hoo-norm performance in (4.51) can be approximated arbitrarily closely [23]. By using the parametrization of q(s) given in (4.54), for each i£ m . ai(s)q(s) - bi(s) ai(s) (s) ai(s)q (s) f j x i; qi  2  1  ai(s)q (s) . -his) l][Ai,Bi,Ci,Di] N  [x  T  A{, Bi, X^Ci, 50  Di  (4.56)  Chapter 4. Multiple Objective  where X  1], and [Ai,Bi,d,Di]  = [x  T  T  = d(sl  Control Problem  — Ai) Bi + Di is the state-space x  realization of the transfer function matrix ai(s)q (s) 2  (4.57) a,i(s)qN (s) . -bi(s) J +1  We can now apply Lemma 4.6 to obtain this section's main result. Theorem 4.2: If q(s) is given by the N-th order approximation in (4.54), then the solution to ( P ) in (4.51) can be obtained by solving the following LMI-based optimization 4 6  (4.58)  Clearly, ( P ' ) is a finite-dimensional convex problem and efficient numerical al4  7  gorithms exist for its solution [23, 24]. Here, to illustrate the solution procedure, we approximately solve a two-disc problem, previously called a robust control problem in Example 2.1 and Example 2.2, by using the LMI-based solver sdpsol [86]. Example 4.2: Find a stabilizing controller K such that 7 < 1, where 7 =  max{||r (^)|| ,||r (^)|| }, 1  00  2  00  where  T = W (;I + PK)- 1 1  T = W PK(I 2  (4.59)  l  2  +  PK)-\  The plant and the weighting functions are  =  s-12' 11sls + 1 (s+6) 3 s + 2 (s •+ 2s + 37)' 2  2  51  (4.60)  Chapter 4. Multiple Objective  Solution:  Control Problem  Notice that the plant here is not stable. 
Therefore the solution of this problem  proceeds by first obtaining a stable coprime factorization of P as in Theorem 2.1. We take —;M = ; s + 6' 5 + 6'  N =  (4.61)  X = -0.8; andF = -1.8. In terms of the Q-parametrization, we cast this problem as the following convex optimization problem: (P - ) : * = 4  8  7  min max{||a - b ^ , ig  qtltxloo  (4.62)  \\a q - b \\ } 2  2  00  with ai = -WiMN-  h =  -WiMX; (4.63)  a = -W NM; 2  2  b = —W NY. 2  2  In this example, we chose all-pass functions as the basis functions as in (4.55) with p — 1. The number of the design parameters was first chosen as N = 5, and then was increased one by one until we obtained a satisfactory solution. From (4.57), the corresponding state-space realization data, [Ai,P>i,Ci,Di],i This realization was high-order.  = 1,2, were generated.  Since sdpsol can only efficiently handle low-order  systems, a model-reduction procedure was used in this step. Then, by using Theorem 4.2, problem ( P ) was transformed into an LMI-based optimization problem in the form of 4 8  (4.58). Finally, the LMI-based solver sdpsol was used to solve the optimization problem. Key features of this procedure are tabulated in Table 4.1, which shows the number of terms used to parametrize q(s), N; the obtained performance value, 7 ^ ; the number of variables, ra ; the number of constraints, n var  £  cpu  c o n  ; and the cpu time (in Sparc 5 seconds),  , needed in the computation. For example, at N = 14, [Ai, Bi, d, Di], i = 1,2, were  both obtained as 15-th order systems, and therefore the LMI-based optimization problem in the form of (4.58) had 255 variables: P i = P f e R x 6 R  1 5  1 5 x 1 5  , P  2  = P  2  T  e R  1 5 x 1 5  ,  and 7 € R, and 578 constraints. With 206 seconds of Sparc 5 cpu time, sdpsol 52  Chapter 4. 
terminated after 21 iterations and yielded the following results:

x = [0.8102, −0.3325, −0.1207, 0.0596, 0.1769, 0.2224, 0.2062, 0.1504, 0.0808, 0.0197, −0.0194, −0.0332, −0.0274, −0.0128]^T, and γ14 = 0.9977.   (4.64)

The controller corresponding to this solution can be obtained according to (2.4). With this controller, the frequency responses of the two weighted objectives in (4.59) are shown in Fig. 4.1. The graph clearly shows that the designed controller meets the required specifications: γ14 = 0.9977 < 1. With an increase in the order of the approximated q(s), we can expect that max{|T1(jω)|, |T2(jω)|} approaches "flatness" and the two objective functions are shaped over mutually exclusive frequency ranges. This would be consistent with the all-pass property of the optimal solution described by (i) of Theorem 4.1 in the previous section.

  N    γN        n_var   n_con   t_cpu (s)
  5    1.3358     48      128      3.0215
  6    1.2019     63      162      4.8477
  7    1.1047     80      200      9.1406
  8    1.0381     99      242     14.1154
  9    1.0215    120      288     24.5263
 10    1.0204    143      338     40.1288
 11    1.0184    168      392     61.1081
 12    1.0100    195      450     94.3341
 13    1.0007    224      512    135.7785
 14    0.9977    255      578    206.1405
 15    0.9973    288      648    284.4093

Table 4.1 Example 4.2: Solution results

Figure 4.1. Example 4.2: Weighted objective functions |T1(jω)| (dotted line) and |T2(jω)| (solid line) when N = 14

§ 4.4 Concluding Remarks

Nonsmooth analysis has been shown to be a useful tool in the study of multiple objective H∞ control problems. Optimality conditions, and, in special cases, either all-pass properties or optimal performance values were obtained.
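As a pointwise sanity check of the optimal two-disc performance value: at a fixed frequency the data a_i, b_i reduce to scalars, and the value |a1 b2 − a2 b1|/(|a1| + |a2|) can be compared against a brute-force search over the free parameter. The scalar data below are illustrative assumptions chosen so that the optimal q is real.

```python
def two_disc_gamma(a1, b1, a2, b2):
    """Pointwise (single-frequency) two-disc value:
    gamma = |a1*b2 - a2*b1| / (|a1| + |a2|)."""
    return abs(a1 * b2 - a2 * b1) / (abs(a1) + abs(a2))

# min over q of max(|a1*q - b1|, |a2*q - b2|) by brute force; with the
# data below the optimum is attained at the real point q = 1, where both
# discs are "active" with equal radius 1.
a1, b1, a2, b2 = 1.0, 0.0, 1.0, 2.0
grid = [k / 1000.0 for k in range(-3000, 3001)]
best = min(max(abs(a1 * q - b1), abs(a2 * q - b2)) for q in grid)
gamma = two_disc_gamma(a1, b1, a2, b2)
```

At the optimum both terms of the max are equal, mirroring the equalizing character of the minimax solution.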
It was inferred from the all-pass property that the optimal controller in some situations is not of finite order. To get a realizable controller, only approximate solutions can be obtained. A numerical solution approach in terms of state-space LMIs has been proposed to obtain the approximate solutions: by re-parametrization of the free parameter q, the multiple objective H∞ control problem in the frequency domain is transformed into a finite-dimensional convex optimization problem. This problem can be solved by many efficient algorithms. A robust control design problem, cast as a two-disc problem, has been solved to illustrate the solution procedure and to show the effectiveness of the proposed technique.

Chapter 5 Numerical Solutions of Multiple Objective Control Problems

In this Chapter, convex and non-convex optimization-based solutions to the general multiple objective control system design problem are presented. The solution procedures are illustrated, and some computational issues arising in the design process are discussed in terms of design examples.

§ 5.1 Introduction

As discussed in Chapter 2, parametrizing all stabilizing linear controllers in terms of a stable transfer function Q allows many specifications to be formulated as convex constraints on the free design parameter Q. In most cases, Q is approximated by a linear combination of given stable basis functions, so that convex constraints on Q become convex constraints on a finite-dimensional vector of design parameters. The advantage of the convex optimization approach is that it always finds a solution if one exists. However, the parameter space is usually very large, which may produce two problems. First, the computational burden increases with the dimension of the unknown, and therefore an efficient optimization solver is demanded in the design. Second, the approach generally yields high-order controllers, which must then be judiciously reduced in order to be made feasible in practice.
In this Chapter, by solving two multiple objective control problems, we show that the cutting-plane based solver with a constraint dropping scheme is quite efficient in convex optimization based control system design, and we discuss some computational issues in the design process. Then we present an approximate numerical solution of the multiple objective control system design problem for SISO systems by using a nonlinear approximation of the free design parameter Q. We show through two numerical examples that low-order controllers can be guaranteed by this new solution procedure.

This chapter is organized as follows. In the following section, by solving a mixed H2/H∞ control problem and a two-objective H∞ control problem, we describe the convex optimization based solution procedure, demonstrate the effectiveness of the cutting-plane based solver, and discuss some computational issues arising in the design process. In Section 3, the solution via non-convex optimization, which has the advantage of directly producing low-order controllers, is presented, and the solution procedure and the computational issues are also illustrated and discussed by solving the same design examples. We conclude with some remarks in the last section.

§ 5.2 Solution via Convex Optimization

§ 5.2.1 Linear approximation of Q

As shown in Chapter 2, the multiple objective control system design problem can be formulated as a convex optimization problem in the form (P2.2) or (P2.3) with unknown Q ∈ RH∞. These two problems are infinite dimensional, and therefore an approximation of Q ∈ RH∞ is needed to obtain numerical solutions.
To preserve the convexity property, as proposed in [36, 26], Q can be parametrized as a linear combination of basis functions:

Q_N^L(X) = Σ_{i=1}^{N} X_i Q_i,   (5.1)

where the Q_i are chosen stable transfer functions and X_1, X_2, ..., X_N are N real-valued matrices of appropriate dimension, which are the new design parameters. With the approximation of Q by Q_N^L, the infinite-dimensional convex program (P2.3) becomes a finite-dimensional convex optimization program for the unknown X:

(P5.1):  min_X { φ(Q_N^L(X)) | ψ(Q_N^L(X)) ≤ 0 }.   (5.2)

One important property of a convex optimization program is that every local solution is actually a global solution, so there is no danger of getting "stuck" at a local minimum. Another important feature of the parametrization is that even for such straightforward choices as Q_i(s) = ((s − p)/(s + p))^{i−1}, any desired Q in RH∞ can be approximated arbitrarily well by some Q_N^L(X) provided N is sufficiently large [26, 23]. Complementing this advantage, however, is the problem that adequate approximations often require large dimensions N, with correspondingly heavy computation and high-order controllers. Until now, little work has been done on how to systematically approximate the free design parameter Q so as to reduce the size of the parameter space, and hence the computational complexity and the order of the designed controller.

§ 5.2.2 Convex optimization via sequential quadratic program methods

Many algorithms for convex optimization have been devised, and we could not hope to survey them in this thesis. For a good survey, please refer to [23, 24]. We first tried the sequential quadratic program (SQP) based solvers, such as constr and minimax, in the Optimization Toolbox for Use with Matlab [96]. These solvers are descent methods, which require the computation of a descent direction, or even the steepest descent direction, for the function at a point.
For smooth problems, these descent methods offer fast convergence, e.g., quadratic. For nondifferentiable optimization problems, which is often the case for controller design problems, these solvers often exhibit slow convergence near the solution due to truncation error in the gradient calculation by finite difference approximation.

§ 5.2.3 Convex optimization via cutting-plane algorithms

The cutting-plane method was chosen here simply due to its easy implementation. The cutting-plane method is an iterative algorithm. It is attractive since the subproblem to be solved at each iteration is a simple linear program that changes only slightly from one iteration to the next, and no line searches are required. It has simple stopping criteria that guarantee the optimum has been found to a known accuracy.

We have developed two numerical solvers, KCP and EMCP, for convex optimization problems in Matlab. KCP is based on Kelley's original cutting-plane algorithm [74]. One criticism of KCP is that the number of constraints in the linear programs grows by one with each iteration. EMCP is based on Elzinga and Moore's cutting-plane algorithm [75]. In contrast to KCP, it has rules for dropping old constraints in the optimization process, and therefore the linear programs solved at each iteration do not grow as rapidly as the algorithm proceeds. We will illustrate this later by an example. For details of these two cutting-plane algorithms, please refer to Appendix A.

§ 5.2.4 Design Examples

To illustrate the solution procedure using the SQP-based solvers and the cutting-plane-based solvers, we solve two design examples: one is a mixed H2/H∞ control problem [38], and the other is a two-objective H∞ control problem, taken from [40].

Example 5.1: As shown in Fig.
5.1, consider a unstable nominal plant  with additive perturbation bounded by (5.4)  \AP(ju)\<—±—,\/u, W (ju) 2  where W = 20 is constant. The design objectives are to guarantee robust stability for 2  perturbed plants and to minimize the H - n o r m of weighted sensitivity function 2  <f>(K)= Wiil  +  PK)'  1  (5.5)  with weighting function W {s) 1  = ^—: 5 + 1 59  (5.6)  Chapter 5. Numerical Solutions of Multiple Objective Control Problems  r  +  e  u  K  +  P+AP  yp  Figure 5.1. Example 5.1: A feedback system with additive plant perturbation Solution: From Hoo control theory [59, 7], a sufficient condition for robust stability is W K(I  +  2  PK)'  < 1.  1  (5.7)  Therefore, the controller design problem can be posed as a mixed H /Hoo optimization 2  problem (P  5 2  ):  min  U{K)\<p{K) < 0},  (5.8)  stabilizing K  where cp(K) =  W K(I +  PK)-  (5.9)  - 1.  1  2  Since the nominal plant is unstable, the solution to ( P ) proceeds by first obtaining 5 2  a stable coprime factorization of the nominal plant P = NM~  X  take  s -s-2 s + 35 + 2' 2  s-1 N = M s + 3s + 2  2  2  s '+ 2  X =  s  2  - 26 , Y and = 3s + 2 ' ...  -485 -  75 +  as in Theorem 2.1. We  5  2  + 35  48  (5.10)  + 2'  Second, by using the Q-parametrization, we have W^I W^K(I  + PK)-  and  = T +T Q,  1  + PK)-  hl  1  = 60  1}2  (5.11) T , +T Q 2 1  2a  i  Chapter 5. Numerical Solutions of Multiple Objective Control Problems  where Ti i = WiMX,  Ti =  -WiMN,  2  (5.12) r ,i = w M y , r 2 = - F F M M . 2  2  2  2l  Therefore we can express ( P - ) as an infinite-dimensional convex optimization problem 5  2  (P - ) : 5  min  3  { « ) < 0 } ,  (5.13)  where  W) =  l|ri,i  ^(Q) =  + r g|| , 1)2  2  -+-  N  Then, by approximating Q as YjXiQi  m  (5.14)  1-  the form of (5.1), ( P ) is approximated 5 3  i=i  by the finite-dimensional convex optimization program (P - ) : 5  4  7 i V  = min {<f>(x)\<p{x) < 0}.  
(5.15)  Here we chose basis functions in (5.1) as all-pass functions: •/  _  Q^( )={J^)  \  N-i  S  (5-16)  ,i£K-  First we solved ( P ) by using the SQPbased solver constr in Matlab Optimization 5 4  Toolbox, with both objective tolerance and constraint tolerance as 1 0 . To use constr - 6  and other developed solvers in this thesis, we make one naive approximation. We simply approximate semi-infinite constraints and objectives by discretization, e.g., replacing an Hoo-norm constraint by a very large number of single frequency constraints log-spaced in a specified frequency range. Then we solved ( P ) by using the cutting-plane based solvers K C P and E M C P , 5 4  with the stopping criteria for K C P as e bj = 1 0 0  criterion for E M C P as e = 1 0 . . - 6  - 6  and e  To solve ( P ) 5 4  con  = 1 0 , and the stopping - 6  using either K C P or E M C P ,  we need to evaluate the objective function, <f>(x), the constraint function, y>(x), and to compute a subgradient of each. For a given x, the evaluations of <f>(x) and ip(x) can 61  Chapter 5. Numerical Solutions of Multiple Objective Control Problems  be easily done by computing the H ^norm of T 2  2)2  + T Qg (x) 1)2  x  and the H oo—norm of  respectively. According to [23], one subgradient of tp(x) is given by  T ,i + T Qf (x), 2  M  vx  <f>sub(x) = (<t>lub( )'  >  x  W h e r e  ' oo —oo Now denote Q the frequency at which the Hoo-norm of T i + T Q^ (x) 2)  2t2  vx  is  achieved. Then, according to [23], one subgradient of <p(x) is given by y> ub{x) = S  T  (vlub( )'---' Psub( )) X  i  .  X  W h e r e  Ke{(T  +  .  \ \ T ,  2il  ^  {  X  )  =  T Q^ (x)yT QA 2t2  i  +  T  M  vx  x  )  2t2  l  ^  L  ^  A l l computations were done in a Sparc 5 Sun Workstation. We solved ( P  5 4  ) for  three different choices of basis functions in the form of (5.16) with p = 0.5,1, and 5. We took iV = 20. 
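The naive discretization just described, replacing an H∞-norm constraint by many single-frequency constraints on a log-spaced grid together with the all-pass basis of (5.16), can be sketched in Python (a sketch only: the thesis used Matlab, and the transfer functions T21 and T22 below are placeholders, not the example's actual closed-loop factors):

```python
import numpy as np

def allpass_basis(p, N, s):
    """Basis Q_i(s) = ((p - s)/(p + s))**(N - i), i = 1..N, as in (5.16)."""
    return np.array([((p - s) / (p + s)) ** (N - i) for i in range(1, N + 1)])

def hinf_constraints(x, T21, T22, p, omega):
    """Replace ||T21 + T22*Q(x)||_inf <= 1 by the single-frequency constraints
    |T21(jw_k) + T22(jw_k) Q(x, jw_k)| - 1 <= 0 on a log-spaced grid."""
    vals = []
    for w in omega:
        s = 1j * w
        Q = allpass_basis(p, len(x), s) @ x   # Q(x, jw) = sum_i x_i Q_i(jw)
        vals.append(abs(T21(s) + T22(s) * Q) - 1.0)
    return np.array(vals)

# log-spaced frequency grid, as used with the solvers in the text
omega = np.logspace(-3, 3, 400)
# placeholder closed-loop data (NOT the thesis's T_{2,1}, T_{2,2})
T21 = lambda s: 0.5 / (s + 1)
T22 = lambda s: 1.0 / (s + 2)
g = hinf_constraints(np.zeros(20), T21, T22, p=5.0, omega=omega)
```

Feeding these sampled constraint values to a solver such as constr or a cutting-plane method turns the semi-infinite constraint ψ(x) ≤ 0 into finitely many smooth inequalities; the grid must be fine enough that no intersample peak of the frequency response is missed.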
The obtained objective, 720, and the cpu time needed in the computation, t  , are summarized in Table 5.1 for three different solvers.  cpu  p  tcpu (s)  720  constr  KCP  EMCP  constr  KCP  EMCP  0.5  2.6015  2.6009  2.6010  4530.65  2768.81  1559.98  1  2.5998  2.5994  2.5994  3378.89  2305.45  1238.15  5  2.5993  2.5990  2.5990  1265.45  687.28  302.32  Table 5.1 Example 5.1: Results via constr, K C P and E M C P From Table 5.1, it can be observed that 1) the three different basis functions produce different convergence speeds; and among them, the all-pass function with p = 5 yields the best result in terms of objective value; and 2) E M C P is the most efficient solver among minimax, K C P and E M C P , in terms of the computation time. Taking the solution 62  Chapter 5. Numerical Solutions of Multiple Objective Control Problems  corresponding to p = 5 from E M C P , we obtained the approximated Q in ( P - ) as 5  Qlv (x). X  4  Since further increasing the number of parameters does not reduce the objective  value significantly (< 1 0 ) , we can consider this solution as a nearly optimal one. With - 6  Qlv (x), X  we obtained a 25th-order stabilizing controller according to (2.4). The frequency  response of the resulting weighted robust stability function is depicted in Fig. 5.2, which shows that the obtained controller satisfies the robust stability constraint, given by (5.7).  0.2r  10  10"  10  10  10  10'  Frequency  10  Figure 5.2. Example 5.1: Robust stability constraint Example 5.2: Consider an unstable plant with transfer function P(s) =  + 10 s -0.5s + 1' -3  2  It is required to design a controller K(s) which will stabilize the plant in a negative unity feedback configuration (Fig. 5.3), such that the sensitivity function T  er  and the inverse additive stability margin T \T (M\ er  = K(I + PK)  -1  UT  < \h(ju)\, Vu,, 63  = (I +  PK)  -1  satisfy (5.20)  Chapter 5. 
Numerical Solutions of Multiple Objective Control Problems  and (5.21)  < \l {jw)\, Vw,  \T (ju)\ ur  2  where the bounding functions li(s) (i = 1,2) are given by h(s) = 2  'a + 0.01\ 5 + 4.5  (5.22)  and 5  h(s) = 10  r  +  +2  (5.23)  5+10  e  K>  +  Figure 5.3. Example 5.2: A unity feedback control system Solution: If we define weight functions W\ and W  2  as W\ = i j  and W  - 1  2  =  l  x 2  respectively, then the closed-loop system can be represented by Fig. 2.2, as shown in Chapter 2, with 2 i = Wi(I + PK)~  l  and T  2  = W K(I 2  + PK)' . 1  The design  specifications given in (5.20) and (5.21) will be met if and only if  max  {11^(^)11^,11^(^)11^} < 1.  (5.24)  stabilizing K  Therefore the controller design problem can be cast as a two-objective Hoo optimization problem: (P - ): 5  5  7  =  .min  max{WT^K)^  \\T (K)\U. 2  (5.25)  stabilizing K  If the solution 7 < 1, then the minimizing controller K meets the design specifications. 64  Chapter 5. Numerical Solutions of Multiple Objective Control Problems  Since the plant is unstable, the solution of problem ( P - ) proceeds by first obtaining 5  a stable right coprime factorization of P = NM~  as in Theorem 2.1. We take  l  -* +  1 0  N  =  M  =  s 2  -°-  5  5 s  +  l  s + 3s + 2' s + 3s + 2 ' s + 6.5s + 16.5 , - 1 . 2 5 3 + 1.25 X — —r-r , and Y = —T . s + 3s + 2 ' 3 + 3s + 2 2  2  2  (  5  2  6  )  v  2  2  In terms of the Q-parametrization of stabilizing controllers, the set of achievable closed-loop transfer functions Ti (i = 1,2) can be parametrized as Q G RHoo},  {Ti,i + Ti Q, j2  (5.27)  where T^i and 2^2 (i = 1,2) are the following stable transfer functions: Ti i = I f j M I , T i = - W i M / V , and 2  (5.28) T i = w My, T 2l  2  Thus the original problem ( P  2)2  =  -W MM. 
2  ) is equivalent to the following infinite-dimensional  5 5  convex optimization program:  N-  By parametrizing Q using ^ ^ ( x ) = ^  in (5.1), we approximate ( P  5 6  ) by  a finite-dimensional convex optimization program (P - ) : 5  7  l  N  = m i l l i m a x { | | r , - i + r - g^ (a:)| i  2  e  }.  (5.30)  Here we chose the basis functions in (5.1) as the following all-pass functions:  >-  i  {i^y  Qi{s)=  ieK  (53i)  Both the S Q P based solver minimax in Matlab and the developed cutting-plane based solvers K C P and E M C P were used to solve ( P ) , . To use K C P and E M C P , we need to 57  65  Chapter 5. Numerical Solutions of Multiple Objective Control Problems  evaluate the objective function, <f>(x) = max {||Ti(a;)HQQ, H ^ M X ) ^ } , ^  to compute one  of its subgradients. For a given x, <j>(x) can be evaluated by computing two Hoo-norms:  IITitx)!!^ and || T («) 11 oo-Denote fc the "active" index for which ^(x) = WTkW^k 2  = 1,2,  and Q the frequency at which Hoo-norm of 7* is achieved. Then, according to [23], one subgradient of <j)(x) is given by (f> b(x) = (<^ (x), • • •, <f>g (x)) , where u6  su  uh  cj>i = -^(Tiuo^kiU^QiU^ym^i  e N.  ub  (5.32)  A l l computations were done in a Sun Sparc 5 workstation, we solved ( P ) 5 7  for  three different choices of basis functions in the form of (5.31) with p = 0.5,1,5. We took iV = 30. The stopping criterion for minimax was taken as 1 0 , and the stopping criteria - 6  for KCP as e  o b j  = 10~ and e n = 10~ , and for EMCP as e = 1 0 . The obtained 6  CO  6  - 6  objective, 730, and the cpu time needed in the computation, i  , are shown in Table 5.2.  
tcpu (s)  730  p  c p u  minimax  KCP  EMCP  minimax  KCP  EMCP  0.5  1.0269  1.0257  1.0217  819.03  1266.72  460.95  1  0.9715  0.9713  0.9711  767.77  988.85  352.43  5  0.9741  0.9735  0.9724  1139.43  390.40  | 1456.45  Table 5.2 Example 5.2: Results via minimax, K C P and E M C P From Table 5.2, we can see again that different basis functions produce different results. Among these three choices of the basis functions, p = 1 yields the best solution to ( P - ) in terms of the objective value. Again, EMCP appears to be the most efficient 5  6  solver among minimax, K C P and EMCP, in terms of the computation time. Taking the solution from EMCP, we obtained the approximated Q in ( P ) 5 6  as Ql® (x,s). x  With Ql® (x, s) from EMCP, a 36th-order stabilizing controller K(x,s)  was obtained  according to (2.4). The frequency responses of objective functions TI(K^)  and  x  66  TAK)  .  Chapter 5. Numerical Solutions of Multiple Objective Control Problems  are shown in Fig. 5.4, the unweighted sensitivity function T (^K^ together with its er  bounding function l\ and the robustness indicator T  ur  (^K^j together with its bounding  function I2 are depicted respectively in Fig. 5.5 and Fig. 5.6. The graphs show that the designed controller meets the required specifications given in (5.20) and (5.21). Therefore Qly (x,s) X  gives a reasonable solution to this example.  Frequency  Figure 5.4. Example 5.2: Weighted objective functions \T (ju;) I (dotted line) and |T (jw)|(solid line) associated with Q X  3  67  Chapter 5. Numerical Solutions of Multiple Objective Control Problems  10'  10- ' 10" 3  3  10"  2  10"'  " . i — 10° Frequency  i  — 10  — 10  1  i 10  2  Figure 5.5. Example 5.2: Sensitivity function T (solid er  3  line) and  its bounding function l\ (dotted line) associated with ( J ° 3  x  10'  Frequency  Figure 5.6. Example 5.2: Robustness function T (solid UT  and its bounding function associated with Q l h z  c x  68  line)  (dotted line)  Chapter 5. 
From Theorem 4.1, however, the above solution is not an optimal one, since max{|T1(jω)|, |T2(jω)|} in Fig. 5.4 is not flat. To get a better solution, we should increase the number of parameters in Q^N_cvx until there is no significant improvement of the solution and max{|T1(jω)|, |T2(jω)|} is nearly flat. For example, we solved (P5.6) using EMCP with p = 1 and N = 50 in (5.31), and ε = 10^-6. The objective value for the approximation to this problem is 0.9666. Increasing N in Q^N_cvx does not significantly reduce the objective value (< 10^-6), and the resulting max{|T1(jω)|, |T2(jω)|} is almost flat, as shown in Fig. 5.7. This gives some confidence that the solution is nearly optimal. The controller K obtained from Q^50_cvx is 56th-order, which has a 51st-order minimal realization.

Figure 5.7. Example 5.2: Weighted objective functions |T1(jω)| (dotted line) and |T2(jω)| (solid line) associated with Q^50_cvx

§ 5.2.5 Computational aspects

Solving the above two design examples clearly demonstrates that EMCP is the most efficient solver among constr, minimax, KCP and EMCP. The slow convergence to the solution in the SQP-based solvers is due to the truncation error in the gradient calculation by finite difference approximation. EMCP is always faster than KCP since EMCP drops old constraints, so that the linear programs solved at each iteration do not grow rapidly in size as the algorithm proceeds. In contrast, KCP does not drop old constraints and therefore suffers from a size problem. For example, Fig. 5.8 shows the number of constraints in the linear programs versus the number of elapsed iterations when KCP and EMCP were used to solve Example 5.2 with p = 1 and N = 30. As we can see clearly from Fig.
5.8, the size of the linear program in KCP grows linearly with the number of iterations, while the size of the linear program in EMCP turns out to be almost constant after 20 iterations. We can expect that as the size of the optimization problem increases, the advantage of EMCP over KCP will become more apparent.

Figure 5.8. Example 5.2: Number of constraints in KCP and EMCP

Both control design examples show that the convergence of the optimization process depends on the choice of the basis functions in the linear approximation of Q. Therefore a good choice of basis functions should reduce the computational complexity and perhaps the order of the designed controller. Unfortunately, no systematic methods currently exist for choosing good basis functions. The two examples also show that taking all-pass functions as the basis functions usually generates very high-order controllers, so a model reduction procedure is needed in order to implement an obtained controller. It should be remarked here that the LMI-optimization based method proposed in Chapter 4 was also tried on the two-disc H∞ control problem in Example 5.2. Unfortunately, due to the large size of the formulated LMIs, the LMI-based solver sdpsol failed to produce any results.

§ 5.3 Solution via Non-convex Optimization

§ 5.3.1 Nonlinear approximation of Q

In this section, we work only on SISO systems. As shown in the last section, in convex optimization-based controller design, except for some special cases, we do not know how to choose good basis functions in the linear approximation of Q, and usually very high-order controllers are generated.
To circumvent this, we propose to sacrifice convexity and parametrize Q(s) ∈ RH∞ as a rational and proper transfer function of the form

Q^N_ncvx(x, y, s) = ( Σ_{i=1}^{2N+1} x_i s^{2N+1-i} ) / ( Π_{j=1}^{N} (s² + y_{j1} s + y_{j2}) ).   (5.33)

Here, N (≥ 0) is chosen to give a desired degree of accuracy for the minimal value. For N = 0, we define Q^0_ncvx(x) = x_1, a function of only one parameter. For N ≥ 1, x = (x_1, ..., x_{2N+1})ᵀ ∈ R^{2N+1} and y = (y_11, y_12, ..., y_N1, y_N2)ᵀ ∈ R^{2N} are two design parameter vectors.

Comparing the parametrization in (5.33) with that in (5.1) reveals that Q^N_cvx is one special form of Q^N_ncvx. Indeed, if y in Q^N_ncvx is specified, i.e., the poles of Q^N_ncvx are fixed in the left half plane, then the form of Q^N_ncvx reduces to the form of Q^N_cvx. Therefore any transfer function in RH∞ can also be approximated arbitrarily well by the form of (5.33), and if the orders of Q^N_ncvx and Q^N_cvx are equal, then Q^N_ncvx should approximate Q at least as well as Q^N_cvx. To see this, note that we can always choose the solution of Q^N_cvx in the convex program as the initial guess of Q^N_ncvx in the non-convex program by an appropriate mapping, and then make further improvements by adjusting the parameters of Q^N_ncvx in the non-convex program. More importantly, the form of (5.33) allows the direct placement of 2N poles (some real, some in complex-conjugate pairs) at arbitrary points in the left half plane, in contrast to the fixed poles of Q^N_cvx in convex optimization. This is the key to limiting controller complexity: our method usually produces controllers of lower order than those produced by other methods, especially the convex optimization method.
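As a concrete reading of (5.33) (an assumption about the garbled formula: numerator coefficients x over a denominator built from N freely placed quadratic factors s² + y_{j1} s + y_{j2}), the parametrization can be evaluated pointwise as follows. This is an illustrative Python sketch, not thesis code:

```python
import numpy as np

def q_ncvx(x, y, s):
    """Evaluate Q_ncvx(x, y, s) from (5.33) at a complex frequency s.
    x has 2N+1 entries (numerator coefficients); y has 2N entries read as
    N quadratic factors s^2 + y_{j1} s + y_{j2} (the freely placed poles).
    For N = 0, Q_ncvx(x) = x_1."""
    N = len(y) // 2
    num = sum(xi * s ** (2 * N + 1 - i) for i, xi in enumerate(x, start=1))
    den = np.prod([s**2 + y[2*j] * s + y[2*j + 1] for j in range(N)]) if N else 1.0
    return num / den

# N = 1: Q = (x1 s^2 + x2 s + x3) / (s^2 + y11 s + y12)
x = [1.0, 2.0, 3.0]
y = [0.5, 1.0]          # one quadratic factor with left-half-plane roots
val = q_ncvx(x, y, 1j)  # evaluate at s = j*1
```

For fixed y the expression is linear in x, which is exactly the observation that the starting-guess-updating Scheme 1 below exploits.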
§ 5.3.2 Computational Aspects

With the transfer function Q parametrized as Q^N_ncvx(x, y, s) in (5.33), the infinite-dimensional convex program (P2.3) is approximated by a finite-dimensional non-convex optimization program:

(P5.9):  γ_ncvx = min_{x ∈ R^{2N+1}, y ∈ R^{2N}} { φ(Q^N_ncvx(x, y)) | ψ(Q^N_ncvx(x, y)) ≤ 0 }.   (5.34)

There are some well-developed algorithms specifically for solving such programs, e.g., [35, 97], which have been shown to be quite successful. In our computations, we use the SQP-based solvers, such as constr and minimax, in the Optimization Toolbox for Use with Matlab [96]. To use these solvers, we make one naive approximation: we simply approximate semi-infinite constraints and objectives by discretization, e.g., replacing an H∞-norm constraint by a very large number of single-frequency constraints log-spaced in a specified frequency range. No doubt great improvements in performance would result from the use of sophisticated methods for semi-infinite optimization programs such as those described in [35, 97]. Even if our method is not guaranteed to find the global minimum of the objective function, computational examples show that an acceptable suboptimal solution can always be obtained with a little common sense.

Since most non-convex optimization problems benefit from good starting guesses at the solution, we propose here three schemes for updating the starting guesses to improve the execution efficiency.

Scheme 1: Note that we can write

Q^N_ncvx(x, y) = N^N_Q(x) / D^N_Q(y),   (5.35)

where N^N_Q(x) and D^N_Q(y) are the numerator and denominator of Q^N_ncvx(x, y). Clearly N^N_Q(x) is convex in x. Based on this observation, we can use a convex optimization-based solver, such as EMCP, to update the starting points in the optimization process. For a fixed N, the steps to obtain a better starting guess for solving the non-convex problem (P5.9) are the following:

1.
Solve (P5.9) by using an SQP-based solver, and obtain a solution (x*_ncvx, y*_ncvx).

2. Fix D^N_Q(y) by setting y = y*_ncvx in (P5.9), and hence form a convex program

(P5.10):  γ_cvx = min_x { φ(Q^N_ncvx(x, y*_ncvx)) | ψ(Q^N_ncvx(x, y*_ncvx)) ≤ 0 }.   (5.36)

3. Solve (P5.10) by using the convex program solver EMCP, and obtain a solution x*_cvx.

4. Take (x*_cvx, y*_ncvx) as a new starting guess for problem (P5.9).

This solution procedure for (P5.9), using the combination of an SQP-based solver and a convex optimization-based solver, can be summarized simply as follows:

define (P5.9) in (5.34) in terms of Q^N_ncvx = N^N_Q(x)/D^N_Q(y) in (5.33) for some N;
ε ← stopping tolerance;
k ← 0;
(x_0, y_0) ← any initial guess for (P5.9);
repeat {
    obtain a solution (x*_ncvx,k, y*_ncvx,k) to (P5.9) and γ^k_ncvx, using an SQP-based solver;
    define (P5.10) in (5.36) by setting y = y*_ncvx,k in (P5.9);
    obtain a solution x*_cvx,k to (P5.10) and γ^k_cvx, using a cutting-plane based solver;
    take (x*_cvx,k, y*_ncvx,k) as a new initial guess for (P5.9);
    k ← k + 1;
} until γ^k_ncvx − γ^k_cvx < ε

In this solution procedure, since (P5.10) is convex, we have γ^{k+1}_ncvx ≤ γ^k_cvx ≤ γ^k_ncvx. The two sequences {γ^k_ncvx} and {γ^k_cvx} are thus monotonically nonincreasing. Clearly, both sequences are bounded below (e.g., by zero) and hence they converge to the same value.

Scheme 2: Solutions of lower order problems can generally be used as starting points for higher order problems by using an appropriate mapping. For example, if we denote a solution to (P5.9) when N = 1 as Q^1_ncvx, then we can take

Q^2_ncvx(s) = ((s² + as + b)/(s² + as + b)) Q^1_ncvx(s)   (5.37)

as an initial guess for (P5.9) when N = 2, where a and b are some positive real numbers.
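The alternation in Scheme 1 (a joint non-convex step followed by a convex refinement in x with y frozen) can be illustrated on a toy problem. This Python sketch uses scipy's SLSQP in place of both the Matlab SQP solver and the cutting-plane solver EMCP, and the objective phi is a made-up function that is convex in x for fixed y, not the thesis's closed-loop objective:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for (P5.9): convex in x for fixed y, but not jointly convex.
def phi(x, y):
    return (x[0] - np.sin(y[0])) ** 2 + 0.1 * y[0] ** 2

def scheme1(x0, y0, eps=1e-8, max_iter=50):
    x, y = np.array(x0, float), np.array(y0, float)
    for _ in range(max_iter):
        # joint step over (x, y): the role played by constr/minimax in the text
        z = minimize(lambda z: phi(z[:1], z[1:]), np.r_[x, y], method='SLSQP').x
        x, y = z[:1], z[1:]
        g_ncvx = phi(x, y)
        # convex step with y fixed, standing in for the cutting-plane solver
        x = minimize(lambda x: phi(x, y), x, method='SLSQP').x
        g_cvx = phi(x, y)          # g_cvx <= g_ncvx, so the iterates never worsen
        if g_ncvx - g_cvx < eps:   # stopping test from the pseudocode above
            break
    return x, y, g_cvx

x, y, g = scheme1([2.0], [1.0])
```

Each convex step can only improve on the preceding joint step, so the two objective sequences decrease monotonically and their gap serves as the stopping test, as in the pseudocode above.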
Scheme 3: The solutions for higher order problems, after using model reduction techniques, may also be used as starting guesses for lower order problems. For example, 74  Chapter 5. Numerical Solutions of Multiple Objective Control Problems  if we denote a solution to ( P - ) when iV = 2 as Ql,cvx, using the balance-truncation 5  9  model reduction method such as balmr in Matlab, we can take its second-order model as an initial guess for ( P  5 9  ) when N — 1.  § 5.3.3 Numerical examples Initial experimentation with the proposed solution of the multiple objective control system design problem for SISO systems is certainly encouraging.  Here we solve  two control design problems given in the last section to illustrate the presented design procedure and to demonstrate its effectiveness. Example 5.4: Solve the mixed H / H 2  ( ' ) P5  3  :  control problem:  [ X ;  o ^ S {11^1,1+ Ti,2g|| |||r +r , g|| <l}, 2  2  2)1  2  (5.38)  oo  where the data T^i, Ti ,i = 1,2, are the same as those in Example 5.1. )2  Solution:  By approximating Q as Qn  CVX  in the form of (5.33), ( P  5 3  ) is approximated  as a finite-dimensional non-convex optimization program (P - ): 5  n  lN  = mm{\\T iV  x  | T  + T Ql (x,y)  n  12  vx  y II  +T Q  - 1 < o}.  N  2 1  22  ncvx  2  oo  (5.39)  )  Then using the SQP based solver constr in Matlab Optimization Toolbox and the convex optimization-based solver E M C P developed in the previous section, with Scheme 1 and Scheme 2 for updating initial guesses, we solved ( P  5 1 1  ) for N = 0,1, and 2, in a Sparc  5 Sun Workstation. The results are summarized in Table 5.3. Now we show that using the starting-guess-updating Scheme 3, we can obtain a better lower-order approximation of Q(s). We did model reduction for Q  2 ncvx  in Table  5.3, using Matlab command balmr, and obtained its second-order model as following : n  2  n  a  — r s * 0  0  0  0  2  ^  + +  2.5731s + 6.5601 2.30H 7.10253 +  75  ( 5  -  4 0 )  Chapter 5. 
Numerical Solutions of Multiple Objective Control Problems  We took this as a new guess for Q  (P  ncvx  solution for ( P  5 1 1  ) . Invoking Scheme 1 again yielded a new  ) :  Qi LJ  20 0 0 0 0 - ZU.WW  32 + g 2 +  ncvx  With this Q  5 1 1  2  -  5 4 7 8 3  + +  6 g  -  (5 41) (5.41)  .  4 0 4 1 9 5 1 3  we obtained a 4th-order stabilizing controller  ncvx  (s + l)(s + 0.7584)(s + 2.18945 + 5.1094) 2  M  K  =  2 0  -°  0 0 0  ( , I 0.7724)(s - 13.6993)(^ + 2.2025, + 4.9762)  ( 5  '  4 2 )  and the corresponding minimal objective value 71 = 2.5994. Comparing with 71 in Table 5.3, clearly 71 < 71.  N  0  0  20.000  2.6771  7.97  1  20.0000(s +0.8453s+0.228l)  2.6340  173.77  2.5993  592.78  tcpu (s)  IN  N  2  s +0.9282s+0.2493 2  2  20.0000(s +0.0980s+0.1017)(s +2.5731.s+6.560l) 2  2  (s +0.0992s+0.1021)(s +2.3014s-|-7.1025) 2  2  Table 5.3 Example 5.4: Solution properties  Compared with the results obtained from convex optimization given in Example 5.1, here the order of the obtained controller is much lower, while there is not much compromise in the objective value. This is shown in Table 5.4, where n & denotes the 0T  order of the obtained controller, and 7 denotes the corresponding objective value. 76  Chapter 5. Numerical Solutions of Multiple Objective Control Problems  Method  7  ^ord  Convex optimization  2.5990  25  non-convex optimization  2.5994  4  Table 5.4 Example 5.4: Results via convex and non-convex optimization This problem was first solved in [38] by the U-parametrization method. This method also solves a finite-dimensional non-convex program, aiming at getting a low-order controller. The solution given in [38] corresponds to our solution for N = 0. Both solutions yielded second-order controllers, but the U-parametrization method resulted larger objective value, 2.8616, which is 6.90% higher than that in this solution, 2.6771. Example 5.5:  Solve the two-objective H^, control problem (^ )  = 7 = . 
imn max  3  1  + T Q|| }, h2  (5.43)  oo  where the data T^\, T,, , i = 1,2, are the same as those in Example 5.2. 2  Solution:  By approximating Q using the parametric expression Qncvx in the form of  (5.33), we reduce ( P ) to a finite-dimensional non-convex optimization program 5 3  (P - ) : 5  12  1  N  = minmax iV  x  tt£  {||r -!( ) + T t  S  (s)Qf! (x,y,s)\\  l>2  }.  cvx  Ml  I  loo  (5.44)  J  By using the SQP-based solver minimax in Matlab Optimization Toolbox and the convex optimization-based solver E M C P , with starting-guess-updating Scheme 1 and Scheme 2, we solved ( P  5 1 2  ) on a Sparc 5 Sun Workstation. We started from N = 1 and increased  N to 3, at which we obtained a satisfactory solution, which is a 6th-order model. The results are shown in Table 5.5. From Table 5.5, at N = 3,  = 0.9701 < 1 meets  the required specifications. To get a lower-order solution than Q (s), ncvx  we first reduced it into a 4th-order  model, by using starting-guess-updating scheme 3, and then took the result as a new ,77  Chapter 5. Numerical Solutions of Multiple Objective Control Problems  initial guess for Q\ . cvx  Invoking Scheme 1 again, with minimax solver and E M C P , we  also obtained a satisfactory solution, which is a 4th-order approximation of Q(s) by 0 + 5.93085 + 10.1786) (s + 0.66575 + 1.1339) = 9.7013 (.5 + 14.4398)(5 + 4.7052)(5 + 0.50065 + 0.9999)' 2  QlcvM  2  2  (5.45)  With Qn , a 6th-ofder controller was obtained, according to (2.4), as follows: CVX  9.7028(5 + 1.92995 + 0.9444) (5 + 0.32135 + 0.7136) (s + 3.97425 + 4.1408) 2  K(s)  2  2  (5 + 29.9390)(5 + 0.0100)(5 + 1.88785 + 0.9148)(5 + 4.01035 + 4.2807) (5.46) 2  2  and the corresponding minimal objective value is 72 = 0.9707 < 72 = 1.0030.  
Table 5.5 Example 5.5: Solution properties

N = 1: Q^1_ncvx = 11.7556(s + 2.3575)(s + 0.0004) / [(s + 16.8085)(s + 0.0004)], γ_1 = 1.1755, t_cpu = 126.30 s
N = 2: Q^2_ncvx = 10.0306 s(s + 2.2290)(s² + 0.8165s + 1.5746) / [s(s + 16.5203)(s² + 0.6564s + 1.2929)], γ_2 = 1.0030, t_cpu = 141.48 s
N = 3: Q^3_ncvx = 9.7013(s + 0.09356)(s + 0.00019)(s² + 5.9308s + 10.1790)(s² + 0.6657s + 1.1339) / [(s + 0.09359)(s + 0.00018)(s + 14.4395)(s + 4.7054)(s² + 0.5005s + 0.9999)], γ_3 = 0.9701, t_cpu = 816.35 s

With this controller, the frequency responses of the objective functions T1 and T2 are shown in Fig. 5.9. The figure shows that the designed controller meets the required specifications given in (5.20) and (5.21), and max{|T1(jω)|, |T2(jω)|} is almost "flat". Therefore even the 4th-order Q_ncvx(s) gives a very reasonable solution to this example.

Figure 5.9. Example 5.5: Weighted objective functions |T1(jω)| (dotted line) and |T2(jω)| (solid line) using the parameter Q_ncvx from (5.45)

The controller design problem in this example was first solved in [40] by a nested iterative H∞ optimization procedure. This method is computationally demanding, and the order of the desired controller can increase very rapidly with the number of iterations. Therefore, at each step, model reduction techniques had to be used. The best result reported in [40] is a seventh-order controller, whose corresponding objective value is 0.9729. Our method here produces better results even when N = 2, in terms of both objective value and controller order [98]. In contrast to the convex optimization approach, the key feature of our proposed method is that it directly yields lower-order controllers without significantly compromising the objective value. This can be seen from Table 5.6, in which γ denotes the objective value and n_ord denotes the order of the obtained controller.
Method: Convex optimization using 30 parameters, γ = 0.9711, n_ord = 36
Method: Convex optimization using 50 parameters, γ = 0.9666, n_ord = 51
Method: The iterative H∞ method [47], γ = 0.9729, n_ord = 7
Method: Non-convex optimization, γ = 0.9707, n_ord = 6

Table 5.6 Example 5.5: Results via convex and non-convex optimization

§ 5.4 Concluding remarks

While there are no known analytical solutions to general multiple objective control problems, many problems can be effectively solved by convex optimization. We have demonstrated, by solving two robust control problems, that the cutting-plane based solver EMCP is the most efficient among constr, minimax, KCP, and EMCP. We have also shown that the efficiency of the solution via convex optimization depends on the choice of basis functions in the Q-parametrization. However, for most control design problems, high-order controllers generally result. Therefore further study is needed on choosing good basis functions.

To get a low-order controller, a non-convex optimization procedure is proposed for multiple objective design for SISO systems. The key, in terms of controller complexity, is a nonlinear approximation of the transfer function Q in the Q-parametrization. To improve the solution efficiency, three initial-guess-updating schemes are proposed. Two examples illustrate the effectiveness of the proposed techniques. They seem to indicate that the approximate solution is very close to at least a local minimum. In comparison to other methods, especially the convex optimization method, the design procedure presented here has the advantage of directly yielding low-order controllers. The extension of the non-convex optimization procedure to MIMO systems is straightforward.
However, a large number of optimization parameters may cause some computational problems: slow convergence and local optima, as we experienced in the controller design for teleoperation systems in the next chapter.

Chapter 6 Teleoperation Controller Design Problem

In this chapter, we first formulate the robust controller design problem for teleoperation systems as a constrained multiple objective optimization problem, and then we demonstrate the effectiveness of the proposed methodology, with simulations and experiments, through a controller design example for a motion-scaling system.

§ 6.1 Introduction

A typical teleoperation system consists of five interacting subsystems: human operator, master manipulator, controller, slave manipulator and environment, as shown in Fig. 6.1. For simplicity, and to a first approximation, the blocks in Fig. 6.1 are usually modeled as LTI systems, in which the dynamics of the force transmission and position responses can be mathematically characterized by a set of network functions. This allows the designer to draw upon well-developed linear systems and network theory for the analysis and synthesis of the controller. For example, the teleoperation system in Fig. 6.1 can be modeled as an LTI 2n-port network, illustrated in Fig. 6.2, in which the master, controller and slave are grouped into one block called the teleoperator MCS. Here, v_m is the master velocity, v_s the slave velocity, f_h the force that the operator applies to the master, f_e the force that the environment applies to the slave, and the operator hand and environment impedances are modeled, respectively, by Z_h and Z_e. Usually, continuous contact is assumed between the manipulator and its environment, so v_h = v_m and v_e = v_s.

Figure 6.1. General Teleoperation System

Figure 6.2.
2n-port representation of a teleoperation system

The teleoperation controller should be designed with the goal of achieving the best possible performance, normally termed transparency, while maintaining stability when coupled to uncertain environments, in the possible presence of time delays, disturbances, and measurement noise. Therefore, the teleoperation controller design problem involves a compromise between performance and robust stability. It is challenging mainly due to the large uncertainties in operator and environment impedances that have to be accommodated, and due to communication delays.

The objective of this thesis work is to design a teleoperation controller that can achieve an appropriately defined measure of performance, i.e., transparency, while maintaining system stability against any passive environment. First, using the four-channel H∞-control framework proposed by Yan and Salcudean in [44], we define two transparency measures. One is defined by the closed-loop frequency responses of the kinematic correspondence error and the force tracking error [99]. The other is defined as the admittance matrix gap between the designed teleoperator and a proposed ideal teleoperator [100]. Then the stability constraints of the designed teleoperator are characterized by using the scattering matrix, for any passive operator and environment impedances, or the transmitted admittance to the environment, for a fixed operator impedance and any passive environment. After parametrizing all stabilizing controllers via the Youla parametrization, the controller design problems are cast as multiple objective optimization problems. For the case in which the operator impedance is assumed to be known, the formulated controller design problems are shown to be convex.
Therefore they are numerically solvable, and the limit of achievable transparency, and thus the exact form of the tradeoff between transparency and stability robustness, can be obtained.

This chapter is structured in the following way. In the next section, we review some basic passivity concepts and stability conditions for teleoperation systems. In Section 3, an ideal teleoperator is proposed for scaled teleoperation. In Section 4, the teleoperation controller design problems are formulated as multiple objective optimization problems by defining transparency measures and stability constraints. In Section 5, we demonstrate the effectiveness of the proposed robust controller design methodology by treating the design of a controller for a one-degree-of-freedom (1-DOF) force-reflecting and motion-scaling teleoperation system. The tradeoff curves between transparency and robustness are displayed, and the results of both simulations and experiments are presented. Some concluding remarks are included in the final section.

§ 6.2 Background on Stability and Passivity

In this section, we review some basic concepts of passivity and stability conditions for teleoperation systems. These conditions will provide useful design constraints for the development of robust controllers for teleoperation.

Definition 6.1 [95]: For a linear time-invariant (LTI) n-port network as shown in Fig. 6.3, the impedance matrix Z is defined as the map from v to f by f = Zv; the admittance matrix Y as the map from f to v by v = Yf; and the scattering matrix S as the map from the input wave a = (f + v)/2 to the output wave b = (f − v)/2, i.e., satisfying the equation b = Sa.

Figure 6.3. An n-port network

These matrices are interrelated by

S = (I − Y)(I + Y)^{-1} = (Z − I)(Z + I)^{-1}.  (6.1)

Theorem 6.1 [101]: (a) An LTI n-port as shown in Fig.
6.3 is strictly passive if and only if the matrix criterion below holds for some δ > 0:

Y(jω) + Y*(jω) ≥ δI, ∀ω ∈ R.  (6.2)

(b) An LTI n-port is passive if and only if criterion (6.2) holds for δ = 0.

Notice that if n = 1, condition (6.2) reduces to

Re[Y(jω)] ≥ δ/2.  (6.3)

Definition 6.2 [58]: The ν-index, also referred to as the passivity distance, is defined as the distance of a stable LTI system to strict passivity. For the one-port network in Fig. 6.4,

ν = − inf_ω {Re[Y(jω)]}.  (6.4)

Theorem 6.2 [102]: Consider the network in Fig. 6.4, in which an LTI one-port with admittance Y is coupled to an impedance Z. This network will be stable for every strictly passive Z if and only if the one-port is itself passive, i.e.,

Re[Y(jω)] ≥ 0, ∀ω ∈ R.  (6.5)

Figure 6.4. A representation of an LTI one-port network, Y, coupled to Z.

Theorem 6.3 [103]: Consider the bilateral system shown in Fig. 6.2. This teleoperation system will be stable for every pair of strictly passive Z_h and Z_e if and only if the scattering matrix

S_t = [s_11  s_12; s_21  s_22]  (6.6)

of the 2n-port teleoperator MCS has no poles in the closed right half plane, and moreover satisfies

sup_ω {μ_Δ(S_t(jω))} = sup_ω inf_{β>0} σ̄([s_11  β s_12; s_21/β  s_22]) < 1.  (6.7)

Here, σ̄ denotes the maximum singular value, and μ_Δ denotes the structured singular value against the block structure

Δ = [S_h  0; 0  S_e],  (6.8)

where S_h and S_e are the scattering matrices of the strictly passive Z_h and Z_e, respectively.

Remarks:

(1). The robustness criterion for the 2n-port teleoperation system in Theorem 6.3 is based on the finding that the system can be transformed into a structured uncertainty problem by using the scattering matrix [103], which can be addressed by considering the structured singular value [104].
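The conversions in (6.1) and the passivity tests (6.2)–(6.5) are straightforward to check numerically. The following sketch (an illustration added here, not part of the thesis, which used Matlab; the one-port admittance is an arbitrary mass-damper example) verifies that a passive one-port has scattering gain no larger than one:

```python
import numpy as np

def scattering(Y):
    """Scattering matrix S = (I - Y)(I + Y)^{-1} of eq. (6.1)."""
    I = np.eye(Y.shape[0])
    return (I - Y) @ np.linalg.inv(I + Y)

def is_passive(Y, tol=1e-9):
    """Pointwise passivity test Y(jw) + Y*(jw) >= 0 (eq. (6.2) with delta = 0)."""
    return np.min(np.linalg.eigvalsh(Y + Y.conj().T)) >= -tol

# One-port example: Y(jw) = 1/(m*jw + b), a mass-damper admittance at w = 2 rad/s
m, b, w = 1.0, 3.0, 2.0
Y = np.array([[1.0 / (m * 1j * w + b)]])
S = scattering(Y)
print(is_passive(Y), abs(S[0, 0]) <= 1.0)   # -> True True
```

The scalar case reproduces the equivalence used later in Remark (2): Re[Y(jω)] ≥ 0 exactly when |(1 − Y)/(1 + Y)| ≤ 1.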
Here we point out that this robustness criterion can also be derived from network theory on absolute stability [105, 95].

(2). When the hand impedance Z_h is fixed, Fig. 6.2 reduces to a one-port network as in Fig. 6.4. Here we point out that the stability criterion for the one-port network in Theorem 6.2 can also be obtained by using the structured singular value of its scattering matrix. For the one-port network, the scattering matrix now becomes S_t = (1 − Y)(1 + Y)^{-1}, and the uncertainty structure is only one block, Δ = S_e, with ||S_e||_∞ ≤ 1. As shown in [104], the coupled network is stable if and only if μ_Δ(S_t) < 1, which is equivalent to |(1 − Y(jω))/(1 + Y(jω))| < 1, ∀ω ∈ R, i.e., Re[Y(jω)] > 0, ∀ω ∈ R, as shown in (6.5).

§ 6.3 An Ideal Teleoperator

In this section, we propose an ideal teleoperator that has good transparency and robust stability. We shall see in the next section that this teleoperator can be used as a reference model to aid the design of controllers.

An ideal teleoperator is one that provides complete transparency of the man-machine interface, such that the operator has the perception of working directly on the task environment. In order to achieve this, for non-scaled teleoperation, the effort and impedance of the operator port should be identical to the effort and impedance of the environment port and vice versa [53, 45], i.e., v_m = v_s, f_h = f_e. This can be represented by an infinitely stiff and weightless mechanical connection between the master and the slave [106], which cannot be physically realized. For scaled teleoperation, an ideal teleoperator is proposed here and is modeled by a two-port network as illustrated in Fig. 6.5.
Instead of eliminating the dynamics of the teleoperator and realizing the ideal response defined by v_m = n_p v_s, f_h = n_f f_e, where n_p and n_f are constant scaling ratios of position and force respectively, a passive tool represented by impedance Z_m is used by the operator, as similarly done in [107]. The end-effector force is multiplied by n_f and directly fed forward to the operator's hand so that the operator can get a feel of the task, and the motion command from the operator is divided by n_p. For micro-manipulation, n_f, n_p > 1. There are instances in which n_f, n_p = 1 (passive tool) or n_f, n_p < 1, i.e., manipulation at a large scale. Here Z_th denotes the impedance transmitted to the hand, and Z_te the impedance transmitted to the environment.

Figure 6.5. Two-port representation of ideal scaled teleoperation

The dynamics of this MCS teleoperator block are given by:

v_m = y_m (f_h + n_f f_e),   v_s = v_m / n_p,  (6.9)

where y_m = 1/Z_m. Therefore, the ideal MCS teleoperator block can also be represented by the following admittance matrix

Y_I = [1   n_f; 1/n_p   n_f/n_p] y_m.  (6.10)

When n_f = n_p = 1, this corresponds to the case in which the operator manipulates the task environment directly with the assistance of a tool with impedance Z_m. The important properties of this MCS teleoperator block are summarized in the following theorem.

Theorem 6.4: The proposed ideal teleoperator has the following properties:

(i). The transmitted impedance to the hand is

Z_th = Z_m + (n_f/n_p) Z_e,  (6.11)

and the transmitted impedance to the environment is

Z_te = (n_p/n_f) Z_m + (n_p/n_f) Z_h.  (6.12)

(ii). If y_m is passive, then Y_I is passive if and only if

n_f(jω) n_p*(jω) = 1, ∀ω ∈ R.  (6.13)

(iii). (a). If y_m is passive, then

sup_ω {μ_Δ(S_I(jω))} = 1  (6.14)

for any constant, positive and real scalings n_f and n_p.

(b). If (1 + n_f/n_p)
y_m is passive, then sup_ω {μ_Δ(S_I(jω))} = 1 for any frequency-dependent scalings n_f(jω) and n_p(jω) satisfying

n_f(jω) n_p*(jω) ∈ R, ∀ω ∈ R.  (6.15)

Here, S_I is the scattering matrix of the teleoperator MCS, and the block structure is given by Δ = [S_h  0; 0  S_e], with ||S_h||_∞ ≤ 1 and ||S_e||_∞ ≤ 1.

(iv). If |y_m(jω)| → ∞, ∀ω ∈ R, then

S_I = (1/(n_f + n_p)) [n_p − n_f   2 n_f n_p; 2   n_f − n_p].  (6.16)

Proof: First, (i) is obvious. For example, the first part can be proven by transforming Fig. 6.5 into Fig. 6.6.

Figure 6.6. Another representation of ideal scaled teleoperation

Second, we prove (ii). For each ω ∈ R, the eigenvalues λ_{1,2}(ω) of the Hermitian matrix Y_I(jω) + Y_I*(jω) are given by

λ_{1,2}(ω) = Re{(1 + n_f/n_p) y_m(jω)} ± sqrt( [Re{(1 − n_f/n_p) y_m(jω)}]² + |n_f(jω) y_m(jω) + y_m*(jω)/n_p*(jω)|² ).  (6.17)

The "if" part can be easily verified by substituting n_f(jω) n_p*(jω) = 1 into (6.17): both eigenvalues then reduce to nonnegative multiples of Re{y_m(jω)}. The "only if" part can be proven by contradiction: we just need to notice that if n_f(jω) n_p*(jω) ≠ 1, the square-root term in (6.17) is bigger than Re{(1 + n_f/n_p) y_m(jω)} for some ω ∈ R. This would imply that at least one of the eigenvalues of Y_I(jω) + Y_I*(jω) is negative.

Third, we prove (iii). The scattering matrix of the ideal teleoperator, S_I, can be obtained by using formula (6.1) with the admittance matrix (6.10); its entries are denoted s_I^{11}, s_I^{12}, s_I^{21}, s_I^{22}.  (6.18)

For each ω ∈ R, we compute μ_Δ(S_I(jω)). Since S_I has two block-structured uncertainties,

μ_Δ(S_I) = inf_{β>0} σ̄(S_{I,β}),  (6.19)

where σ̄ denotes the maximum singular value and

S_{I,β} = [s_I^{11}   β s_I^{12}; s_I^{21}/β   s_I^{22}].  (6.20)

To get the singular values of S_{I,β}, we calculate the eigenvalues λ_i (i = 1, 2) of the matrix S_{I,β}* S_{I,β}; their closed-form expressions (6.21), in which the β-dependent terms are collected in a factor A(β) (6.22), involve only A(β), Re{y_m} and |y_m|.
Since σ̄(S_{I,β}) = max(√λ_1, √λ_2), and from equations (6.21) and (6.22), we can see that minimizing σ̄(S_{I,β}) is equivalent to minimizing A(β), and the infimum in (6.19) is achieved when

β = 1/√(|n_f n_p|).  (6.23)

Therefore, for any constant and positive scalings n_f and n_p, we have

μ_Δ(S_I) = inf_{β>0} σ̄(S_{I,β}) = σ̄(S_{I, 1/√(n_f n_p)}),  (6.24)

and for any frequency-dependent scalings n_f(jω) and n_p(jω) satisfying

n_f(jω) n_p*(jω) ∈ R, ∀ω ∈ R,  (6.25)

we also have

μ_Δ(S_I) = σ̄(S_{I, 1/√(|n_f n_p|)}).  (6.26)

Finally, (iv) can be obtained from (6.18) by letting |y_m(jω)| → ∞, ∀ω ∈ R.

Remarks:

(1). Clearly, as shown in (i) of Theorem 6.4, with this ideal teleoperator the operator feels a scaled version of the environment impedance plus the tool impedance. Similarly, the impedance transmitted to the environment is a scaled version of the hand impedance plus a scaled version of the tool impedance. It will be assumed later that a teleoperator with this property has ideal transparency. A transparency measure will be introduced, based on the proposed teleoperator model, in the next section.

(2). Although this teleoperator is not generally passive (see Theorem 6.4, (ii)), it is robustly stable for any strictly passive operator and environment impedances, for any constant scalings (see Theorem 6.4, (iii).(a)) and even for some frequency-dependent scalings (see Theorem 6.4, (iii).(b)), since it satisfies the stability criterion in (6.7).

(3). From (iv) of Theorem 6.4, we can see that the scattering matrix of this teleoperator approaches that of teleoperation with the ideal response in [43] as the tool's dynamics disappear, i.e., v_m → n_p v_s and f_h → n_f f_e as |Z_m(jω)| → 0, ∀ω ∈ R.
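The scaling step of the proof can be checked numerically for the limiting scattering matrix in (6.16): with the optimal scaling β = 1/√(n_f n_p) from (6.23), the scaled matrix of (6.20) has maximum singular value exactly 1 for any positive constant scalings, consistent with (6.14). A small numpy check (an illustration added here, not thesis code; the test scaling pairs are arbitrary):

```python
import numpy as np

def S_limit(nf, np_):
    """Limiting scattering matrix of the ideal teleoperator as |y_m| -> inf (eq. 6.16)."""
    return np.array([[np_ - nf, 2.0 * nf * np_],
                     [2.0,      nf - np_]]) / (nf + np_)

def scaled_sigma_max(S, beta):
    """sigma_max of the scaled matrix [s11, beta*s12; s21/beta, s22] of eq. (6.20)."""
    Sb = np.diag([1.0, 1.0 / beta]) @ S @ np.diag([1.0, beta])
    return np.linalg.svd(Sb, compute_uv=False)[0]

for nf, np_ in [(5.0, 5.0), (50.0, 10.0), (20.0, 3.0)]:
    beta = 1.0 / np.sqrt(nf * np_)          # optimal scaling, eq. (6.23)
    print(round(scaled_sigma_max(S_limit(nf, np_), beta), 10))   # -> 1.0 each time
```

After scaling, the matrix becomes symmetric of the form [a g; g −a] with a² + g² = 1, so both singular values equal 1, which is why the infimum in (6.19) is attained there.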
§ 6.4 Robust Controller Design

In this section, we first define the performance measures and stability constraints for teleoperation systems, and then formulate the robust teleoperation control problems as multiobjective optimization problems.

§ 6.4.1 Teleoperation controller structure

Figure 6.7. A four-channel control structure, with x_m = P_m (f_h + f_m) and x_s = P_s (f_e + f_s).

A general four-channel structure in "admittance" form is used for controller design, as shown in Fig. 6.7. Laplace transforms and transfer function notation are assumed throughout. For simplicity, we consider only the one-degree-of-freedom case. In Fig. 6.7, x_m = v_m/s is the master position, x_s = v_s/s the slave position, P_m the master plant, and P_s the slave plant. The hand force f_h is decomposed into an active component f_ha and a passive component −H x_m, and similarly, the environment force f_e is decomposed into f_ea and −E x_s. Note that since H and E are the maps from positions to forces, the hand impedance Z_h and the environment impedance Z_e are given by, respectively, Z_h = H/s and Z_e = E/s. The master dynamics P_m can also be expressed as the impedance Z_m = 1/(sP_m) or the admittance y_m = 1/Z_m = sP_m. We can do the same with the slave too, i.e., Z_s = 1/(sP_s) or y_s = sP_s.

The controller K, which is a 2 by 4 matrix of real-rational transfer functions, takes force and position measurements from both master and slave, and generates the actuator driving forces f_m on the master and f_s on the slave. As shown in [45, 44], this four-channel structure can provide sufficient freedom to shape various closed-loop frequency responses of interest. We shall show later that this controller structure can also realize scaled teleoperation with good transparency while maintaining stability.
In this work, for simplicity, we do not consider the master and slave modeling errors, the measurement noise, the control disturbances, or the time delays.

§ 6.4.2 Performance measures and stability constraints

Figure 6.8. General feedback system

The model presented in Fig. 6.7 can be transformed into the basic configuration of the general feedback system shown in Chapter 2, which is re-plotted in Fig. 6.8. As discussed in Chapter 2, this setup can be used to define multiple performance specifications and stability constraints, and therefore to find a realizable controller K which stabilizes the augmented plant G. To define performance specifications and stability constraints, we first look at the general case, where both the hand and the environment impedances are unknown but strictly passive. Then we look at the special case where the hand impedance is fixed and the environment impedance is strictly passive.

In the general case, we choose w = [f_h  f_e]^T, u = [f_m  f_s]^T, and y = [f_h  f_e  x_m  x_s]^T. The output vector signal z should be selected so as to characterize the performance specifications and stability constraints. There is a great deal of freedom in selecting the output vector z. To realize scaled teleoperation, as an example, a possible set of signals is chosen as follows:

(1). z_1 = W_f (f_m − n_f f_e), the force tracking error for the frequency range of interest. The master actuator force should track the environment force, scaled by a specified force-scaling ratio n_f. Since force tracking is most important in the low-frequency range, W_f is chosen to be a low-pass filter.

(2). z_2 = W_p (x_s − x_m/n_p), the kinematic correspondence error. The slave motion should track the master motion scaled by a specified motion-scaling ratio n_p. Again, W_p is chosen to be a low-pass filter.

(3). z_3 = [v_m  v_s]^T, an unweighted output signal.
This may be used to define transparency and robust stability constraints, as we shall see later. Then the generalized plant can be expressed as

[z; y] = [W G_zw   W G_zu; G_yw   G_yu] [w; u],  (6.27)

where

G_zw = [0   −n_f; −P_m/n_p   P_s; sP_m   0; 0   sP_s],   G_zu = [1   0; −P_m/n_p   P_s; sP_m   0; 0   sP_s],  (6.28)

G_yw = [1   0; 0   1; P_m   0; 0   P_s],   G_yu = [0   0; 0   0; P_m   0; 0   P_s].  (6.29)

Here, for notational convenience, W = diag(W_f, W_p, 1, 1).

Let T_1, T_2 and T_3 be the maps from w to z_1, z_2, and z_3 respectively, which are closed-loop transfer functions. Here we propose two performance measures. We will use the H∞-norm to reflect the error sizes of the transfer function responses. First, we define performance specifications directly according to the above setup. Clearly, to have good force tracking and good kinematic correspondence between the master and the slave, both ||T_1(K)||_∞ and ||T_2(K)||_∞ should be minimized. Therefore a performance measure, denoted by μ_P, can be defined as

μ_P(K) = max{||T_1(K)||_∞, ||T_2(K)||_∞}.  (6.30)

Second, we can use the ideal teleoperator MCS proposed in the previous section as a reference model. Note that Y_T is the map from [f_h; f_e] to [v_m; v_s], and therefore it is the admittance matrix of the teleoperator to be designed. As shown in Theorem 6.4 of the last section, the ideal teleoperator provides good transparency. Therefore, to get good transparency, Y_T should be designed to match the ideal model Y_I in (6.10) as closely as possible. Hence, a transparency measure, denoted by μ_T, can be defined as the gap between the teleoperator to be designed and the proposed ideal teleoperator, quantified by the following weighted H∞-norm:

μ_T(K) = ||W_T [Y_T(K) − Y_I]||_∞,  (6.31)

where W_T is a weighting matrix that reflects the frequency bands of interest.
The smaller the value of μ_T, the better the transparency of the teleoperation system.

To get the stability constraint for the teleoperator, as discussed before, we take the teleoperator as a two-port network. From its admittance matrix Y_T, using (6.1), we have its scattering matrix S_T = (Y_T − I)(Y_T + I)^{-1}. Therefore, to ensure stability of the two-port network coupled to any strictly passive hand and environment impedances, by using Theorem 6.3, the following absolute stability constraint should be satisfied:

sup_ω μ_Δ(S_T(K)(jω)) < 1.  (6.32)

In the special case where the hand impedance is fixed and the environment impedance is strictly passive, the signals u, y, and z are selected as in the general case. Since we assumed that the hand impedance H (= sZ_h) is fixed, we define w as w = [f_ha  f_e]^T. Let W_H = diag(W_f, W_p, 1, 1) and P_H = P_m/(1 + P_m H). Then the generalized plant becomes

[z; y] = [W_H G_zw   W_H G_zu; G_yw   G_yu] [w; u],  (6.33)

where

G_zw = [0   −n_f; −P_H/n_p   P_s; sP_H   0; 0   sP_s],   G_zu = [1   0; −P_H/n_p   P_s; sP_H   0; 0   sP_s],  (6.34)

G_yw = [1 − H P_H   0; 0   1; P_H   0; 0   P_s],   G_yu = [−H P_H   0; 0   0; P_H   0; 0   P_s].  (6.35)

The performance measure and the transparency measure take, respectively, the same forms as in (6.30) and (6.31). However, to differentiate this special case from the general case discussed above, we denote them, respectively, as

μ_{P,H}(K) = max{||T_{1,H}(K)||_∞, ||T_{2,H}(K)||_∞}  (6.36)

and

μ_{T,H}(K) = ||W_{T,H} [Y_{T,H}(K) − Y_{I,H}]||_∞.  (6.37)

The teleoperator, in this case, can be taken as a one-port network. Let Y_te be the admittance transmitted to the environment, which is the map from f_e to v_s. According to Theorem 6.2, to maintain stability of the teleoperator coupled to any strictly passive environment impedance, the following stability constraint should be satisfied:

inf_ω {Re[Y_te(K)(jω)]} ≥ 0.  (6.38)

Remarks:

(1).
As shown in [108, 109], other useful signals could be added to the output vector z. For example, z_4 = W_m f_m, where W_m is chosen to be high-pass. This could be used to account for master actuator saturation at high frequencies.

(2). From the definitions of μ_P in (6.30), μ_T in (6.31), and Y_I in (6.10), it is easy to verify that μ_P = 0 ⟺ μ_T = 0. This equivalence, in the optimal case, shows that perfect force tracking and kinematic correspondence would imply a perfect match between the designed teleoperator and the proposed one, and vice versa. Therefore, we can use either μ_P or μ_T as an optimization objective in the controller design.

(3). In practice, the variation of the operator hand impedance is relatively small compared with the drastic changes of the environment impedance. Therefore, to design a less conservative controller for teleoperation, the stability constraint in (6.38) can be used. The advantage of using (6.38) is that it is convex in the design parameters, as we will show next.

§ 6.4.3 Controller design problem formulations

From the above discussion, it is clear that teleoperation controller design involves tradeoffs between performance and robust stability. The general problem of designing a robustly stable controller for the scaled teleoperation system can be formulated as a multiobjective optimization problem as follows:

(P6-1):   min_{stabilizing K}  μ_P(K)  (6.39)
          subject to  sup_ω μ_Δ(S_T(K)(jω)) < 1,  (6.40)

or

(P6-2):   min_{stabilizing K}  μ_T(K)  (6.41)
          subject to  sup_ω μ_Δ(S_T(K)(jω)) < 1.  (6.42)

For the special case where the hand impedance is fixed, the robust teleoperation controller design problem can be cast as

(P6-3):   min_{stabilizing K}  μ_{P,H}(K)  (6.43)
          subject to  inf_ω {Re[Y_te(K)(jω)]} ≥ −ν,  (6.44)

or

(P6-4):   min_{stabilizing K}  μ_{T,H}(K)  (6.45)
          subject to  inf_ω {Re[Y_te(K)(jω)]} ≥ −ν.  (6.46)
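The one-port constraint of (6.44) is easy to evaluate on a frequency grid once the transmitted admittance is available as a rational transfer function. A minimal sampled check (an illustrative Python sketch, not thesis code; the transfer function below is hypothetical, not the designed Y_te):

```python
import numpy as np

def freq_resp(num, den, w):
    """Evaluate a real-rational transfer function at s = jw (num/den: polynomial coefficients)."""
    s = 1j * w
    return np.polyval(num, s) / np.polyval(den, s)

def passivity_margin(num, den, w_grid):
    """Sampled version of the constraint (6.44): min over the grid of Re[Y_te(jw)]."""
    return min(freq_resp(num, den, w).real for w in w_grid)

# Hypothetical Y_te(s) = (s + 2)/(s^2 + 3s + 4), for illustration only
margin = passivity_margin([1.0, 2.0], [1.0, 3.0, 4.0], np.logspace(-2, 3, 500))
print(margin > 0.0)   # -> True: constraint (6.44) holds on the grid even for nu = 0
```

A grid check of this kind is only a necessary condition; the actual design procedure enforces the constraint over all frequencies.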
Here, the positive parameter ν is used to ensure a given distance to passivity, defined in (6.4), and determines the degree of conservatism of the design.

As discussed in Chapter 2, the Youla parametrization of the stabilizing controllers K [7] makes the closed-loop transfer matrices T_1, T_2, Y_T, T_{1,H}, T_{2,H}, Y_{T,H}, and Y_te affine functions of Q ∈ RH_∞^{2×4}. Since the scattering operator S_T is a bilinear transform of Y_T, it is not affine in Q ∈ RH_∞^{2×4}. Therefore μ_P, μ_T, μ_{P,H}, and μ_{T,H} are convex, and S_T is non-convex, in Q ∈ RH_∞^{2×4}. Hence, after the Youla parametrization, the general teleoperation control design problems (P6-1) in (6.39-6.40) and (P6-2) in (6.41-6.42) are non-convex. Only in the special case where the hand impedance is fixed can the controller design problems (P6-3) in (6.43-6.44) and (P6-4) in (6.45-6.46) be cast as convex optimization problems in Q ∈ RH_∞^{2×4}:

(P6-5):   min_{Q ∈ RH_∞^{2×4}}  μ_{P,H}(Q)  (6.47)
          subject to  inf_ω {Re[Y_te(Q)(jω)]} ≥ −ν,  (6.48)

and

(P6-6):   min_{Q ∈ RH_∞^{2×4}}  μ_{T,H}(Q)  (6.49)
          subject to  inf_ω {Re[Y_te(Q)(jω)]} ≥ −ν.  (6.50)

§ 6.4.4 Numerical solutions

In this thesis, we only solve problems (P6-5) and (P6-6). We tried both the convex optimization procedure and the non-convex optimization procedure presented in Chapter 5. We found that the latter often produced slow convergence and local optima due to the large number of parameters and the truncation error in the gradient calculation by finite-difference approximation. Even though the cutting-plane-based solver EMCP produced high-order controllers, it was found to be more efficient and reliable. Therefore we will use the cutting-plane-based solver EMCP, developed in Chapter 5, in the design example shown next.
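Before the finite-dimensional approximation is introduced, its mechanics can be sketched as follows: the Youla parameter Q is expanded on fixed stable basis functions, so every closed-loop quantity above becomes affine in the real coefficients. The sketch below is illustrative Python (not thesis code); the all-pass basis form and the scalar coefficients X_i are simplifying assumptions (the thesis uses 2×4 real matrices X_i):

```python
import numpy as np

def allpass_basis(w, N=4, p=1.0):
    """Frequency responses Q_i(jw) of an all-pass basis Q_i = ((p - s)/(p + s))^(N-i),
    i = 1..N. The exponent convention is an assumption; any fixed p > 0 gives |Q_i(jw)| = 1."""
    q = (p - 1j * w) / (p + 1j * w)
    return [q ** (N - i) for i in range(1, N + 1)]

# Q(X)(jw) = sum_i X_i Q_i(jw) is affine in the real parameters X_i;
# convex objectives/constraints in Q therefore stay convex in X.
basis = allpass_basis(2.0)
X = [0.5, -1.0, 2.0, 0.25]
Qw = sum(x * q for x, q in zip(X, basis))
print(all(abs(abs(q) - 1.0) < 1e-12 for q in basis))   # -> True (all-pass)
```

Because the map X ↦ Q(X)(jω) is linear at every frequency, a cutting-plane solver such as the EMCP solver mentioned above can work directly with frequency-domain subgradients.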
To produce a finite-dimensional approximation of (P6-5) and (P6-6), as discussed in Chapter 2 and Chapter 5, Q ∈ RH_∞^{2×4} is approximated as a linear combination of fixed scalar stable basis functions Q_i ∈ RH_∞, as in

Q(X_1, X_2, ..., X_N) = Σ_{i=1}^{N} X_i Q_i,   X_i ∈ R^{2×4},  (6.51)

where the N real-valued matrices X_i (i = 1, 2, ..., N) are the design parameters. For example, the basis functions can be chosen as all-pass functions Q_i = ((p − s)/(p + s))^{N−i} for some fixed p with positive real part, as in [26].

§ 6.5 Design Example

In this section, we present an example to show the tradeoff between performance and stability robustness, and to demonstrate the effectiveness of the methodology developed above, through both simulations and experiments. The design problem considered here concerns a prototype telerobotic system for use in microsurgery experiments [108, 44]. In this system, two magnetically levitated wrists, each having a stator and a levitated floater, are used as the master and the slave. Both force and position need to be scaled down from the operator's hand to the task. The transfer functions mapping force to position for the master P_m(s) and slave P_s(s) are modeled, respectively, as

P_m(s) = 1/(1.422 s² + 42.66 s + 383.94)  (6.52)

and

P_s(s) = 1/(0.035 s² + 1.0 s + 9.6101).  (6.53)

The design here is only for motion along a single DOF (the vertical z-axis). As discussed in the previous section, this teleoperation controller can be obtained by solving either (P6-5) in (6.47-6.48) or (P6-6) in (6.49-6.50), according to the designer's specifications.

§ 6.5.1 Identification of the hand impedance

To use the proposed controller design method, we need to identify the operator's hand impedance by using the master wrist. A JR3 force/torque sensor was mounted on the top of the master wrist.
As suggested in [55, 57, 44], the operator's hand impedance can be modeled as a constant mass-spring-damper system, i.e., Z_h(s) = m_h s + b_h + k_h/s, or H(s) = s Z_h(s) = m_h s² + b_h s + k_h, and the operator hand force f_h is considered to possess an active exogenous component f_ha and a passive feedback component −H x_m dependent on the hand impedance. In the identification process, the operator was grasping the wrist's handle with his right hand without imposing any active force on the wrist (f_ha = 0), and was following the wrist's motion. The master wrist was driven by a white-noise signal. Experimental data of the hand force f_h and the master wrist's position x_m were collected. We took x_m as the output signal and f_h as the input signal, so that x_m = (1/H) f_h. Since the hand is assumed to be a second-order system, the model ARX(2,2) was chosen to describe the relation between the two signals:

x_m(t) + a_1 x_m(t − T_s) + a_2 x_m(t − 2T_s) = b_1 f_h(t − T_s) + b_2 f_h(t − 2T_s),  (6.54)

where a_1, a_2, b_1 and b_2 are unknown parameters to be identified, and T_s = 0.002 seconds is the sampling interval. By using the Identification Toolbox [110], we obtained the parameters of the model ARX(2,2). Based on this estimated model, the hand impedance model H(s), the map from x_m to f_h, was obtained in the frequency domain as

H(s) = 0.26 s² + 26.23 s + 170.83 (N/m).  (6.55)

We shall use this model in the following control system analysis and design.

§ 6.5.2 Tradeoff between performance and stability robustness

The tradeoffs between performance and stability robustness can be displayed by solving either (P6-5) or (P6-6) with different distances to passivity ν.
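The ARX fit of § 6.5.1 amounts to a linear least-squares regression. The following sketch (illustrative Python/numpy, not the Matlab Identification Toolbox used in the thesis; the parameter values below are synthetic) shows the regression for the ARX(2,2) structure (6.54):

```python
import numpy as np

def fit_arx22(u, y):
    """Least-squares fit of the ARX(2,2) model (6.54):
    y[t] + a1*y[t-1] + a2*y[t-2] = b1*u[t-1] + b2*u[t-2]."""
    Phi = np.column_stack([-y[1:-1], -y[:-2], u[1:-1], u[:-2]])
    theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
    return theta   # [a1, a2, b1, b2]

# Synthetic sanity check: generate noise-free data from known parameters, then recover them
rng = np.random.default_rng(0)
a1, a2, b1, b2 = -1.5, 0.7, 1.0, 0.5
u = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(2, 2000):
    y[t] = -a1 * y[t-1] - a2 * y[t-2] + b1 * u[t-1] + b2 * u[t-2]
print(np.allclose(fit_arx22(u, y), [a1, a2, b1, b2], atol=1e-6))   # -> True
```

With real measurement noise the same regression gives the best least-squares fit rather than an exact recovery; the continuous-time model such as (6.55) is then obtained by converting the identified discrete-time parameters.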
Here, by using the linear approximation of Q as in (6.51) with N = 20 and p = 1, and using the developed EMCP solver, we solved the convex program (P6-5) with different ν, and with force and motion scaling ratios n_f = 5, 10, 20 and n_p = 5, 10, 20. The weighting functions in (P6-5) were selected as

W_f(s) = ω_f/(s + ω_f),   ω_f = 6π rad/sec,  (6.56)

W_p(s) = ω_p/(s + ω_p),   ω_p = 5π rad/sec.  (6.57)

These reflect the frequency bandwidths of force tracking and kinematic correspondence, and are low-pass filters. For example, for the kinematic correspondence, we want the error to be low for frequencies below 5π rad/sec. The performance vs. robust stability tradeoff curves are plotted in Figure 6.9. As expected, the performance measure decreases as the allowed distance to passivity ν increases; also, higher scaling leads to worse performance. We can also see the large increase in the performance measure imposed by the passivity requirement. For example, in the case of n_p = n_f = 5, with the passivity constraint (ν = 0), μ_{P,H} = 0.0265, and without any passivity constraint (ν = ∞), μ_{P,H} = 0.0112.

Figure 6.9. Performance vs distance to passivity for three values of force and motion scaling (n_p = n_f = 5, 10, 20)

§ 6.5.3 Simulation results

In the simulations presented here and in the following experiments, we specified the force-scaling ratio and the motion-scaling ratio as n_f = 50 and n_p = 10, respectively. The weighting matrix was selected as

W_{T,H}(s) = ω_y/(s + ω_y) I_{2×2},   ω_y = 5π rad/sec.  (6.58)

The robust controller was obtained by solving (P6-6) for ν = 0:

min_{Q ∈ RH_∞^{2×4}}  μ_{T,H}(Q)
subject to  inf_ω {Re[Y_te(Q)(jω)]} ≥ 0,  (6.59)

with the linear approximation of Q in (6.51). We also took N = 20 and p = 1 in (6.51). The resulting controller is of high order and was reduced to an 8th-order system by using the balanced model reduction technique. With the obtained controller, the maximum
The resulting controller is of high order and was reduced to an 8th-order system by using the balanced model reduction technique. With the obtained controller, the maximum 104  Chapter 6. Teleoperation Controller Design Problem  singular value frequency responses of the admittance matrices YI,R and YT,H for the ideal teleoperator and the designed one, respectively, are shown in Fig. 6.10, and the Nyquist plot of the transmitted admittance to the environment, Yt , is depicted in Fig. e  6.11.  cn E 3 E  Frequency (rad/sec)  Figure 6.10. Maximum singular value frequency responses of Yj jj (solid line) and YT,H (circle) t  105  Chapter 6. Teleoperation Controller Design Problem  0.4  0  0.1  0.2  0.3  0.4  0.5  0.6'  0.7  0.8  Real Axis  Figure 6.11. Nyquist plot of Y  te  Clearly, Fig. 6.10 shows that the designed teleoperator is very close to the ideal one, and therefore should have good transparency. Fig. 6.11 shows that Y  te  is positive  real and satisfies the stability criterion in (6.5), and therefore the scaling system is also robustly stable for any strictly passive environment. In order to illustrate the stability and the validity of the scaling system with the proposed controller design approach, both ; simulations and experiments were carried out. Fig.  6.12 shows the simulation diagram in Simulink [111] that we used.  To  be consistent with the experiments in the next section, the master was modeled in continuous-time, and the controller, the slave and the environment were modeled in discrete-time. The dynamics of the master and the slave were simulated as in (6.52) and (6.53) respectively. Three sets of simulations were performed with the following passive environment impedances (note that E =  sZ ): e  (1) . a soft impedance E = 2s + 200, (2) . a stiff impedance E = 5s + 500, and 106  Chapter 6. Teleoperation Controller Design Problem  (3).  
(3). a time-varying impedance, which simulates free motion, then a soft environment, then a hard environment, then a soft environment again, and finally free motion again:

E = { 0,           0 ≤ t < 3 sec;
      5s + 300,    3 ≤ t < 6 sec;
      10s + 1000,  6 ≤ t < 9 sec;
      2s + 100,    9 ≤ t < 12 sec;
      0,           12 ≤ t ≤ 15 sec }.  (6.60)

Fig. 6.13, Fig. 6.14 and Fig. 6.15 show the motion-scaling and force-scaling results for each case. It can be observed that the system is always stable for the passive environments, and that both the motion and force scalings are realized. To examine the transparency of the designed teleoperator, for cases (1) and (2), we need to obtain the transmitted impedance to the hand, Z_th, defined as the map from v_m to f_h in the frequency domain. We shall use the Matlab Identification Toolbox [110] to do this. During the course of the simulations for case (1) and case (2), we collected the data of the hand force f_h and the master position x_m, as shown in Fig. 6.13 and Fig. 6.14. The system identification problem is to estimate a model of a system based on observed input-output data. We took x_m as the output signal and f_h as the input signal. We assumed that the two signals are related by a linear system, described by the model ARX(2,1):

x_m(t) + a_1 x_m(t − T_s) + a_2 x_m(t − 2T_s) = b_1 f_h(t − T_s),  (6.61)

where a_1, a_2, and b_1 are unknown parameters to be identified, and T_s = 0.002 seconds is the sampling interval. By using the Identification Toolbox, we identified the parameters of the model ARX(2,1). Based on this estimated model, the map E_th from x_m to f_h was obtained in the frequency domain, and then transformed into the estimated transmitted impedance to the hand, Ẑ_th = E_th/s. The results are summarized in Table 6.1, where, as defined before, Z_th = Z_m + (n_f/n_p) Z_e is the ideal transmitted impedance to the hand. The Bode plots of the transmitted impedances Z_th and Ẑ_th are illustrated in Fig. 6.16 and
Fig. 6.17, which clearly show very good transparency in the specified frequency range (< 5π rad/sec).

Figure 6.12. Design example: simulation diagram of a motion-scaling system
[Simulink blocks: master, controller, slave and environment with zero-order holds; logged signals: forces, positions, position error and force error]

Figure 6.13. Simulation results: motion scaling and force scaling with a soft environment E = 2s + 200.
[(a) fh; (b) xm (solid line) and 10*xs (dashdot line); (c) fm (solid line) and 50*fe (dashdot line)]

Figure 6.14. Simulation results: motion scaling and force scaling with a stiff environment E = 5s + 500.
[(a) fh; (b) xm (solid line) and 10*xs (dashdot line); (c) fm (solid line) and 50*fe (dashdot line)]

Figure 6.15. Simulation results: motion scaling and force scaling with a time-varying environment, which switches at 3, 6, 9, and 12 seconds
[(a) hand force: fh; (b) motion scaling: xm (solid line) and 10*xs (dashdot line); (c) motion tracking error: xm - 10*xs; (d) force scaling: fm (solid line) and 50*fe (dashdot line); (e) force tracking error: fm - 50*fe]

    E (= sZ_e)    Z_th                   Ẑ_th
    2s + 200      1.422s + 52.66 + …     1.63s + 60.77 + …
    5s + 500      1.422s + 67.66 + …     1.76s + 58.97 + …

Table 6.1. Simulation results: transmitted impedances to the hand

Figure 6.16. Simulation results: transmitted impedances to the hand, Z_th (solid line) and Ẑ_th (dash-dotted line), with a soft environment E = 2s + 200.

Figure 6.17. Simulation results: transmitted impedance to the hand, Z_th (solid line) and Ẑ_th (dashdot line), with a stiff environment E = 5s + 500.

§ 6.5.4 Experimental results

Experiments were performed with a UBC magnetically levitated (maglev) wrist [56] as the master manipulator, together with a slave manipulator model and a virtual environment, both of which were simulated in a SPARC 1-e processor under VxWorks, as shown in Fig. 6.18. The dynamics of the master and the slave were modeled as in (6.52) and (6.53) respectively. The virtual environments were taken to be the same ones as in the simulations in the last section. Fig. 6.19 illustrates the hardware configuration of the experimental setup. It consists of a UBC maglev wrist, a current amplifier, a SPARC 1-e processor which resides on a VME bus, as well as an analog-to-digital (A/D) board and a digital-to-analog (D/A) board. A JR3 force/torque sensor was mounted on top of the master wrist to measure the hand force.
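The ARX(2,1) fit in (6.61) reduces to ordinary least squares once the signals are stacked into a regression. The sketch below illustrates the idea on synthetic data; the coefficient values and signals are hypothetical stand-ins for the recorded x_m and f_h, not the thesis data, and plain least squares is used in place of the Identification Toolbox.

```python
import numpy as np

# ARX(2,1) model of (6.61):  x[t] + a1*x[t-1] + a2*x[t-2] = b1*f[t-1]
# Hypothetical "true" coefficients, used only to generate test data.
a1_true, a2_true, b1_true = -1.6, 0.64, 0.5

rng = np.random.default_rng(0)
N = 2000
f = rng.standard_normal(N)          # input  (stand-in for the hand force fh)
x = np.zeros(N)                     # output (stand-in for the master position xm)
for t in range(2, N):
    x[t] = -a1_true * x[t - 1] - a2_true * x[t - 2] + b1_true * f[t - 1]

# Stack the regression  x[t] = [-x[t-1], -x[t-2], f[t-1]] @ [a1, a2, b1]
Phi = np.column_stack([-x[1:-1], -x[:-2], f[1:-1]])
y = x[2:]
a1, a2, b1 = np.linalg.lstsq(Phi, y, rcond=None)[0]
print(a1, a2, b1)   # recovers the true coefficients (data are noise-free)
```

The identified model's frequency response then yields the estimated map between x_m and f_h, and hence the estimated transmitted impedance, as described in the text.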
All software development is performed on a SPARC station host, which is networked to the SPARC 1-e processor.

Figures 6.20, 6.21 and 6.22 present the motion-scaling and force-scaling results when the operator manipulates the three different virtual environments using the UBC wrist. It can be observed that the system is stable against the passive environments, and fairly good motion and force scaling are realized as specified.

As done in the simulations, the transmitted impedances to the hand, Ẑ_th, for the soft environment (case (1)) and the stiff environment (case (2)) were estimated by using the Identification Toolbox in Matlab with the collected experimental data. The obtained results are presented in Table 6.2. The good transparency of this scaling system in the specified frequency range can also be seen from the Bode plots of the transmitted impedances Z_th and Ẑ_th illustrated in Fig. 6.23 and Fig. 6.24.

Figure 6.18. Design example: experiment diagram of a motion-scaling system
[UBC wrist as master; slave manipulator model, controller and virtual environment running in the SPARC 1-e under VxWorks]

Figure 6.19. Design example: experimental setup
[UBC wrist and current amplifier; force/position signals via A/D and D/A boards on a VME bus; CPU networked over Ethernet to a SPARCstation host]

Figure 6.20. Experimental results: motion scaling and force scaling with a soft environment E = 2s + 200.
[(a) fh; (b) xm (solid line) and 10*xs (dashdot line); (c) fm (solid line) and 50*fe (dashdot line)]
Figure 6.21. Experimental results: motion scaling and force scaling with a stiff environment E = 5s + 500.
[(a) fh; (b) xm (solid line) and 10*xs (dashdot line); (c) fm (solid line) and 50*fe (dashdot line)]

Figure 6.22. Experimental results: motion scaling and force scaling with a time-varying environment, which switches at 3, 6, 9, and 12 seconds
[(a) hand force: fh; (b) motion scaling: xm (solid line) and 10*xs (dashdot line); (c) motion tracking error: xm - 10*xs; (d) force scaling: fm (solid line) and 50*fe (dashdot line); (e) force tracking error: fm - 50*fe]

    E (= sZ_e)    Z_th                   Ẑ_th
    2s + 200      1.422s + 52.66 + …     1.46s + 27.73 + …
    5s + 500      1.422s + 67.66 + …     1.65s + 13.75 + …

Table 6.2. Experimental results: transmitted impedances to the hand

Figure 6.23. Experimental results: transmitted impedances to the hand, Z_th (solid line) and Ẑ_th (dashdot line), with a soft environment E = 2s + 200.

Figure 6.24. Experimental results: transmitted impedance to the hand, Z_th (solid line) and Ẑ_th (dashdot line), with a stiff environment E = 5s + 500.

§ 6.6 Concluding Remarks

Teleoperation controller design is challenging in large part due to large uncertainties in the hand impedance and especially the environment impedance, and due to communication delays. In addition, a teleoperation system is inherently multi-input and multi-output even in the single-DOF case, and involves tradeoffs between performance and stability robustness.

In this chapter, an optimization-based robust controller design approach for scaled teleoperation systems has been proposed, to optimize transparency subject to robust stability for all passive environments. The transmitted admittance to the environment was used to define the stability criterion, which was shown to be easily incorporated into the controller design. With the four-channel control structure and a proposed ideal teleoperator, using Q-parametrization, performance and transparency measures were defined, while controllers were obtained as solutions to multiobjective convex optimization problems. By defining distance to passivity as a robustness measure, the trade-offs between performance and robustness were also displayed. A design example of a motion-scaling and force-reflecting system, with both simulations and experiments, demonstrates that such an approach leads to an effective controller design for scaled teleoperation. Although this approach is formulated for 1-DOF systems, it may extend to multi-DOF systems as well.
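The positive-realness check behind the stability criterion (as applied to Y_te in Fig. 6.11) can be approximated numerically by sampling the frequency response and testing Re Y(jω) ≥ 0. A minimal sketch, using a hypothetical stable transfer function in place of the actual Y_te:

```python
import numpy as np

# Hypothetical stand-in for Y_te (the thesis's actual Y_te is not reproduced here):
# Y(s) = (s^2 + 3s + 2) / (s^2 + 4s + 5), stable with poles at -2 +/- j.
num = [1.0, 3.0, 2.0]
den = [1.0, 4.0, 5.0]

w = np.logspace(-3, 3, 2000)        # frequency grid (rad/sec)
jw = 1j * w
Y = np.polyval(num, jw) / np.polyval(den, jw)

# Sampled positive-realness test: Re Y(jw) >= 0 at every grid point.
is_positive_real = bool(np.all(Y.real >= 0.0))
print(is_positive_real)             # True for this Y
```

This grid test only samples the criterion; a conclusive check also needs stability of Y and care as ω grows, as in the Nyquist argument used in the text.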
For the multi-DOF case, first the performance measure and transparency measure can be easily defined in the same framework as those in the 1-DOF case, and then the stability criterion can be defined as positive realness of the transmitted admittance matrix to the environment. However, this approach still needs the hand impedance model. In addition, this approach may lead to a conservative design when some understanding of the environment structure exists. These issues still need investigation.

Chapter 7
Conclusions

In this chapter, we first summarize the contributions of this thesis, and then propose some future research.

§ 7.1 Contributions

This thesis has dealt with multiple objective control problems analytically and numerically and, in particular, has studied the scaled teleoperation control problem as an application example. The main contributions of this thesis are summarized as follows:

(1). Analytical work.
(a). Solutions to the SISO version of the multiple objective H∞ control problem are characterized by using Nonsmooth Analysis. Either the "all-pass" property is established, or the optimal performance value is obtained under some assumptions.

(2). Numerical work.
(a). The solution to the multiple objective LQ control problem is obtained by using duality theory and convex optimization. It is shown that the solution depends on the initial states and is an open-loop control, optimal only for the particular initial states.
(b). A cutting-plane based solver has been developed for solving convex optimization problems. It is demonstrated that this solver is quite efficient in multiple objective control system design. Some computational issues arising in the design process are also discussed.
(c). An LMI-based solution in state-space to the multiple objective H∞ control problem is presented.
(d).
An approximate numerical solution to the multiple objective control system design for SISO systems, which yields low-order controllers, is described. Three schemes for updating the initial guess in non-convex optimization are proposed to improve the computational efficiency.

(3). Application work.
(a). A two-port ideal teleoperation model is proposed. It is shown that the model scales both positions and forces, yet is stable when terminated by any strictly passive hand and environment impedances.
(b). A transparency measure is proposed, defined as the H∞-distance to the ideal teleoperator model, and, for a fixed hand impedance, a robust stability criterion against any strictly passive environment impedances is proposed as positive realness of the transmitted admittance to the environment port.
(c). The robust control problem for teleoperation systems is formulated as a multiple objective optimization problem. By using the Youla-parametrization, for a fixed hand impedance, this problem has been shown to be convex in the design parameters.
(d). The tradeoff between performance and stability robustness is displayed for different position and force scaling ratios.
(e). A controller design example for a motion-scaling system designed for microsurgery experiments is presented. Both simulations and experiments have been carried out to show the effectiveness of the proposed multiobjective optimization-based controller design methodology.

§ 7.2 Future Work

Multiple objective control design is quite a challenging problem, and will continue to attract attention in the control community and to find more applications in practice. Some goals of future research stemming from this thesis work include:

(1). Study of the MIMO version of the multiple objective H∞ control problem. One possible direction is to extend the results for the SISO case to the MIMO case by using Nonsmooth Analysis.
Another possible direction is to explore the connection between the multiple objective LQ control problem and the multiple objective H∞ control problem, since a number of researchers have solved the single objective H∞ control problem by using LQ game theory.

(2). Development of a software package for solving the general multiobjective control problem based on more efficient numerical algorithms, such as interior-point methods. One important issue is the further study of choosing good basis functions when approximating the "Q" parameter.

(3). Further research on the optimization-based robust controller design approach for teleoperation systems in the following ways:
(a). Extend the approach to several or variable hand impedances by developing a robust stability constraint that can be easily incorporated into controller design. This would produce more reasonable and practical designs.
(b). Extend the approach by developing some robust criterion for a class of environments, e.g., mass-spring-damper systems, instead of all passive environments. This would produce better transparency performance for situations where there is some understanding of environment structures.

References

[1] I. M. Horowitz, Synthesis of Feedback Systems. Academic Press, New York, 1963.

[2] M. Athans, ed., Special Issue on Linear-Quadratic-Gaussian Control, IEEE Transactions on Automatic Control, vol. 16(6), 1971.

[3] J. C. Doyle, "Guaranteed margins for LQG regulators," IEEE Trans. on Automatic Control, vol. 23, no. 4, pp. 756-759, 1978.

[4] G. Zames, "Feedback and optimal sensitivity: Model reference transformations, multiplicative seminorms, and approximate inverse," IEEE Transactions on Automatic Control, vol. AC-26, pp. 301-320, 1981.

[5] G. Zames and B. A. Francis, "Feedback, minimax sensitivity, and optimal robustness," IEEE Transactions on Automatic Control, vol. AC-28, pp. 585-601, 1983.

[6] B. A. Francis and J. C. Doyle, "Linear control theory with an H∞ optimality criterion," SIAM J. Control and Optimization, vol. 25, pp. 815-844, 1987.

[7] B. A. Francis, A Course in H∞ Control Theory, vol. 88 of Lecture Notes in Control and Information Sciences. Springer-Verlag, Berlin, 1988.

[8] J. C. Doyle, K. Glover, P. P. Khargonekar, and B. A. Francis, "State-space solutions to standard H2 and H∞ control problems," IEEE Transactions on Automatic Control, vol. 34, no. 8, pp. 831-847, 1989.

[9] D. Mustafa and K. Glover, Minimum Entropy H∞ Control, vol. 146 of Lecture Notes in Control and Information Sciences. Springer-Verlag, 1990.

[10] J. C. Doyle, B. A. Francis, and A. Tannenbaum, Feedback Control Theory. Macmillan, New York, 1992.

[11] D. Tabak, A. A. Schy, D. P. Giesy, and K. G. Johnson, "Application of multiple objective optimization in aircraft control systems design," Automatica, vol. 15, no. 5, pp. 595-600, 1979.

[12] G. Kreisselmeier and R. Steinhauser, "Application of vector performance optimization to a robust control loop design for a fighter aircraft," International Journal of Control, vol. 37, no. 2, pp. 251-284, 1983.

[13] R. E. Skelton and M. DeLorenzo, "Space structure control design by variance assignment," Journal of Guidance, vol. 8, no. 4, pp. 454-462, 1985.

[14] P. M. Makila, "On multiple criteria stationary linear quadratic control," IEEE Transactions on Automatic Control, vol. 34, no. 12, pp. 1311-1313, 1989.

[15] D. Kyr and M. Buchner, "A parametric LQ approach to multiobjective control system design," in Proceedings of the 27th Conference on Decision and Control, (Austin, Texas), pp. 1278-1284, 1988.

[16] H. T. Toivonen and P. M. Makila, "Computer-aided design procedure for multiobjective LQG control problems," International Journal of Control, vol. 49, no. 2, pp. 655-666, 1989.

[17] P. P. Khargonekar and M. A.
Rotea, "Multiple objective optimal control of linear systems: the quadratic norm case," IEEE Transactions on Automatic Control, vol. AC-36, pp. 14-24, 1991.

[18] Z. Hu, S. E. Salcudean, and P. D. Loewen, "Multiple objective control problems via nonsmooth analysis," in Preprints of 13th IFAC World Congress, vol. C, pp. 415-420, 1996.

[19] J. Medanic and M. Andjelic, "On a class of differential games without saddle-point solutions," Journal of Optimization Theory and Applications, vol. 8, no. 6, pp. 413-430, 1971.

[20] D. Li, "On the minimax solution of multiple linear-quadratic problems," IEEE Transactions on Automatic Control, vol. 35, no. 10, pp. 1153-1156, 1990.

[21] N. T. Koussoulas and C. T. Leondes, "The multiple linear quadratic Gaussian problem," International Journal of Control, vol. 43, pp. 337-349, 1986.

[22] H. T. Toivonen, "A multiobjective linear quadratic Gaussian control problem," IEEE Transactions on Automatic Control, vol. 29, no. 3, pp. 279-280, 1984.

[23] S. P. Boyd and C. H. Barratt, Linear Controller Design: Limits of Performance. Prentice Hall, Englewood Cliffs, NJ, 1991.

[24] S. P. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia, 1994.

[25] J. C. Doyle, K. Glover, P. P. Khargonekar, and B. A. Francis, "State-space solutions to standard H2 and H∞ control problems," IEEE Transactions on Automatic Control, vol. AC-34, pp. 831-847, 1989.

[26] E. Polak and S. E. Salcudean, "On the design of linear multivariable feedback systems via constrained nondifferentiable optimization in H∞ spaces," IEEE Transactions on Automatic Control, vol. AC-34, pp. 268-276, 1989.

[27] J. W. Helton, "A Bang-Bang theorem for optimization over spaces of analytic functions," Journal of Approximation Theory, vol. 47, pp. 101-121, 1986.

[28] J. G. Owen and G. Zames, "Robust H∞ disturbance minimization by duality," in Proc.
IEEE Conference on Decision and Control, (Brighton, England), 1991.

[29] A. M. Holohan and M. G. Safonov, "Duality relations for the optimal two-disk H∞ problem," in Proc. 1992 American Control Conference, pp. 1844-1849, 1992.

[30] A. M. Holohan and M. G. Safonov, Neoclassical control theory: a functional analysis approach to optimal frequency domain controller synthesis, vol. 50 of Control and Dynamic Systems, pp. 297-329. Academic Press, 1992.

[31] G. Zames and J. G. Owen, "Duality theory for MIMO robust disturbance rejection," IEEE Transactions on Automatic Control, vol. AC-38, pp. 743-752, 1993.

[32] D. S. Bernstein and W. H. Haddad, "LQG control with an H∞ performance bound: a Riccati equation approach," IEEE Transactions on Automatic Control, vol. AC-34, pp. 293-305, 1989.

[33] E. Polak, P. Siegel, T. Wuu, W. T. Nye, and D. Q. Mayne, "DELIGHT.MIMO: an interactive, optimization-based multivariable control system design package," IEEE Control System Magazine, no. 4, 1982.

[34] C. L. Gustafson and C. A. Desoer, "Controller design for linear multivariable feedback systems with stable plants, using optimization with inequality constraints," International Journal of Control, vol. 37, no. 5, pp. 881-907, 1983.

[35] E. Polak, D. Q. Mayne, and D. M. Stimler, "Control system design via semi-infinite optimization: A review," Proc. IEEE, vol. 72, pp. 1777-1794, 1984.

[36] S. P. Boyd, V. Balakrishnan, C. G. Barratt, N. M. Khraishi, X. Li, D. G. Meyer, and S. A. Norman, "A new CAD method and associated architectures for linear controllers," IEEE Transactions on Automatic Control, vol. AC-33, pp. 268-282, 1988.

[37] T. Ting and K. Poolla, "Upper bounds and approximate solutions for multidisk problems," IEEE Transactions on Automatic Control, vol. 33, no. 8, pp. 783-786, 1988.

[38] P. Dorato and Y. Li, "U-parameter design of robust single-input-single-output systems," IEEE Transactions on Automatic Control, vol. 36, 1991.
[39] P. Dorato, L. Fortuna, and G. Muscato, Robust Control for Unstructured Perturbations — An Introduction, vol. 168 of Lecture Notes in Control and Information Sciences. Springer-Verlag, 1992.

[40] Y. S. Hung and B. Pokrud, "An H∞ approach to feedback design with two objective functions," IEEE Transactions on Automatic Control, vol. AC-37, pp. 820-824, 1992.

[41] E. Polak, D. Q. Mayne, and J. E. Higgins, "Superlinearly convergent algorithm for min-max problems," Journal of Optimization Theory and Applications, vol. 69, pp. 407-439, 1991.

[42] D. Hwang and B. Hannaford, "Modeling and stability analysis of a scaled telemanipulation system," in IEEE Intl. Workshop on Robot and Human Communication, pp. 32-37, 1994.

[43] Y. Yokokohji, N. Hosotani, and T. Yoshikawa, "Analysis of maneuverability and stability of micro-teleoperation systems," in Proceedings of the IEEE International Conference on Robotics and Automation, (San Diego, California), pp. 237-243, May 8-13 1994.

[44] J. Yan and S. E. Salcudean, "Teleoperation controller design using H∞ optimization with application to motion-scaling," IEEE Transactions on Control Systems Technology, vol. 4, no. 3, pp. 244-259, 1996.

[45] D. A. Lawrence, "Stability and transparency in bilateral teleoperation," IEEE Transactions on Robotics and Automation, vol. 9, no. 5, pp. 624-637, 1993.

[46] R. J. Anderson and M. W. Spong, "Bilateral control of operators with time delay," IEEE Transactions on Automatic Control, vol. 34, pp. 494-501, May 1989.

[47] G. Niemeyer and J. E. Slotine, "Stable adaptive teleoperation," IEEE Journal of Ocean Engineering, vol. 16, no. 1, pp. 152-162, 1991.

[48] Y. Yokokohji and T. Yoshikawa, "Bilateral control of master-slave manipulators for ideal kinesthetic coupling — formulation and experiment," IEEE Transactions on Robotics and Automation, vol. 10, pp. 605-620, Oct. 1994.

[49] C. Andriot and R.
Fournier, "Bilateral control of teleoperators with flexible joints by the H∞ approach," in SPIE Conf. 1833: Telemanipulator Technology, pp. 80-91, 1992.

[50] G. Leung, B. A. Francis, and J. Apkarian, "Bilateral controller for teleoperations with time delay via μ-synthesis," IEEE Trans. on Robotics and Automation, vol. 11, no. 1, pp. 105-116, 1995.

[51] H. Kazerooni, T. I. Tsay, and C. L. Moore, "Telefunctioning: an approach to telerobotic manipulations," in Proc. American Control Conference, pp. 2778-2783, May 1990.

[52] C. A. Lawan and B. Hannaford, "Performance of passive communications and control in teleoperation with time delay," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 776-783, 1993.

[53] B. Hannaford, "A design framework for teleoperators with kinesthetic feedback," IEEE Transactions on Robotics and Automation, vol. 5, pp. 426-434, 1989.

[54] A. A. Goldenberg, D. Bastas, and Y. Strassberg, "On the bilateral control of master-slave teleoperators," Robotersysteme, vol. 7, pp. 91-99, 1991.

[55] G. J. Raju, G. C. Verghese, and T. B. Sheridan, "Design issues in 2-port network models of bilateral remote manipulation," in Proceedings of the 1989 International Conference on Robotics and Automation, 1989.

[56] S. E. Salcudean, N. M. Wong, and R. L. Hollis, "A force-reflecting teleoperation system with magnetically levitated master and wrist," in Proceedings of the IEEE International Conference on Robotics and Automation, 1992.

[57] H. Kazerooni, T. Tsay, and K. Hollerbach, "A controller design framework for telerobotic systems," IEEE Trans. on Control Systems Technology, vol. 1, no. 1, pp. 50-62, March 1993.

[58] J. T. Wen, "Robustness analysis based on passivity," in Proceedings of American Control Conference, vol. 2, pp. 1207-1212, 1988.

[59] M. H. Vidyasagar, Control Systems Synthesis — A Factorization Approach. MIT Press, Cambridge, MA, 1985.

[60] J. M.
Maciejowski, Multivariable Feedback Design. Addison-Wesley, 1989.

[61] D. C. Youla, H. Jabr, and J. J. B. Jr., "Modern Wiener-Hopf design of optimal controllers — Part I," IEEE Transactions on Automatic Control, vol. AC-21, pp. 3-13, 1976.

[62] C. A. Desoer, R. W. Liu, J. Murray, and R. Saeks, "Feedback system design: the fractional representation approach to analysis and synthesis," IEEE Transactions on Automatic Control, vol. AC-25, pp. 399-412, 1980.

[63] C. N. Nett, C. A. Jacobson, and N. J. Balas, "A connection between state-space and doubly coprime fractional representations," IEEE Trans. Automatic Control, vol. AC-29, pp. 831-832, 1984.

[64] J. C. Doyle, Lecture Notes: ONR/Honeywell Workshop on Advances on Multivariable Control. Minneapolis, MN, 1984.

[65] S. E. Salcudean, Algorithms for Optimal Design of Feedback Compensators. PhD thesis, University of California, Berkeley, 1986.

[66] A. E. Bryson and Y. Ho, Applied Optimal Control: Optimization, Estimation, and Control. Blaisdell Pub. Co., 1969.

[67] J. G. Owen, Performance Optimization of Highly Uncertain Systems in H∞. PhD thesis, McGill University, Montreal, 1993.

[68] J. C. Doyle, K. Zhou, and B. Bodenheimer, "Optimal control with mixed H2/H∞ performance objectives," in Proc. 1989 American Control Conference, pp. 2065-2070, 1989.

[69] D. Mustafa, "Relations between maximum entropy/H∞ control and combined H∞/LQG control," Systems & Control Letters, pp. 193-203, 1989.

[70] P. P. Khargonekar and M. A. Rotea, "Mixed H2/H∞ control: a convex optimization approach," IEEE Transactions on Automatic Control, vol. AC-36, pp. 824-837, 1991.

[71] K. Zhou, J. C. Doyle, K. Glover, and B. Bodenheimer, "Mixed H2/H∞ control," in Proc. 1990 American Control Conference, pp. 2505-2507, 1990.

[72] H. Yeh, S. Banda, and B. Chang, "Necessary and sufficient conditions for mixed H2/H∞ control," in Proc. 27th IEEE Conf. Decision and Control, pp. 1013-1017, 1990.

[73] W.
Rudin, Real and Complex Analysis. McGraw-Hill Book Company, 1966.

[74] J. E. Kelley, "The cutting-plane method for solving convex programs," J. Soc. Indust. Appl. Math., pp. 703-712, 1960.

[75] J. Elzinga and T. Moore, "A central cutting plane algorithm for the convex programming problem," Math. Programm. Studies, no. 8, pp. 134-145, 1975.

[76] N. Z. Shor, "Cut-off method with space extension in convex programming problems," Cybernetics, vol. 13, no. 1, pp. 94-96, 1977.

[77] N. Z. Shor, Minimization Methods for Nondifferentiable Functions. Springer-Verlag, Berlin, 1985.

[78] Y. Nesterov and A. Nemirovski, Interior-point Polynomial Algorithm in Convex Programming. SIAM, 1994.

[79] D. Li, "On general multiple linear quadratic control problems," IEEE Transactions on Automatic Control, vol. 38, no. 11, pp. 1722-1727, 1993.

[80] W. M. Wonham, Linear Multivariable Control: A Geometric Approach. New York, Springer-Verlag, 1979.

[81] B. D. O. Anderson and J. B. Moore, Optimal Control: Linear Quadratic Methods. Prentice Hall, 1990.

[82] J. C. Willems, "Least squares stationary optimal control and the algebraic Riccati equation," IEEE Trans. on Automatic Control, vol. 16, no. 6, pp. 621-634, 1971.

[83] P. D. Loewen, "Notes on multiple objective linear quadratic problem," tech. rep., Department of Mathematics, University of British Columbia, 1995.

[84] A. Bensoussan, "Saddle points of convex concave functionals with applications to linear quadratic differential games," in Differential Games and Related Topics (H. W. Kuhn and G. P. Szego, eds.), North Holland Publishing Company, 1971.

[85] K. Zhou, J. C. Doyle, and K. Glover, Robust and Optimal Control. Prentice Hall, 1995.

[86] S.-P. Wu and S. Boyd, sdpsol: A Parser/Solver For Semidefinite Programs With Matrix Structure. User's Guide, Version Alpha. Stanford University, November 1995.

[87] I. R.
Petersen, "Disturbance attenuation and H∞ optimization: a design method based on the algebraic Riccati equation," IEEE Transactions on Automatic Control, vol. 32, pp. 427-429, 1987.

[88] G. P. Papavassilopoulos and M. G. Safonov, "Robust control design via game theoretic methods," in Proceedings of the 28th Conference on Decision and Control, (Tampa, Florida), 1989.

[89] D. J. N. Limebeer, B. D. O. Anderson, P. P. Khargonekar, and M. Green, "A game theoretic approach to H∞ control for time-varying systems," SIAM J. Control Optim., vol. 32, 1992.

[90] F. H. Clarke, Optimization and Nonsmooth Analysis. Universite de Montreal, Montreal, 1989.

[91] R. G. Douglas, Banach Algebra Techniques in Operator Theory. Academic Press, 1972.

[92] J. B. Garnett, Bounded Analytic Functions. Academic Press, 1981.

[93] B. W. Char, Maple Reference Manual, 1988.

[94] B. D. O. Anderson and J. B. Moore, "Algebraic structure of generalized positive real matrices," SIAM J. Control, no. 6, pp. 615-624, 1968.

[95] B. D. O. Anderson and S. Vongpanitlerd, Network Analysis and Synthesis — A Modern Systems Theory Approach. Prentice Hall, 1973.

[96] A. Grace, Optimization Toolbox for Use with MATLAB. The MathWorks, Inc., 1990.

[97] E. Polak, "On the mathematical foundations of nondifferentiable optimization in engineering design," SIAM Review, vol. 29, pp. 21-91, 1987.

[98] Z. Hu, S. Salcudean, and P. Loewen, "Numerical solution of multiple objective control system design for SISO systems," in Proceedings of American Control Conference, vol. 2, pp. 1458-1462, 1995.

[99] Z. Hu, S. Salcudean, and P. Loewen, "Robust controller design for teleoperation systems," in Proceedings of IEEE International Conference on Systems, Man, and Cybernetics, vol. 3, (Vancouver, Canada), pp. 2127-2132, October 1995.

[100] Z. Hu, S. E. Salcudean, and P. D. Loewen, "Optimization-based teleoperation controller design," in Preprints of 13th IFAC World Congress, vol. D, pp.
405-410, 1996.

[101] C. A. Desoer and M. Vidyasagar, Feedback Systems: Input-Output Properties. New York: Academic Press, 1975.

[102] J. E. Colgate and N. Hogan, "Robust control of dynamically interacting systems," International Journal of Control, vol. 48, no. 1, 1988.

[103] J. E. Colgate, "Robust impedance shaping telemanipulation," IEEE Transactions on Robotics and Automation, vol. 9, pp. 374-384, August 1993.

[104] J. C. Doyle, "Analysis of feedback systems with structured uncertainties," Proc. IEE, Part D, vol. 129, no. 6, pp. 242-250, 1981.

[105] R. W. Newcomb, Linear Multiport Synthesis. McGraw-Hill, New York, 1966.

[106] J. Dudragne, "A generalized bilateral control applied to master-slave manipulators," in 20th ISIR, pp. 435-442, 1989.

[107] K. Kosuge, T. Itoh, T. Fukuda, and M. Otsuka, "Scaled telemanipulation system using semi-autonomous task-oriented virtual tool," in Proceedings of the IEEE IROS, pp. 124-129, 1995.

[108] J. Yan, "Design and control of a bilateral motion system using magnetic levitation," Master's thesis, University of British Columbia, March 1994.

[109] S. Salcudean, J. Yan, Z. Hu, and P. Loewen, "Performance trade-offs in optimization-based teleoperation controller design with applications to microsurgery experiments," in Proceedings of International Mechanical Engineering Congress and Exposition, vol. DSC-57-2, pp. 631-640, 1995.

[110] L. Ljung, System Identification Toolbox with MATLAB. The MathWorks, Inc., 1988.

[111] J. Hicklin, A. Grace, and D. Packer, SIMULINK: A Program for Simulating Dynamic Systems. The MathWorks, Inc., 1992.

Appendix A
Cutting-plane Algorithms

In this appendix, we first describe some of the basic tools of convex analysis [85], and then outline Kelley's cutting-plane (KCP) algorithm [51,10] for solving convex optimization problems, and its refinements by Elzinga and Moore (EMCP) [30].

§ A.1 Elements of Convex Analysis

We review some basic tools of convex analysis: directional derivatives and subgradients.

Definitions [85]: Let f : R^n → R be a convex, but not necessarily differentiable, function.

(i). The directional derivative of f(·) at x in the direction h, denoted by df(x, h), is defined by

    df(x, h) = lim_{t↓0} [f(x + th) − f(x)] / t.        (A.1)

(ii). The subdifferential of f(·) at x is the set

    ∂f(x) = {ξ ∈ R^n : f(z) ≥ f(x) + ξ^T (z − x), for all z ∈ R^n}.        (A.2)

Every vector in ∂f(x) is called a subgradient of f at x.

§ A.2 KCP Algorithm

We consider the following constrained optimization problem:

    (P^{a.1}):  φ* = min_{z ∈ R^n} { φ(z) : ψ_1(z) ≤ 0, …, ψ_m(z) ≤ 0 },        (A.3)
§ A.1 Elements of Convex Analysis

In this section we review some basic tools of convex analysis: directional derivatives and subgradients.

Definitions [85]: Let f : R^n → R be a convex, but not necessarily differentiable, function.

(i). The directional derivative of f(·) at x in the direction h, denoted by df(x, h), is defined by

    df(x, h) = lim_{t↓0} [f(x + th) − f(x)] / t.   (A.1)

(ii). The subdifferential of f(·) at x is the set

    ∂f(x) = {ξ ∈ R^n : f(z) ≥ f(x) + ξ^T (z − x), for all z ∈ R^n}.   (A.2)

Every vector in ∂f(x) is called a subgradient of f at x.

§ A.2 KCP Algorithm

We consider the following constrained optimization problem:

    (P^{a.1}) : φ* = min_{z∈R^n} {φ(z) : ψ_1(z) ≤ 0, ..., ψ_m(z) ≤ 0},   (A.3)

where both the objective function φ and the constraint functions ψ_i (i = 1, ..., m) are convex functions from R^n to R. Define the constraint function

    ψ(z) = max_{1≤i≤m} ψ_i(z),   (A.4)

so (P^{a.1}) in (A.3) is equivalent to

    (P^{a.2}) : φ* = min_{z∈R^n} {φ(z) : ψ(z) ≤ 0}.   (A.5)

We will say that z is feasible if ψ(z) ≤ 0. Suppose we have computed function values and at least one subgradient at a sequence of points x_1, ..., x_k for both the objective and the constraint functions:

    φ(x_1), ..., φ(x_k),  g_1 ∈ ∂φ(x_1), ..., g_k ∈ ∂φ(x_k),
    ψ(x_1), ..., ψ(x_k),  h_1 ∈ ∂ψ(x_1), ..., h_k ∈ ∂ψ(x_k),   (A.6)

where these points x_i need not be feasible. Then we can form piecewise linear lower bound functions for both the objective and the constraint:

    φ̂_k(z) = max_{1≤i≤k} (φ(x_i) + g_i^T (z − x_i)),
    ψ̂_k(z) = max_{1≤i≤k} (ψ(x_i) + h_i^T (z − x_i)).   (A.7)

The lower bound function ψ̂_k yields a polyhedral outer approximation to the feasible set:

    {z : ψ(z) ≤ 0} ⊂ {z : ψ̂_k(z) ≤ 0},   (A.8)

since ψ̂_k(z) ≤ ψ(z) for all z ∈ R^n. Thus we have the following lower bound on φ*:

    L_k = min {φ̂_k(z) : ψ̂_k(z) ≤ 0}.   (A.9)

But L_k in (A.9) can be obtained by solving a linear program:

    L_k = min_{w∈R^{n+1}} {c^T w : Aw ≤ b},   (A.10)

where
    w = [z; ζ],   c = [0; 1],

    A = [ g_1^T  −1
          ⋮      ⋮
          g_k^T  −1
          h_1^T   0
          ⋮      ⋮
          h_k^T   0 ],

    b = [ g_1^T x_1 − φ(x_1)
          ⋮
          g_k^T x_k − φ(x_k)
          h_1^T x_1 − ψ(x_1)
          ⋮
          h_k^T x_k − ψ(x_k) ].   (A.11)

Thus, the KCP algorithm for solving the ε_con-relaxed problem

    (P^{a.3}) : min {φ(z) : ψ(z) ≤ ε_con}   (A.12)

is summarized as follows:

    ε_obj ← objective tolerance;
    ε_con ← constraint tolerance;
    x_1 ← starting parameter vector;
    k ← 0;
    repeat {
        k ← k + 1;
        compute φ(x_k) and any g_k ∈ ∂φ(x_k);
        compute ψ(x_k) and any h_k ∈ ∂ψ(x_k);
        solve the LP in (A.10) to find its solution x̄_k and L_k;
        x_{k+1} ← x̄_k;
    } until (ψ(x_k) ≤ ε_con and φ(x_k) − L_k ≤ ε_obj).

Theorem A.1 [51,10]: The sequence {x_k} constructed by the KCP algorithm converges to a point that is feasible and has objective value within ε_obj of optimal for (P^{a.3}).

§ A.3 EMCP Algorithm

We consider the problem of maximizing a linear function subject to m concave constraint functions from R^n to R:

    (P^{a.4}) : f* = max {c^T x : g_1(x) ≥ 0, ..., g_m(x) ≥ 0}.   (A.14)

First, note that the assumption of a linear objective function involves no loss of generality. For if the objective function f(x) is concave, then f(x) − y is concave, and

    (P^{a.5}) : max {f(x) : g_1(x) ≥ 0, ..., g_m(x) ≥ 0}   (A.15)

is equivalent to

    (P^{a.6}) : max_{x∈R^n, y∈R} {y : f(x) − y ≥ 0, g_1(x) ≥ 0, ..., g_m(x) ≥ 0}.   (A.16)

Second, note that (P^{a.4}), (P^{a.5}) and (P^{a.6}) are equivalent optimization problems. Here we only consider problem (P^{a.4}) in (A.14).

Let F = {x : g_i(x) ≥ 0, i = 1, ..., m}, let f_0 be a lower bound on the optimal value of (P^{a.4}), and let ε be a pre-specified objective tolerance. Suppose we have computed at least one subgradient at a sequence of points x_1, ..., x_k for the constraint functions:

    g_1(x_1), ..., g_1(x_k),  h_1^1 ∈ ∂g_1(x_1), ..., h_1^k ∈ ∂g_1(x_k),
        ⋮
    g_m(x_1), ..., g_m(x_k),  h_m^1 ∈ ∂g_m(x_1), ..., h_m^k ∈ ∂g_m(x_k).   (A.17)

Then the EMCP algorithm is summarized as follows:

Step 0.
Let LP_0 be the linear program

    max {δ : c^T x − δ ≥ f_0}.   (A.18)

Let k = 1.
In particular, linear inequalities, (convex) quadratic inequalities, matrix norm inequalities, and constraints that arise in control theory, such as Lyapunov and convex quadratic matrix inequalities, can all be cast in the form of an LMI. For example, the Lyapunov inequality AP where A £ R  n x n  (B.2)  + PA<0,  T  is given and P = P  is the variable, can be readily written out in  T  the form of (B.l), as follows. Let P\,...,P  m  be a basis for symmetric n x n matrices  (m = n(n + l)/2). Then take Fo = 0, and Fi = -A P{ T  - Pi A. We often encounter  problems in which the variables are matrices as in (B.2). For these problems, we will not write out the L M I explicidy in the form F(x), but instead make clear which matrices are the variables. 140  Many problems in system and control theory can be formulated as optimization problems involving LMIs (see, e.g., [11]): mm  T  c x (B.3)  s.t.  F(x) > 0.  The optimization problem in the form of (B.3) is called semidefinite programming. Semidefinite programs are convex optimization problems; conversely, many convex optimization problems can be expressed as semidefinite programs. The survey paper [98] gives an overview of semidefinite programming and applications. The formulation and applications of more general problems than (B.3) can be found in [99].  § B.2 Software for Solving LMI Problems Pascal Gahinet, Arkadi Nemirovsky, and Alan Laub, at Mathworks, developed the L M I Control Toolbox, which allows one to efficiently solve problems involving LMIs.  This toolbox solves the problems using Nesterov and Nemirovsky's Interior-  Point methods [71] and taking advantages of some of the problem structure (e.g., block structure, diagonal structure of some of the matrix variables). This toolbox was not available while the related L M I work of this thesis was being done. Vandenberghe and Boyd at Stanford developed a software package, called SP, for solving semidefinite programming. 
They used a primal-dual interior-point algorithm [97]. SP was written in C, and is available via anonymous ftp at isl.stanford.edu in pub/boyd/semidef_prog. E l Ghaoui at ENSTA in Paris (elghaoui@ensta.fr) also developed a software package, called LMItool, that acts as an interface for SP, and allows users to specify semidefinite programming problems interactively from within Matlab.  It  is especially well suited for problems with matrix variables. It is also available via anonymous ftp at ftp.ensta.fr in pub/elghaoui/lmitool. 141  Based on SP, Boyd and Wu at Stanford developed another software package, called sdpsol, as a parser/solver to simplify the specification and solution of semidefinite programs, sdpsol parses semidefinite programming problems expressed in the sdpsol language, solves them, and reports the results in a convenient way. It is available via anonymous ftp at isl.stanford.edu in pub/boyd/semidef_prog/sdpsol.  142  
