UBC Theses and Dissertations
Adaptive optimization of discrete stochastic systems
Gracovetsky, Serge Alain
Abstract
The general theory of stochastic optimal control is based on determining a control which minimizes an expected cost. However, the use of minimum expected cost as a design objective is arbitrary, and a direct consequence of this choice is the need for extensive statistical information. If the required statistical data are not available or not accurate, the controller is suboptimal.

The thesis begins by investigating the conventional method of solution and proposes an interpretation of that solution which leads to a different approach, one that does not use the expected cost as the design objective. The new criterion is based on a trade-off between deterministic optimization and a cost penalty for estimation error. To provide a basis of comparison with the conventional method, the proposed adaptive stochastic controller is compared with the standard stochastic optimal controller for a linear discrete system with linear measurements, additive noise and quadratic cost. The basic feature of the proposed method is an adaptive filter gain which enters the proposed cost index algebraically and couples the controller with the estimator. Unlike the conventional Kalman-Bucy filter gain, the proposed gain is a scalar independent of the second and higher order moments of the noise distributions. Simulations on second- and fifth-order linear systems with Gaussian and non-Gaussian noise distributions show a moderate cost increase of 1% to 12% relative to the standard controller.

The method is then extended to nonlinear systems. A general solution of the nonlinear problem is formulated and a complete investigation of the properties of the solution is given for different cases. Stability of the expected tracking error of the filter is guaranteed by introducing bounds on the filter gain. Problems arising from the use of suboptimal structures for the control are examined and discussed. It is shown that for a class of systems the proposed method takes a particularly attractive form. As in the linear case, the required statistical information is limited to the expected values of the noises and the expected value of the initial state of the system. Simulations on second-order systems indicate a cost decrease of 1% to 20% compared with the method using an extended Kalman-Bucy filter.
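To make the comparison described in the abstract concrete, the sketch below simulates a discrete linear system with quadratic cost under a certainty-equivalence controller and two estimators: a standard Kalman filter, whose gain is computed from the noise covariances, and an estimator whose state update uses a single fixed scalar gain and therefore needs no second-order noise statistics. This is a minimal illustration only; the plant, cost weights, gain value, and filter structure are assumptions for the example and are not taken from the thesis.

```python
# Minimal sketch: LQ controller with (a) a Kalman filter and (b) a fixed
# scalar-gain estimator.  All numerical values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical second-order discrete plant with a scalar linear measurement.
A = np.array([[1.0, 0.1],
              [0.0, 0.95]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)                 # process-noise covariance (Kalman filter only)
R = np.array([[0.04]])               # measurement-noise covariance (Kalman filter only)
Qc, Rc = np.eye(2), np.array([[0.1]])  # quadratic cost weights

def dlqr_gain(A, B, Qc, Rc, iters=500):
    """Steady-state LQ feedback gain via Riccati value iteration."""
    P = Qc.copy()
    for _ in range(iters):
        K = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)
        P = Qc + A.T @ P @ (A - B @ K)
    return K

L = dlqr_gain(A, B, Qc, Rc)          # certainty-equivalence control u = -L x_hat

def kalman(x_hat, P, u, y):
    """One predict/update step of the standard Kalman filter."""
    x_pred = A @ x_hat + B @ u
    P_pred = A @ P @ A.T + Q
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(2) - K @ C) @ P_pred
    return x_new, P_new

def scalar_gain(x_hat, P, u, y, g=0.5):
    """Estimator with a fixed scalar gain g; no noise covariances are used."""
    x_pred = A @ x_hat + B @ u
    x_new = x_pred + g * C.T @ (y - C @ x_pred)
    return x_new, P

def run(estimator, steps=200, trials=200):
    """Average accumulated quadratic cost over Monte Carlo runs."""
    total = 0.0
    for _ in range(trials):
        x = rng.normal(0.0, 1.0, size=(2, 1))
        x_hat = np.zeros((2, 1))
        P = np.eye(2)                # only the Kalman estimator updates P
        for _ in range(steps):
            u = -L @ x_hat
            total += (x.T @ Qc @ x + u.T @ Rc @ u).item()
            w = rng.multivariate_normal(np.zeros(2), Q).reshape(2, 1)
            v = rng.normal(0.0, np.sqrt(R[0, 0]), size=(1, 1))
            x = A @ x + B @ u + w
            y = C @ x + v
            x_hat, P = estimator(x_hat, P, u, y)
    return total / trials

print("avg cost, Kalman-filter estimator :", run(kalman))
print("avg cost, scalar-gain estimator   :", run(scalar_gain))
```

Running the script prints the Monte Carlo average cost for each estimator, which is the kind of head-to-head comparison the abstract reports for the linear case; the thesis's actual criterion additionally couples the scalar gain into the cost index, which this simplified sketch does not attempt to reproduce.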
Item Metadata
Title | Adaptive optimization of discrete stochastic systems
Creator | Gracovetsky, Serge Alain
Publisher | University of British Columbia
Date Issued | 1970
Genre |
Type |
Language | eng
Date Available | 2011-04-18
Provider | Vancouver : University of British Columbia Library
Rights | For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use.
DOI | 10.14288/1.0302211
URI |
Degree |
Program |
Affiliation |
Degree Grantor | University of British Columbia
Campus |
Scholarly Level | Graduate
Aggregated Source Repository | DSpace