UBC Theses and Dissertations
Stochastic averaging level control and its application to broke management in paper machines. Ogawa, Shiro (2003)

Stochastic Averaging Level Control and Its Application to Broke Management in Paper Machines

by

SHIRO OGAWA
B. Eng., The University of Tokyo, 1967
M. Eng., The University of Tokyo, 1969

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (Department of Electrical and Computer Engineering)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
December 12, 2003
© Shiro Ogawa, 2003

Abstract

Averaging level control refers to liquid level control of storage tanks, where the objective is to keep the outlet flow u(t) as smooth as possible against the fluctuating inlet flow, while at the same time keeping the tank level y(t) within high and low limits. The thesis treats the stochastic averaging level control problem, where the input disturbance is a stochastic process. The problem is formulated to minimize a weighted sum of Var[u(t)] and Var[du(t)/dt] subject to a target value of Var[y(t)]. The state-space linear quadratic optimal control method is used, resulting in a linear state feedback controller. When the input disturbance is modelled as the output of a first-order low-pass filter driven by white noise, the optimal controller is a phase-lag network.

Broke storage tank level control is important in stabilizing the paper machine wet end. It is treated as a special type of averaging level control, where the input disturbance Fb(t) is a two-state continuous-time Markov process. The spectrum of Fb(t) is obtained and the linear optimal controller is designed with the same methodology as for the general averaging level control problem. Taking advantage of this very specific nature of Fb(t), a new nonlinear control scheme called the minimum overflow probability controller (MOPC) is designed, and tested against data collected from a paper machine. The MOPC performs better than the optimal linear controller and manual control.
A new theorem on the state probability distribution of a continuous-time Markov jump system is presented, which leads to new methods for evaluating the mean and the variance of the state of a linear jump system, and a new reliable numerical method to calculate the state distributions of jump systems. These results are utilized to evaluate the overflow probabilities of controllers.

Contents

Abstract
List of Tables
List of Figures
Acknowledgements

1 Introduction and Overview
  1.1 Background and Motivations
  1.2 Overview and Structure of the Thesis
    1.2.1 Format and Notation
  1.3 Contributions of the Thesis

2 Averaging Level Control
  2.1 Introduction
  2.2 Mathematical Models
    2.2.1 Process Model
    2.2.2 Disturbance Model
    2.2.3 Flow Smoothness/Roughness Model
  2.3 Historical Perspective and Literature Review
    2.3.1 P/PI Detuning
    2.3.2 Deterministic Constrained Optimization
    2.3.3 Constrained Minimum Variance Controller

3 Linear Optimal Controllers
  3.1 Introduction
  3.2 State-Space Method
  3.3 Evaluation of Variances
  3.4 Stationary Input
    3.4.1 System Equation and Performance Index
    3.4.2 State-Space Solution
    3.4.3 Noise-Free Observer
    3.4.4 Controller Parameterization
    3.4.5 Comparison to PI Controller
  3.5 Minimum Variance of Outlet Flow
    3.5.1 Comparison to P Controller
  3.6 Random Walk Input
    3.6.1 Effects of Damping Factor
  3.7 Summary
  3.8 Mathematical Details
    3.8.1 Solution of Section 3.4.2
    3.8.2 Rv(y) and Rv(u) in 3.4.4
    3.8.3 Rv(y) and Rv(u) in 3.4.5

4 Downstream Processes
  4.1 Introduction
  4.2 Downstream Process Model
  4.3 State-Space Solution
  4.4 Performance Comparison
    4.4.1 Effects of v(t) weight
    4.4.2 Var[y2] reduction versus ωp
  4.5 Summary

5 Markov Jump Systems
  5.1 Introduction
  5.2 Jump Markov Systems
  5.3 Holding Time Density
  5.4 State Distribution
    5.4.1 Entry Probabilities
    5.4.2 Examples
  5.5 State Moments of Linear Jump Systems
    5.5.1 Example
  5.6 Linear Quadratic Optimal Control
  5.7 Summary
  5.8 Appendix - Mathematical Details
    5.8.1 Covariance of x(t)
    5.8.2 Linear Quadratic Control

6 Broke Storage Tank Level Control
  6.1 Introduction
    6.1.1 Paper Machine
    6.1.2 Benefits of Wet End Stabilization
    6.1.3 Broke Handling
    6.1.4 Literature Review
  6.2 Characteristics of Broke Flow
    6.2.1 Spectrum of Broke Flow
  6.3 Flow Smoothness Requirements
  6.4 Linear Optimal Controller
    6.4.1 Jump System Theory
    6.4.2 Implementation Issues
  6.5 Nonlinear Controllers
    6.5.1 Minimum Overflow Probability Controller
  6.6 Optimal Change of u(t)
    6.6.1 Asymptotic Analysis
    6.6.2 Linear u(t)
  6.7 Summary
  6.8 Maximum Principle

7 Simulation With Industrial Data
  7.1 Introduction
  7.2 Data
    7.2.1 Process
    7.2.2 Data Set C
    7.2.3 Incoming Flow Estimate
    7.2.4 Statistical Analysis
    7.2.5 Probability Plots
    7.2.6 Chi-Square Test
    7.2.7 Summary
  7.3 Controller Design
    7.3.1 QMOPC
    7.3.2 Linear Controller
  7.4 Simulation
    7.4.1 Manual Control
    7.4.2 Linear Controller
    7.4.3 QMOPC
  7.5 Summary

8 Conclusions
  8.1 Summary
  8.2 Future Research

Bibliography

List of Tables

7.1 Chi-square test of break and normal durations
7.2 Overflow probabilities for QMOPC and MOPC
7.3 Overflow probabilities for QMOPC
7.4 The controller parameter for σ[y] = 20
7.5 Statistics of simulations

List of Figures

1.1 Averaging level control problem
2.1 Storage tank material balance
3.1 Parameters of CL vs. K
3.2 Gain of CL for K = 0.1, 1 and 10
3.3 Magnitude of S for CL and CPI
3.4 Comparison of CL and PI controller with η = 0.7
3.5 Comparison of CL and the PI controller with various η
3.6 The ratio of the variance of u(t) between CPD and CP; x-axis is Rv(y)
3.7 Performance change by η for PI controller
4.1 Downstream process and Fu
4.2 Gain plot of the downstream process transfer function
4.3 System for tank level control loop and downstream process
4.4 Gains of CD and CL
4.5 Magnitude of S for CD and CL
4.6 Variance reduction of y2(t) vs variance of v(t)
4.7 Effects of ωp on variance of y2(t)
4.8 Effects of ωp on variance of y2(t) with normalized downstream process cut-off frequency
5.1 Symmetrical binary signal d(t) is filtered by a low-pass filter
5.2 Probability densities of filtered binary signal for β = 0.5, 1, and 2
5.3 Probability density of filtered binary signal for β = 10
5.4 φ1, φ2 and φ3 calculated with N = 300
5.5 Effects of state discretization
5.6 Comparison to simulation results
6.1 Simplified diagram of a paper machine wet end
6.2 Alternating Poisson process
6.3 Implementation of CL
6.4 The minimum overflow probability strategy
6.5 The phase plane of (u(t), y(t)) under MOPC
6.6 The logic of QMOPC
6.7 Graphs of the optimum change strategy
6.8 Jmin/J vs aT
7.1 Simplified diagram of the broke management system
7.2 Graphs of tank level y(t), outlet flow u(t), and break detector
7.3 Graphs of estimated incoming flow and level
7.4 Graphs of model level ym(t)
7.5 Histogram of the estimated incoming flow Fb(t)
7.6 Probability plots of break and normal duration
7.7 Sample distributions and theoretical distributions of break and normal durations
7.8 Probability distribution of y(t) under the QMOPC
7.9 Graphs of σ(u) and σ(du/dt) vs damping factor
7.10 Probability distribution of y(t) and u(t) under CL
7.11 Comparison of the QMOPC, CL, and manual control
7.12 Histograms of y(t) (left) and u(t) (right) by manual control
7.13 Histograms of y(t) (left) and u(t) (right) by CL
7.14 Histograms of y(t) (left) and u(t) (right) by QMOPC
7.15 Comparison of u(t) by manual, CL and QMOPC
7.16 Comparison of y(t) by manual, CL and QMOPC

Acknowledgements

During the research of this thesis, I have been fortunate to receive help from many people, and I would like to express my gratitude to them. First, to my supervisors, Prof. Guy Dumont, Prof. Michael Davies, and Dr. Bruce Allison.
They warmly encouraged me to take up the challenge, and provided me with steady guidance throughout my study. Their patience with, and critiques of, my ever-changing ideas are very much appreciated. Dr. Allison introduced me to the fruitful area of averaging level control, and kick-started the research. It was one of the highlights of my student life to exchange ideas with him on many subjects. Also thanks to Dr. Maryam Khanbaghi for introducing Markov jump systems and sharing her knowledge on broke tank level control with me.

I thank the Network of Centres of Excellence, Mechanical Wood-Pulps, the Science Council of British Columbia, and the Pulp and Paper Institute of Canada for their financial assistance. I would like to thank Messrs. Rick Harper and Leo Pelletier of NorskeCanada for supplying the industrial data. Without their help, this thesis could not have been completed.

To my fellow graduate students, Dr. Junqiang Fan, Stevo Mijanovic, Manny Sidhu, and Dr. Michael Chong Ping, I would like to express my sincere gratitude for their friendship and kindness in sharing knowledge with me. I appreciate that they treated me as their peer even though I am much older than them. I felt that I was young again. I thank Dr. Leonardo Kammer of PAPRICAN for his critique and advice on my work. The library services at UBC and the Pulp and Paper Centre were essential to my research, and I thank the people there. Special thanks go to Ms. Reta Penco for helping and teaching me in literature search.

I am fortunate to have many friends who have supported me with their warm friendship. To my old MacBlo gang, Glenn Swanlund, Eric Lofkrantz, and Ted Carlson, I am thankful for their continuing friendship. It is comforting to know that my old friends in Japan are always ready to help me. I would like to thank Dr. Toshio Kojima and Toshinori Watanabe for their encouragement. I would like to thank two of my friends, Bruce and Dr. Alf Isaksson, who have been an inspiration for me to pursue the study of control engineering. Bruce rekindled my enthusiasm for control theory when he implemented digester level control at Port Alberni more than 20 years ago. Alf's enthusiasm and dedication to control engineering is contagious. I would like to thank my supervisor at the University of Tokyo, where I studied 35 years ago as a master's student, Prof. Mitsuru Terao, who instilled the spirit of control engineering which I have kept ever since.

Prof. Karl Astrom, after his retirement, told us that he had the best times in his life during his graduate study and retirement. No wonder I enjoyed myself so much, because I was a graduate student and a retiree at the same time! However, one drawback of being an old graduate student is that I cannot share the joy of completing the research with my parents, whose memory sustained me during tough times.

Chapter 1
Introduction and Overview

1.1 Background and Motivations

The research presented in this thesis started with the goal of devising a practical and effective automatic control scheme for broke storage tank level, which was identified as the most influential and difficult process for stabilizing a paper machine wet end. The importance of stabilizing a paper machine wet end and the role of the broke storage tank are detailed in Chapter 6. It was soon realized that the broke storage tank level control problem is a special case of the averaging level control problem depicted in Figure 1.1.

Figure 1.1: Averaging level control problem

Here, the incoming flow to the storage tank Fd(t) changes randomly and the tank level y(t) must be kept between the high limit yH and the low limit yL. The goal of averaging level control is to manipulate the outlet flow Fu(t) to minimize its adverse effects on the downstream process(es), while at the same time keeping y(t) between yH and yL.
The broke storage tank level control problem has a very specific input disturbance, but the downstream processes are diffuse and difficult to pin down with a simple mathematical model. Before embarking on the task of designing a broke storage tank level controller, available techniques for averaging level control, stochastic and deterministic, were reviewed. Averaging level control can benefit many chemical processes that include surge tanks. Also, many areas in the paper machine wet end can be improved by proper application of stochastic averaging level control. Chapters 2, 3 and 4 treat the stochastic averaging level control problem in a unified approach with optimization methods, emphasizing the importance of the input disturbance characteristics and the downstream processes.

The fact that the input disturbances to the broke storage tank can be modelled by a continuous-time Markov process (Khanbaghi, 1998) motivated the study of Markov jump systems. Although it was found that jump linear quadratic theory as applied to the broke storage tank level control problem has no advantages over linear quadratic optimal control theory, a new theorem and numerical methods for evaluating the state variances and the state distributions are devised. The new methods are utilized in the synthesis and analysis of control schemes in Chapter 7. With the techniques of stochastic averaging level control and Markov jump system theory, linear and nonlinear controllers for the broke storage tank level control problem are designed and analyzed. The new control schemes are tested in simulations with industrial data.

1.2 Overview and Structure of the Thesis

The thesis has three main parts: (1) stochastic averaging level control, (2) Markov jump systems, and (3) broke storage tank level control. The first two parts provide technical tools necessary to attack the problem treated in the third part.
Averaging Level Control

Three consecutive chapters, 2, 3, and 4, cover the stochastic averaging level control problem. In Chapter 2, the stochastic averaging level control problem is defined. The importance of the input disturbance characteristics and downstream processes is highlighted. Existing techniques for deterministic and stochastic averaging level control are reviewed. The popular random-walk assumption on the input disturbances is scrutinized. A simple first-order low-pass disturbance model is preferred.

Chapter 3 treats the problem with linear quadratic control methods. A state-space approach with a noise-free observer is utilized. The small dimension of the problem makes it possible to obtain solutions in closed form. Thus, straightforward algorithms and good perspectives are obtained. In Chapter 4, the downstream process is included explicitly in the problem formulation. The performance of the resulting controller is compared with ones designed without downstream processes. A convenient parameterization is devised to streamline the performance comparison.

Markov Jump Systems

Chapter 5 presents new theorems on state probability distributions and their applications to variance evaluation and state distribution calculations. This chapter is independent of the previous chapters and can be read separately from the rest of the thesis. The new numerical method for calculating the state distribution is utilized in designing controllers for broke storage tank level.

Broke Storage Tank Level Control

Chapter 6 provides motivation and background for broke storage tank level control. Difficulties arise from the input disturbance characteristics, which are quite different from those encountered in usual averaging level control problems, and from defining appropriate outlet flow smoothness requirements. A new nonlinear control scheme called the minimum overflow probability controller (MOPC) is introduced.
Also, the optimal linear controller is designed for comparison. In Chapter 7, industrial data collected from a paper machine, where the broke storage tank level is controlled manually, is used to estimate the incoming disturbance Fd(t) from a six-month history of the broke storage tank level y(t) and outlet flow Fu(t). The two controllers designed in Chapter 6 are tuned for Fd(t) and tested by simulation. Comparisons are made among the two controllers and manual control.

1.2.1 Format and Notation

As the three parts of this thesis deal with subjects more or less independent from each other, overviews and literature reviews are placed at the beginning of each part, not at the beginning of the thesis. Detailed mathematical derivations are included at the end of each chapter rather than at the end of the thesis. This is a compromise between two goals: that the thesis should be comprehensive and that mathematical details should not impede the smooth logical flow.

Most of the mathematical symbols are standard in the literature. The same symbol is used with different meanings, depending on the context, if there is no danger of confusion. Economy in notation is obtained by using the same symbol with different arguments to represent a time signal in continuous time, in discrete time, and in the Laplace transform. For instance, y(t) denotes a process variable in continuous time, and y(s) the Laplace transform of the same variable. For discrete time, the standard integer symbols i, k, etc. are used, i.e. y(i) denotes y(t) at t = iΔt, Δt being a sample period. The notation ":=" is used to signify "is defined to equal". Blackboard bold letters are used for probability and expectation: P[event] means the probability of event, and E[X] means the expectation of a random variable X.

All the controllers in the thesis are meant to manipulate the outlet flow Fu(t) to control the tank level.
The formal controller output is denoted by u(t), which may be u(t) = Fu(t) or u(t) = Fu(t) - E[Fd(t)]. In either case, Var[Fu(t)] = Var[u(t)] and dFu(t)/dt = du(t)/dt. The rate of change of the outlet flow Fu(t) appears often, and is given a special symbol v(t):

    v(t) := dFu(t)/dt = du(t)/dt.

This convention is used throughout the thesis, but the definition of v(t) reappears a few times as a reminder. Sometimes, u(t) is used as the manipulated variable of an abstract problem, such as ẋ(t) = Ax(t) + Bu(t). The meaning of u(t) should be clear from the context.

1.3 Contributions of the Thesis

Averaging Level Control

• The stochastic averaging level control problem is put on a solid theoretical basis by identifying the importance of input disturbance characteristics and the outlet flow smoothness definition. Exploiting the continuous-time state-space approach, clear perspectives are obtained for synthesis and analysis of the linear optimal controllers.

• The random walk input disturbance assumption is given a detailed analysis, and the advantages of stationary disturbances are presented. The optimal linear controller for a first-order low-pass input disturbance, which is a phase-lag network, is synthesized.

• A new problem formulation to include the downstream process explicitly is presented. This helps to clarify the meaning of outlet flow smoothness. With a simplified downstream process model, practical controllers are obtained.

• A proper state-space formulation is presented for the case when the input disturbance is a random walk. The key is to make the state vector x(t) stationary, so that Var[x(t)] exists.

Markov Jump Systems

• A new theorem on the state probability distributions is given in Chapter 5, which leads to new methods for evaluating the mean and the variance of the state of a linear jump system, and a new reliable numerical method to calculate the state distributions.
Unlike approaches based on simulations, the new method suffers neither from time-discretization approximations nor from randomness due to computer-generated random numbers. These results are utilized to evaluate the overflow probabilities of controllers.

• Linear quadratic optimal control theory for jump systems, where the state has discontinuities, is extended to include load-change disturbances in a natural way. This formulation provides a more flexible means to include Fu(t) in the performance index.

Broke Storage Tank Level Control

• The actual incoming flow to a broke storage tank, Fb(t), was estimated from the tank level and outlet flow measurements. This is the first published study of its kind. It was found that break and normal durations follow the exponential distribution quite well. This is the second confirmation of the published results of Khanbaghi et al. (1997). However, Fb(t) is not truly binary, as was assumed by the previous studies based on material balance.

• A practical nonlinear controller with hard constraints on the outlet flow Fu(t) and its rate of change dFu(t)/dt is presented. The controller minimizes the overflow probability and guarantees that the level will stay above the low limit yL. The new method for calculating the state distribution of Markov jump systems makes the design process straightforward without relying on simulations. The nonlinear controller is compared to the optimal linear controller and manual control with the industrial data, and is found to control the tank level better and with a smaller maximum outlet flow.

• The optimal change strategy of Fu(t), where Fu(t) is changed from its initial value u0 to the final value u1 in a given time T, is investigated. The performance index is the integrated squared error of the downstream process variable. With a simple downstream process model, the problem is solved with the Maximum Principle's singular control.
A closed-form solution is obtained, which makes it possible to compare the linear change strategy to the theoretical optimum.

Chapter 2
Averaging Level Control

2.1 Introduction

Averaging level control refers to liquid level control of storage tanks, where the objective is to keep the outlet flow as smooth as possible against the fluctuating inlet flow, while at the same time keeping the tank level within high and low limits (Figure 1.1). The term "averaging" indicates that the average of the tank level is significant rather than the level itself. For example, Shinskey (1967) (page 147) states, "This application is often referred to as 'averaging level control,' because it is desired that the manipulated flow follow the average level in the tank". The reason that the outlet flow Fu(t) should be smooth is to minimize its adverse effect on downstream processes. This is quite different from "normal" regulatory control problems, where the goal is to keep the controlled process variable y(t) close to its (constant) target yr. Even for regulatory problems, excessive movements in the manipulated variable u(t) are usually avoided for a number of reasons, including to avoid saturation in u(t) and to make a controller robust against modelling errors. However, smoothing of u(t) is never a principal goal of such control.

An underlying idea for investigating the averaging level control problem in the thesis is constrained optimization, where the "best" controller is one that minimizes some performance index J subject to a constraint condition g(x, u) < 0. This idea provides systematic procedures to design controllers for given situations, which are characterized by J and g. This methodology also makes it possible to compare many published control schemes in a consistent and unified way.

2.2 Mathematical Models

Optimization problems are tackled either analytically or numerically. In either case, mathematical models for the problem are necessary.
Mathematical models for practical problems are always a compromise between fidelity to the target objects and mathematical tractability. To judge the fidelity and usefulness of the mathematical models, intimate knowledge of the subject area is necessary. Thus, construction of practically useful mathematical models is one of the important activities that distinguish control engineering from applied mathematics. The three main models needed for averaging level control are:

1. disturbance model
2. process model
3. flow smoothness or roughness model

In the following sections, the mathematical models for averaging level control are discussed in detail. This is important because historically the mathematical models for the input disturbance and the flow roughness have been treated rather indifferently, although it will be found that they influence controller design greatly.

2.2.1 Process Model

Figure 2.1 shows a simplified diagram of a material balance around a storage tank, where V(t) denotes the liquid volume in the tank, and y(t) the tank level. Fd(t) denotes the incoming flow, and Fu(t) the outgoing flow, both expressed in volume/time.

Figure 2.1: Storage tank material balance. V(t) is the volume of liquid in the tank, and y(t) is the level.

Assuming that the liquid is noncompressive, we obtain the following equation on V(t):

    V(t) = V(0) + ∫[0,t] [Fd(t') - Fu(t')] dt'.    (2.1)

Dividing both sides of (2.1) by the cross-sectional area of the tank, A, yields

    y(t) = y(0) + Kp ∫[0,t] [Fd(t') - Fu(t')] dt',    (2.2)

where Kp = 1/A is the process gain. This simple integrator model works well in practice, and is used throughout the thesis. When the cross-sectional area varies with height, Kp is not a constant, and (2.2) becomes nonlinear. However, linearity can be kept by treating the problem with (2.1); the level high limit is then replaced by the volume high limit.
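The integrator model (2.2) is easy to exercise numerically. The following sketch (illustrative only; the values of Kp, dt, the initial level, and the flow sequences are assumptions, not taken from the thesis) steps (2.2) forward with a simple Euler discretization:

```python
import numpy as np

def simulate_tank(Fd, Fu, Kp=0.5, y0=50.0, dt=1.0):
    """Discretized integrator tank model (2.2):
       y[k+1] = y[k] + Kp * (Fd[k] - Fu[k]) * dt, with Kp = 1/A."""
    y = np.empty(len(Fd) + 1)
    y[0] = y0
    for k in range(len(Fd)):
        y[k + 1] = y[k] + Kp * (Fd[k] - Fu[k]) * dt
    return y

# With equal inlet and outlet flows the level stays constant,
# exactly as the material balance requires.
y = simulate_tank(Fd=np.full(10, 2.0), Fu=np.full(10, 2.0))
assert np.allclose(y, 50.0)
```

Any net inflow integrates into the level at rate Kp, which is why the level never "settles" by itself: the controller, not the process, must balance the flows.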
A storage tank for noncompressive material is a rare example in process control of a simple mathematical model derived from first principles that works well. In a physical process, many dynamical subsystems are involved, including manipulating valves and measurement sensors with noise and delay. However, they can be ignored, at least for liquid storage tanks in a paper machine wet end, because the execution of level control can be slower than that of other loops. For instance, a broke storage tank level can be controlled using control intervals of 30 seconds to 5 minutes, while most pressure or flow loops are executed at 1-second intervals. Therefore, these dynamics can be ignored by a level controller that executes every 30 seconds to a few minutes. This fact is generalized in Marlin (1995), page 586:

    The process has no dead time and a phase lag of only 90°, indicating that feedback control would be straightforward for tight level control. This is actually the case in many systems, since the sensor and valve dynamics are usually negligible.

The system parameter Kp can be obtained with good accuracy from experiments or the physical dimensions of the tank. Thus, robustness problems due to process uncertainty are not a concern.

2.2.2 Disturbance Model

The role of the disturbance model was rather neglected in the early optimal control literature. However, its importance, especially in process control, is now well accepted. Still, this is not always the case, as stated in Harris and MacGregor (1987):

    All optimal controllers (i.e., those optimizing a performance index) must be based on a specified disturbance (or set-point change) model. Often in the design of such controllers the disturbance model is not explicitly stated, but rather indirectly implied in the problem formulation.

In averaging level control, very few papers have paid detailed attention to the input disturbance. The term "averaging" itself suggests some stationary stochastic process, because a non-stationary process cannot be averaged. However, many papers accept the need to bring the tank level back to a setpoint to maximize the surge capacity. This suggests that the incoming flow is non-stationary, since if Fd(t) is stationary there is no need to bring the tank level back to its setpoint, as stated in Shinskey (1988). Probably the authors of these papers envisioned the input disturbance as a combination of stationary rapid fluctuations, which can be averaged out, and less frequent step-type load changes. This kind of disturbance poses interesting control problems that involve statistical detection of abrupt changes or hidden Markov processes. However, this is left as one of the future endeavours.

Most of the performance analyses are based on responses to step disturbances, probably because of mathematical tractability. If the input disturbance is step-like and stays at a constant value long enough for the closed-loop system to settle down to a quiescent state, then analyses based on step responses would be valid. However, many real disturbances do not have this characteristic, and care must be taken in interpreting analyses based on the step assumption. In the paper machine wet end, disturbances for storage tanks are mostly stochastic. Most of the studies in a stochastic context assume that the input disturbance is a random walk, which is non-stationary. The random walk is a popular choice for process control because it introduces integrating action in the controller. However, in reality, all disturbances are bounded, and most of them can be deemed stationary if observed for a long enough time. In this thesis, stationary disturbance models are preferred because there is no strong need to eliminate offset in averaging level control. A simple stationary process with first-order dynamics is used as the disturbance model.

Let d(t) denote the input disturbance deviation variable d(t) := Fd(t) - E[Fd(t)], and w(t) white noise. It is assumed that d(t) is the output of a first-order low-pass filter with cutoff frequency ωd:

    d(s) = [ωd / (s + ωd)] w(s).    (2.3)

This model was used because most disturbances have some kind of roll-off frequency. Also, the input disturbance model for broke tanks, which is a binary-valued random process, can be characterized with respect to its frequency power spectrum with this model.

2.2.3 Flow Smoothness/Roughness Model

All the literature on averaging level control states that one of the control objectives is to keep the outlet flow smooth. However, the mathematical definition of flow smoothness varies from author to author. Furthermore, some papers contain no mathematical definition of flow smoothness at all, so it must be guessed from context. To apply optimization-based methods, which minimize some performance index, it is more convenient to express flow "roughness" or "variability", denoted by r(u(t)). In deterministic cases, the controller tries to minimize r(u(t)) subject to the level constraints. In stochastic cases, a natural choice for the flow roughness is the mean value of r(u(t)), E[r(u(t))].

Since the purpose of outlet flow smoothing is to minimize its adverse effect on the downstream processes, the flow roughness indicator should be selected taking the downstream process into account. As shown in Section 4.4, for downstream processes that can be approximated by a first-order system, Var[du(t)/dt] = Var[v(t)] works well if the downstream process time constant is much shorter than that of the closed-loop tank level system. If the downstream process mixes various material streams, and it is desired to keep mixing ratios constant, then Var[u(t)] rather than Var[v(t)] might be the better choice. However, if the input disturbance is non-stationary, Var[u(t)] does not exist, and cannot be used.
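The disturbance model (2.3) and the Var[v(t)] roughness measure can both be sketched in discrete time. Assuming a zero-order hold on the white-noise input over a step dt (a discretization choice made here, not in the thesis), the filter becomes a first-order recursion; the values of wd, dt, and the noise standard deviation are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def low_pass_disturbance(n, wd=0.05, dt=1.0, sigma_w=1.0):
    """Discrete sketch of (2.3): d' = -wd*d + wd*w becomes
       d[k] = a*d[k-1] + (1-a)*w[k-1], with a = exp(-wd*dt)."""
    a = np.exp(-wd * dt)
    d = np.zeros(n)
    w = rng.normal(0.0, sigma_w, n)
    for k in range(1, n):
        d[k] = a * d[k - 1] + (1.0 - a) * w[k - 1]
    return d

d = low_pass_disturbance(20000)
# Roughness in the Var[v(t)] sense, estimated from first differences:
v = np.diff(d) / 1.0  # divide by dt
roughness = float(np.var(v))
```

The sample variance of the first differences plays the role of Var[v(t)] here; because the filter removes most high-frequency content, this roughness estimate is far smaller than the variance of the driving noise itself.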
For broke tank level control, Bonhivers et al. (1999a) and Dabros et al. (2003) expand the idea of the downstream process to a certain paper quality index. Crisafulli and Peirce (1999) contains an excellent concrete example of the downstream process and industrial data. The process is a raw cane sugar factory, where crushed cane fibre and water, called "mixed juice", is heated and fed to a clarifier to extract clear juice. This clear juice is subsequently crystallized to produce the final product, sugar. The mixed juice goes through two surge tanks connected in series before it is fed to the clarifier. The paper reports a new feedforward control scheme to maximize the combined surge capacity of the two tanks, resulting in smooth inlet flow and reduced turbidity of the clear juice. The paper says that "the main technical control objective was to minimize the rate of change of the secondary mixed juice flow." Since the inlet flow to the surge tank fluctuates randomly, E[|v(t)|], or the more mathematically tractable E[v(t)²] = Var[v(t)], would be an appropriate mathematical measure of flow roughness in this case.

MRCO

The maximum rate of change in the outlet flow (MRCO) is mathematically defined as

MRCO := max_{0≤t<∞} |v(t)|.

Although the definition implies an infinite horizon, the MRCO makes sense in a finite horizon context because, for a step input disturbance, the maximum occurs early in the step response. The MRCO is used as the flow roughness indicator in many papers since it was used for PI tuning analysis with step responses (Cheung and Luyben, 1979b). For step disturbances, the MRCO is mathematically tractable for P and PI controllers, to the extent that it can be expressed in closed form. Also, intuitively, the MRCO seems to represent some kind of flow roughness.
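On sampled data the MRCO is simply the largest first difference of the outlet flow divided by the sampling period. A minimal sketch (the step-response signal below is illustrative only, not a thesis example):

```python
import math

def mrco(u, dt):
    # Discrete approximation of MRCO := max_t |v(t)| with v = du/dt,
    # computed as max_k |u[k+1] - u[k]| / dt over the sampled record.
    return max(abs(b - a) for a, b in zip(u, u[1:])) / dt

# For a first-order lag response to a unit step, the rate of change is
# largest at the first sample, so the maximum occurs early in the
# response, as noted above.
dt = 0.1
u = [1.0 - math.exp(-dt * k) for k in range(200)]
peak = mrco(u, dt)   # roughly (1 - exp(-dt))/dt, just below 1
```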
However, if the MRCO is taken at its face value (not as a control parameter for step responses of the PI controller), its effectiveness as a flow roughness indicator from the point of view of the downstream processes is limited. For most chemical processes, the effect of the outlet flow is cumulative, not a function of the value at a single time point. There may be some processes where the MRCO must be kept below some limit value to prevent catastrophes from happening. Then the problem should be posed as the minimization of some other flow roughness indicator subject to the MRCO limitation, not the minimization of the MRCO itself. Another drawback of the MRCO is that it is not cumulative; thus, many analytical and numerical optimization techniques, such as dynamic programming or quadratic programming, are not applicable. This non-cumulative nature of the MRCO may cause impractical (very oscillatory) solutions by linear programming, as reported in Sidhu (2003). Therefore, the MRCO is not used in this thesis.

2.3  Historical Perspective and Literature Review

It was recognized from the early stages of process control technology development that surge tanks require different control objectives from normal regulation loops, and many papers and sections of books have been written on averaging level control since the 1960s. They may be categorized into the following three groups: (1) P/PI detuning; (2) deterministic constrained optimization; and (3) stochastic constrained optimization. Each approach is discussed in detail in the sections below.

2.3.1  P/PI Detuning

Studies published up to the early 1980s constitute this group. The ideas expressed in these papers reflect the control hardware prevalent at that time: pneumatic or electronic single loop controllers. They are summarized as follows: 1. no mathematical definitions of the input disturbance nor of the outlet flow smoothness are given; 2.
performance analyses are mostly non-mathematical, depending heavily on visual inspection of responses to a step disturbance; 3. controller synthesis is mostly ad hoc or heuristic. This lack of clear definitions, both of the input disturbance and of the meaning of flow smoothness, led to a number of different schemes claiming good performance.

Proportional Only Control

This is advocated by Shinskey in Shinskey (1988) and Shinskey (1994), although originally he preferred nonlinear controllers (Shinskey, 1967). Since there is no integrating action, the tank level tends to be away from its setpoint (offset). To some authors, this offset is a shortcoming of the P controller. However, when the input disturbance is stationary or bounded, the offset is not disadvantageous. This point is clearly stated in (Shinskey, 1988): "Not only is there no need to return the level to some set point such as 50 percent, but this practice actually reduces the effective capacity of the vessel." Although the meaning of flow smoothness is not stated clearly, it appears to be defined as a smaller variation of u(t). It is shown later that the P controller is close to optimal when the input disturbance is a first-order low-pass stationary process and the flow roughness is represented by Var[u(t)]. Thus, if the input disturbance is such that the downstream process operates better with a smaller Var[u(t)], then the P controller performs well. However, care must be taken, since for some downstream processes Var[v(t)] is a better indicator of the flow roughness. Then a phase-lag network or low-pass filter is a better choice.

PI Tuning Rules

In Cheung and Luyben (1979b), tuning rules for PI controllers are given based on analyses of step responses. The design parameters are the maximum peak height (MPH) and the MRCO. No justification for using the MRCO as a flow roughness indicator was given.
It seems that the MRCO was selected for its mathematical tractability in step response analysis. The tuning procedure in the paper determines the PI controller settings from two design specifications, the MPH and the MRCO. This procedure potentially leads to a very lightly damped system, depending on how the MPH and the MRCO are selected. A better procedure would be to fix the damping factor η to some reasonable value between 1 and 0.7, specify the desired MPH, and accept the resulting MRCO as the price to pay for obtaining the target MPH. This is done in Marlin (1995) with η = 1.

Nonlinear PI Controllers

In Shunta and Fehervari (1976), two kinds of nonlinear PI controllers were proposed: one with two settings switched by the level error (when the level error is small, slower tuning makes the outlet flow change more smoothly, and faster tuning kicks in when the level error exceeds a limit to prevent level limit violation), and the other, called "wide range", which changes the proportional gain and reset time continuously. No tuning guidelines were given in the paper. To estimate the behaviour of a nonlinear controller against a random input disturbance is a difficult problem, even if the input disturbance distribution is available. Tuning of these controllers must be done on a trial-and-error basis. It seems these controllers did not gain much popularity. The basic weakness of nonlinear schemes based on the level error is that a large step-like input disturbance is not detected until the level error exceeds the threshold. It is best to act immediately against a large input disturbance to minimize any flow roughness indicator based on v(t) = u̇(t). As discussed later, the model predictive control (MPC) methods address this problem properly.

PL Level Controller

A Proportional-Lag (PL) level controller was proposed in Luyben and Buckley (1977) and further studied in Cheung and Luyben (1979a).
It is claimed that their approach maintains most of the desirable features of proportional-only control but eliminates offset. It is shown below that the PL controller is a PI controller when measurement noise or delay is negligible. In the PL controller, the outlet flow u(t) is the sum of a proportional term of the level error and a filtered inlet flow wf(t):

u(t) = KL e(t) + wf(t),   (2.4)

where KL is the proportional gain and e(t) is the level error (y(t) − yr). The filter is a first-order low-pass filter KF/(TF s + 1) (Cheung and Luyben, 1979a). In the following analysis, KF is set to 1 to avoid an offset in the level; there is no advantage in setting KF ≠ 1. Without losing generality, it is assumed that the level setpoint yr is zero, so e(t) = y(t). Using the same symbols for Laplace transforms of time signals, we get

u(s) = KL y(s) + w(s)/(TF s + 1).   (2.5)

The process is

y(s) = (Kp/s)[w(s) − u(s)].   (2.6)

Then, the transfer function from w to y, G_PL, is

G_PL = y(s)/w(s) = Kp TF s / [(TF s + 1)(s + Kp KL)] = s / [s²/Kp + s(KL + 1/(Kp TF)) + KL/TF].   (2.7)

Now, we close the loop with a standard PI controller Kc(1 + 1/(Tr s)), where Kc is the proportional gain and Tr the reset time. Then, the transfer function from w to y, G_PI, is

G_PI = y(s)/w(s) = s / [s²/Kp + Kc s + Kc/Tr].   (2.8)

From (2.7) and (2.8), for a given KL and TF, we can make G_PL = G_PI by selecting Kc and Tr as follows:

Kc = KL + 1/(Kp TF),   (2.9)

and

Tr = TF Kc/KL = TF + 1/(Kp KL).   (2.10)

Thus, the PL controller has no special advantages over the PI controller. The performance may differ from the PI controller if dead-time or large measurement noise is involved.
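The equivalence of (2.7) and (2.8) under (2.9) and (2.10) can be confirmed numerically by comparing the two disturbance transfer functions on the imaginary axis (the parameter values below are assumed for illustration):

```python
Kp, KL, TF = 2.0, 0.5, 10.0          # assumed plant gain and PL settings
Kc = KL + 1.0 / (Kp * TF)            # (2.9)
Tr = TF * Kc / KL                    # (2.10), equals TF + 1/(Kp*KL)

def G_PL(s):
    # (2.7): Kp*TF*s / ((TF*s + 1)*(s + Kp*KL))
    return Kp * TF * s / ((TF * s + 1.0) * (s + Kp * KL))

def G_PI(s):
    # (2.8): s / (s^2/Kp + Kc*s + Kc/Tr)
    return s / (s * s / Kp + Kc * s + Kc / Tr)

# The two forms of (2.10) agree ...
assert abs(Tr - (TF + 1.0 / (Kp * KL))) < 1e-12
# ... and the two closed loops have identical frequency responses.
for w in (0.01, 0.1, 1.0, 10.0):
    s = complex(0.0, w)
    assert abs(G_PL(s) - G_PI(s)) < 1e-12
```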
2.3.2  Deterministic Constrained Optimization

Supervisory computers and computer based distributed control systems (DCS) became widely available in the 1980s, and computation intensive control algorithms such as model predictive control (MPC) became practical as a result. Papers reviewed in this section are characterized by their handling of level constraints in a deterministic context (step input disturbance).

DMC

In Cutler (1982), the Dynamic Matrix Control (DMC) algorithm was applied to averaging level control. The paper does not give mathematical details, but the comments here can be inferred. In normal DMC, the future trajectory of the controlled variable in discrete time, y([k, k + Np]) (a positive integer Np is the prediction horizon), is compared to the desired trajectory yr([k, k + Np]), and the squared sum of the error, Σ_{i=k}^{k+Np} (y(i) − yr(i))², expressed as a function of the control increments Δu(k), is used in the performance index to be minimized. For averaging level control, the desired trajectory is set as a funnel shaped region centred around the level setpoint. The tank level error is deemed to be zero if it is within the funnel shaped region. Thus, the controller makes no move (Δu(t) = 0) if the level error is inside the funnel shaped region, or a smaller move just large enough to bring the level back into the region if the level error is outside. The paper does not mention level constraint handling, which is one of the strengths of DMC. It reported good filtering characteristics of the algorithm when the input disturbance is small enough that the predicted level stays inside the funnel shaped region. Although a number of simulation results were included, actual applications of this DMC algorithm to averaging level control problems have not been reported (McDonald and McAvoy, 1986).
Optimal Predictive Controller

In McDonald and McAvoy (1986), the averaging level control problem was studied in continuous time for a step input disturbance of known amplitude. The control objective is to minimize the MRCO while keeping the level within constraints. Due to the simplicity of the plant dynamics and the MRCO criterion, analytical solutions, which are nonlinear, were obtained. The amplitude of the input disturbance is estimated from the level and the outlet flow, and the implemented control algorithm takes the form of predictive control; thus, it was named the optimal predictive controller (OPC). The OPC does not have the weakness of the level error based nonlinear control schemes. The authors thought that it was necessary to bring the level to its setpoint, and combined a PI algorithm with the nonlinear solutions. The matter of how quickly the level should be brought back to the setpoint can only be discussed with stochastic input disturbances. The MRCO is difficult to use in a stochastic scenario. However, the paper treated the input disturbance as steps of undefined length. This makes the tuning procedure ad hoc, heavily dependent on simulations. The contribution of the paper is to put the averaging level control problem in the MPC framework.

MPC

The problem discussed above by McDonald and McAvoy (1986) was treated in discrete time in Campo and Morari (1989). As in continuous time, if u(t) or Δu(t) is not constrained, analytical solutions are available. The need to bring the level back to its setpoint was acknowledged without discussion. The problem was posed as a finite horizon constrained optimization. To bring the level to its setpoint, a terminal condition that the predicted level at the end of the control horizon, y(te), be equal to the setpoint yr was included. This makes the control algorithm cleaner than adding integral action in an ad hoc way.
The horizon length becomes a tuning parameter. The algorithm is implemented as a receding horizon controller. With the finite control horizon, the problem can be solved by linear programming when |u(t)| and |Δu(t)| are constrained. As was discussed for the case of the OPC, the proper selection of the control horizon requires stochastic arguments, but the paper treats the problem in a purely deterministic context. Nevertheless, the paper provides the most complete treatment of the problem for step-like input disturbances and the MRCO criterion. When |u(t)| and |Δu(t)| are not constrained, the control algorithm is implementable on any DCS, as was reported in Allison and Khanbaghi (2001).

2.3.3  Constrained Minimum Variance Controller

This methodology was introduced by Kelly (1998) for tuning PI controllers, and extended to more general cases including dead time by Foley et al. (2000). Both papers used a discrete-time formulation to investigate averaging level control as a stochastic optimal control problem. The input disturbance is assumed to be a random walk. The controller is synthesized to minimize the outlet flow roughness (usually expressed as the variance of the outlet flow rate change, Var[v(t)] in continuous time or Var[Δu(k)] in discrete time) while keeping the variance of the tank level Var[y(t)] at a target value. The performance index J in discrete time is set up as follows:

J = Var[y(k)] + ρ Var[Δu(k)].

In discrete time, ρ can be set to zero to yield the minimum variance controller, where Δu(k) is not constrained. The term "constrained" stems from the fact that ρ > 0 constrains Var[Δu(k)]; it does not imply that the level is constrained. Constrained optimization of stochastic systems is a very difficult problem, and no practical algorithm for present day control hardware is available (Batina et al., 2002). The level is indirectly constrained by specifying its variance.
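The Var[y(k)] versus Var[Δu(k)] trade-off behind this formulation can be seen in a simple discrete-time simulation (a sketch with assumed values, not the design procedure of the papers cited): a tightly tuned PI loop on an integrating tank fed by a random-walk inflow gives a smaller level variance but a rougher outlet flow than a detuned loop.

```python
import random

def simulate(Kc, Tr, n=100_000, dt=0.1, Kp=1.0, seed=7):
    # Integrating tank y' = Kp*(d - u), random-walk inflow d, and a
    # discretized PI controller u = Kc*y + (Kc/Tr)*integral(y).
    rng = random.Random(seed)
    y = d = ui = u_prev = 0.0
    ys, dus = [], []
    for _ in range(n):
        d += 0.1 * dt ** 0.5 * rng.gauss(0.0, 1.0)   # random-walk inflow
        ui += (Kc / Tr) * y * dt                      # integral term
        u = Kc * y + ui                               # outlet flow move
        y += Kp * (d - u) * dt                        # tank integrator
        ys.append(y)
        dus.append(u - u_prev)
        u_prev = u
    var = lambda v: sum(x * x for x in v) / len(v) - (sum(v) / len(v)) ** 2
    return var(ys), var(dus)

vy_tight, vdu_tight = simulate(Kc=1.0, Tr=5.0)    # faster tuning
vy_loose, vdu_loose = simulate(Kc=0.2, Tr=25.0)   # detuned
# Tight tuning trades a rougher outlet flow for a smaller level variance.
assert vy_tight < vy_loose and vdu_tight > vdu_loose
```

Both runs use the same disturbance realization (same seed), so the comparison isolates the effect of the tuning.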
Then, there is some probability that the level will violate its limits. For most practical applications, this method provides satisfactory results because input disturbances in the real world are bounded. The Gaussian assumption, which makes the input disturbance unbounded, is a fiction to expedite the mathematical derivations. However, if the input disturbance happens to be larger than the design specification, the level will exceed the limit because there is no hard constraint mechanism. This aspect is different from the OPC and MPC of the previous section, where closed loop hard level constraints are built into the algorithm. When there is no dead-time in the process, the optimal controller is a PI controller that makes the closed loop system second order with the damping factor η = √2/2. This fact was made clear by treating the problem in continuous time (Ogawa et al., 2002). This thesis extends this methodology to include stationary input disturbances and to include the downstream process in the performance index.

Chapter 3

Linear Optimal Controllers

3.1  Introduction

In this chapter, linear optimal controllers for averaging level control are studied. Although linear controllers cannot guarantee that the level constraints are met for random walk disturbances, they are considered for the following reasons: 1. For some problems, it is sufficient to detune existing linear controllers (PI or PID) in order to make the outlet flow smooth. The study here provides tuning guidelines. 2. The performance of linear controllers can be calculated analytically (and for some simple problems, in closed form). Usually this is not the case for nonlinear controllers, where simulation is the only reliable way to assess performance. The linear optimal controllers provide a reliable benchmark, which any nonlinear controller must surpass. The input disturbance Fd(t) is assumed to be either a non-stationary or a stationary stochastic process.
The performance index is chosen to express the smoothness of the outlet flow Fu(t) in mathematically tractable form, usually the variance of Ḟu(t) or a weighted sum of the variances of Ḟu(t) and Fu(t). Note that the latter choice is only available for stationary input disturbances. The output of a controller is denoted by u(t), and either u(t) = Fu(t) or u(t) = Fu(t) − Fm (Fm = E[Fd(t)]). In either case, Ḟu(t) = u̇(t) = v(t), Var[Fu(t)] = Var[u(t)], and Var[Ḟu(t)] = Var[v(t)]. Thus, in the sequel, u(t) is used in the variance expressions. Let y(t) denote the tank level and yr its setpoint. A linear controller is sought that minimizes the following performance index J:

J := E[(y(t) − yr)²] + ρ² E[v(t)² + μ(u(t) − E[u(t)])²]
  = Var[y(t)] + ρ² [Var[v(t)] + μ Var[u(t)]],   (3.1)

where μ = 0 when the disturbance is non-stationary. The weight ρ² is a Lagrange multiplier (Newton et al., 1957), so that Var[y(t)] is minimized subject to a constraint

Var[v(t)] + μ Var[u(t)] ≤ c,   (3.2)

where the constant c is the upper limit of the flow variability. In practice, c is not specified; rather, ρ² is adjusted to obtain an acceptable Var[y(t)]. Then the resulting optimum controller provides the minimum flow variability (or maximum smoothness). This approach was pioneered by Kelly (1998) and Foley et al. (2000) using discrete-time polynomial methods. In this thesis, the problems are treated in continuous time using state-space methods, so that some characteristics of the resulting controllers, such as the closed loop damping factor, become evident. One advantage of the discrete-time approach is its ease of handling dead-time. However, the target applications are such that dead-time can be safely ignored.

3.2  State-Space Method

The problem is solved using the state-space linear quadratic (LQ) optimization methods combined with noise-free observers.
Since most of the problems in this thesis are single-input single-output, the transfer function approach (Wiener-Hopf method) is a viable alternative, since there is no need for observer design. However, the state-space approach was adopted for the following reasons: 1. The algebraic Riccati equation (ARE) can be solved analytically for problems of this size, and the controller parameters can therefore be expressed in closed form. The procedure is more straightforward than the factorization which is necessary for the Wiener-Hopf method. When the problem dimension increases, stable computer algorithms and software for solving the ARE numerically are widely available. Presently, this is not the case for the Wiener-Hopf approach. 2. From the nature of the problem, noise-free observers are practical. Thus, the requirement of an observer is not really a disadvantage for the state-space approach. Rather it may, in some cases, add flexibility in controller design by providing a framework for selecting proper (or ad hoc) observers when the measurement noise cannot be ignored (Allison and Khanbaghi, 2001). Of course, by adding a noise model, the transfer function approach can produce the optimum controller that includes the observer. However, the problem size then becomes too large for easy factorization by hand, and the approach loses its edge. The basic equations and notation which will be used throughout this chapter are introduced here. The plant is expressed in the following state-space form:

(d/dt) x(t) = A x(t) + B u(t) + D ẇ(t),   (3.3)

where x(t) is the state vector, u(t) is the manipulated variable (not Fu(t) here), w(t) is a Wiener process, and ẇ(t) white noise. A, B, and D are constant matrices of appropriate size. The white noise is expressed as a formal derivative of the Wiener process w(t).
The derivative of the Wiener process does not exist in a strict mathematical sense, and the ordinary differential equation (ODE) (3.3) should properly be written as a stochastic differential equation (SDE). However, the SDE offers no advantage over the (formal) ODE in the development below, so the SDE notation is not used here. Generally, w(t) can be a vector process, but in this thesis w(t) and ẇ(t) are always scalar processes. The performance index J is

J = E[x(t)ᵀ Q x(t) + u(t)ᵀ R u(t)],   (3.4)

where Q is a positive semi-definite matrix and R a positive definite matrix. It is well known that in stochastic LQ problems where the white noise enters the system equation additively, the linear state feedback solutions are identical to those of the deterministic LQ problem (for example, see Fleming and Rishel (1975)). The optimum control u(t)* that minimizes J is of feedback form,

u(t)* = −K x(t).   (3.5)

The feedback gain K is related to a positive-definite solution P of the following algebraic Riccati equation (ARE):

P A + Aᵀ P − P B R⁻¹ Bᵀ P + Q = 0,   (3.6)

and

K = R⁻¹ Bᵀ P.   (3.7)

A number of stable computer algorithms and programs are available for numerically solving the ARE. However, the low dimensionality of the problems here makes it possible to obtain closed form solutions easily. Thus, most of the AREs are solved analytically, so that the equivalence of the state-space approach and the Wiener-Hopf method can be clearly seen.

3.3  Evaluation of Variances

It is necessary to calculate the variances of process variables to evaluate and compare the controllers. When a variable is the output of a rational transfer function driven by a signal with a rational spectrum, its variance can be calculated analytically.
If y(t) has a rational power spectrum density

Φy(s) = N(s)N(−s) / [D(s)D(−s)],   (3.8)

where

N(s) = b_{n−1} s^{n−1} + ··· + b1 s + b0,
D(s) = a_n s^n + a_{n−1} s^{n−1} + ··· + a1 s + a0,

its variance Var[y(t)] is calculated as

Var[y(t)] = (1/2πj) ∫_{−j∞}^{j∞} N(s)N(−s) / [D(s)D(−s)] ds.   (3.9)

Let I_n denote the above integral when D(s) is an n-th order polynomial. Newton et al. (1957) contains a table that lists I_n up to n = 10. The first three I_n are listed below:

I1 = b0² / (2 a0 a1),
I2 = (b1² a0 + b0² a2) / (2 a0 a1 a2),   (3.10)
I3 = [b2² a0 a1 + (b1² − 2 b0 b2) a0 a3 + b0² a2 a3] / [2 a0 a3 (a1 a2 − a0 a3)].

Hand calculation becomes unwieldy for n > 3. Astrom (1970) includes a recursive algorithm and a FORTRAN program for I_n. In this chapter the following short-hand notation is used:

I[N(s)/D(s)] := (1/2πj) ∫_{−j∞}^{j∞} N(s)N(−s) / [D(s)D(−s)] ds.   (3.11)

3.4  Stationary Input

In this section, the input disturbance Fd(t) is assumed to be the output of a first-order low-pass filter driven by white noise, plus a bias. The state-space formulation is straightforward when the input disturbance is zero-mean. Let Fm denote the mean of Fd(t), Fm := E[Fd(t)]. The deviation variable d(t) for the input disturbance is defined as

d(t) := Fd(t) − Fm.   (3.12)

Then,

d(s) = [ωd/(s + ωd)] w(s),   (3.13)

where ωd is the cut-off frequency and w(s) the Laplace transform of white noise.

3.4.1  System Equation and Performance Index

Since the process is an integrator, the mean of the outlet flow must be equal to Fm to keep the level stationary. Thus, u(t) is set as a deviation from its mean Fm:

x1(t) = y(t) − yr   tank level error
x2(t) = d(t)   input disturbance
u(t) = Fu(t) − Fm   outgoing flow deviation (manipulated variable)

As shown below, x2(t) and u(t) appear as the input disturbance and outlet flow in the system equation:

ẋ1(t) = Kp[Fd(t) − Fu(t)] = Kp[x2(t) + Fm − (u(t) + Fm)] = Kp[x2(t) − u(t)].   (3.14)

Since u(t) is stationary, not only the variance of u̇(t) = v(t) but also that of u(t) can be included in the performance index:

J := E[(y(t) − yr)²] + ρ² E[v(t)² + μ u(t)²]
  = Var[y(t)] + ρ² (Var[v(t)] + μ Var[u(t)]),   (3.15)

where μ is a weight for the u(t) variation. The control weight is set to ρ² rather than ρ to simplify the calculations for the optimal gain below. When μ > 0, the resulting controller tries to minimize the variances of both v(t) and u(t). Those cases are studied later in Chapter 6. Here, cases where μ = 0 are investigated to compare with PI controllers for random-walk disturbances. To include u̇(t) = v(t) in the performance index in a straightforward way, v(t) is treated as a formal control input, and u(t) as an element of the state vector x(t),

x(t) := [x1(t)  x2(t)  u(t)]ᵀ.   (3.16)

Then the performance index J is expressed with the following two matrices Q and R:

J = E{x(t)ᵀ Q x(t) + v(t) R v(t)},  Q = [1 0 0; 0 0 0; 0 0 μρ²],  R = ρ².   (3.17)

As x2(t) is the output of a low-pass filter with the cut-off frequency ωd,

x2(s) = [ωd/(s + ωd)] w(s)  ⇒  ẋ2(t) = −ωd x2(t) + ωd ẇ(t).   (3.18)

Then the following system equation is obtained:

(d/dt)[x1(t); x2(t); u(t)] = [0 Kp −Kp; 0 −ωd 0; 0 0 0] [x1(t); x2(t); u(t)] + [0; 0; 1] v(t) + [0; ωd; 0] ẇ(t).   (3.19)

3.4.2  State-Space Solution

Analytical Solution

The solution of (3.6), P, is expressed with its elements as

P = [p11 p12 p13; p12 p22 p23; p13 p23 p33].   (3.20)

Substitution of A, B, Q, and R of (3.19) and (3.17) into (3.6) yields six second order polynomial equations in the pij, from which a positive definite P can be obtained in closed form (algebraic details are included in Section 3.8.1). Since

K = Bᵀ P / ρ² = [p13  p23  p33] / ρ²,   (3.21)

only p13, p23 and p33, which are listed below, are required for K:

p13 = −ρ,  p23 = [Kp (√(2Kp/ρ) + ωd) / (ωd² + ωd √(2Kp/ρ) + Kp/ρ)] p13,  p33 = ρ √(2ρKp).   (3.22)

Thus, K = [k1 k2 k3] is obtained as follows.
k1 = −1/ρ,  k2 = −(Kp/ρ)(√(2Kp/ρ) + ωd) / (ωd² + ωd √(2Kp/ρ) + Kp/ρ),  k3 = √(2Kp/ρ).   (3.23)

3.4.3  Noise-Free Observer

In most applications, x2(t) is not measured, and an observer is necessary. A simple noise-free observer is constructed by differentiating x1(t). Since

ẋ1(t) = Kp[x2(t) − u(t)]  ⇒  s x1(s) = Kp[x2(s) − u(s)],
x2(s) = (s/Kp) x1(s) + u(s).   (3.24)

Usually, constructing an observer by differentiating the measured variable is not good practice, due to measurement noise. However, the problems here are formulated with a formal manipulated variable u̇(t), and differentiating once does not introduce real differentiation when the real manipulated variable is u(t). Even if real differentiation is necessary, level measurements usually have low noise. Also, most applications can be executed with long sampling periods, say 5 seconds to a few minutes, which make it possible to filter the level measurement adequately if noise is not negligible. Thus a noise-free observer is practical. See Allison and Khanbaghi (2001) for a Kalman filtering approach for noisy cases. The equation u̇(t) = v(t) = −Kx(t) is expressed in its Laplace transform,

s u(s) = −k1 x1(s) − k2 x2(s) − k3 u(s).   (3.25)

Substitution of (3.24) into (3.25) yields

u(s)(s + k2 + k3) = x1(s)(−k1 − s k2/Kp).   (3.26)

Let CL(s) denote the transfer function of the controller, u(s) = CL(s)(y(s) − yr) = CL(s) x1(s). Then

CL(s) = (−k1 − s k2/Kp) / (s + k2 + k3)   (3.27)
      = Kc (s + b)/(s + a),   (3.28)

where

Kc = −k2/Kp,  a = k2 + k3,  b = k1 Kp/k2.   (3.29)

The sensitivity function S is

S = 1 / [1 + (Kp/s) CL] = s(s + a) / [s² + s(a + Kp Kc) + Kp Kc b].   (3.30)

From (3.29) and (3.23),

a + Kp Kc = k2 + k3 + Kp(−k2/Kp) = k3 = √(2Kp/ρ),

and

Kp Kc b = Kp(−k2/Kp)(k1 Kp/k2) = −k1 Kp = Kp/ρ.

Thus,

S = s(s + a) / [s² + s √(2Kp/ρ) + Kp/ρ] = s(s + a) / (s² + √2 ωc s + ωc²),   (3.31)

where ωc = √(Kp/ρ). The closed loop system is a second order system with damping factor η = √2/2. The second-order maximally flat (Butterworth) filter has the same damping factor. This fact does not come out clearly in the discrete-time approach (Foley et al., 2000), where the sampling period introduces additional complexity.

3.4.4  Controller Parameterization

For controller design, the following parameters are given:

Kp  process gain
ωd  input disturbance cut-off frequency

As a tuning parameter, the variance ratio Rv(y) between the input disturbance Fd(t) and the level y(t) is selected:

Rv(y) := Var[y(t)] / Var[Fd(t)] = Var[y(t)] / Var[d(t)].   (3.32)

Rv(y) is specified to meet the level constraint requirements (the standard deviation of the tank level is √Rv(y) times the standard deviation of the incoming flow rate). A convenient way of characterizing the controller is by the bandwidth of the closed loop system relative to ωd,

κ := ωc/ωd.   (3.33)

Substitution of ωc = √(Kp/ρ) = κ ωd into (3.23) yields k2 and k3 in terms of κ and ωd. Then a and b are expressed with κ and ωd as follows:

a = [κ(κ + √2) / (κ² + √2 κ + 1)] ωd < ωd,
b = [(κ² + √2 κ + 1) / (√2 κ + 1)] ωd > ωd.

This shows that a < b; thus, the controller CL = Kc(s + b)/(s + a) is a phase lag network (this is why the controller is denoted by CL, where L stands for "lag"). Figure 3.1 shows a and b for κ between 0.01 and 100 for ωd = 1. Figure 3.2 shows the gain of CL = Kc(s + b)/(s + a) for κ between 0.01 and 100.

Figure 3.1: Plots of a and b of CL for κ between 0.01 and 100. (ωd = 1)

Figure 3.2: Gains of CL for κ = 0.1, 1 and 10. (Kp = 1 and ωd = 1)

Figure 3.3 shows the magnitude of the sensitivity function S for κ = 0.1, 1 and 10. In addition, for comparison purposes, the magnitude of S for a PI controller, whose shape is a function of the damping factor η only and does not change with κ (see Section 3.4.5), is plotted. The plot marked as PI is for η = 0.7 and κ = 1.

Figure 3.3: Magnitude of the sensitivity function S for CL for κ = 0.1, 1 and 10. |S| for C_PI with η = 0.7 is also plotted. (Kp = 1 and ωd = 1)

The controller performance is represented with the variance ratio between d(t) and u̇(t), Rv(u̇):

Rv(u̇) := Var[u̇(t)] / Var[d(t)].   (3.34)

Thus, for a given Rv(y), a smaller Rv(u̇) indicates a better controller. With the use of the formulae in (3.10), Rv(y) and Rv(u̇) can be expressed in terms of κ (algebraic details are in Section 3.8.2):

Rv(y) = (Kp²/ωd²) (κ⁴ + 3√2 κ³ + 9κ² + 6√2 κ + 3) / [√2 κ (κ² + √2 κ + 1)³],
Rv(u̇) = ωd² κ³ [(1 + √2 κ)²(κ² + √2 κ) + (κ² + √2 κ + 1)²] / [√2 (κ² + √2 κ + 1)³].   (3.35)

It is observed that Rv(y) ∼ Kp²/ωd² and Rv(u̇) ∼ ωd². The variance ratio of the level is proportional to the square of the process gain (this is easy to understand), and is inversely proportional to the square of the input disturbance cutoff frequency. When ωd is high, more energy in the input disturbance is contained in higher frequencies, and it is easier to filter them out with the same closed loop system, which is a low-pass filter. Also, when ωd is high, the input disturbance is "busy", which increases Rv(u̇). To subsume the two parameters Kp and ωd in the design parameters, the "standardized" variance ratios R̄v(y) and R̄v(u̇) are defined as follows.
v  v  := Ry(yWd *1  R (y) v  R (u) :•v  R M  K + 3 V2 2 V + 9K + 6V2~/C +3 3 4  V2/d> + V2K + l )  V2/c(/c + V2K+ 1)  2  2  d  (3.36)  1  =  0J  2  U + V2*+ I ; 2  R (y) and ^ (") are convenient in comparing controllers as they are independent of K and a>dv  v  p  Figure 3.4 shows R (y) and ^ (") for C and a PI controller that makes the closed loop damping v  v  L  factor n = V2/2.  3.4.5  C o m p a r i s o n to P I C o n t r o l l e r  A phase lag network necessary to implement the optimal controller CL  = K (s+b)/(s+a) c  is usually  not readily available in present day control system hardware, and it must be base loaded with the mean incoming flow F . On the other hand, PI controllers are a standard feature of every process m  control system. Also they don't have to be base-loaded with F . In short, the PI controller has a m  3.4. Stationary Input  30  R (y) (decreasing graphs) and R (ii) (increasing graphs) for CL and a PI controller with r\ = 0.7. x-axis is K = (o /<o<j. Figure 3.4:  v  v  c  practical advantage over Cx. In this section, the performance of a PI controller is compared to C . L  Let Cpj denote a PI controller,  Ts Cpr — K T s  +1  r  s + T~  ^  x  = K  r  (3.37)  r  c  r  where K  2 c  is the proportional gain and T reset time. The sensitivity function S is r  S =  s + KKs 2  p c  where OJ  c  as OJ  c  = ^K K T~  l  p  c  + K K T;  s + 2noj s  1  2  P C  c  +  (3.38)  co ' 2  and n = KpKc/(2coc) is the damping factor. As before,  OJC  is parameterized  After some calculations (details are in Section 3.8.3), the standardized variance ratios  = KiOd-  R (y) and R (u) are obtained. v  v  Rv(y)  =  1 ~>r>K + +1 1) ^ 2T}K(K + 2rjK 2  y [(l+4?y > + 87? ] 3  R (i ) v  2  2  3  = . . 2T)K(K + 2r]K + 1) 2  The same symbol K is used for both Q and C>/ as this does not cause any confusion. c  (3.39)  3.4. 
Stationary Input  31  If the optimum tuning of the PI controller is sought, it becomes the minimization of R (u) subject to v  R (y) = c\ (a positive constant c\ is the design specification). R (y) = c\ means 2T]K(K + 2TJK + 1) = 1/cj. Since 2VK(K + 2nK + 1) is the denominator of R (u), the minimization of R (u) becomes 2  v  v  2  v  v  minimization of its numerator K [(1 + 4n )/e + 8n ]. So the optimum PI controller is characterized 3  2  3  by a pair (K*, if): (K*,V*)  = argmin{/c [(l + 4n )K 3  2  subject to 2TJK(K + 2TJK + 1) = l/c{.  8n ]) 3  +  2  This is a mathematically tractable problem at least numerically. However, since (K*, n*) is a function of Rv(y), the solution may be impractical. Practically more meaningful cases would result when the PI controller is tuned to a "moderate" (meaning that the closed loop system is not too oscillatory nor too slow) mode, say n between 0.5 and 2. When the input disturbance is a random walk, the optimum controller is a PI controller with n = V2/2 as shown in Section 3.6. Also the critical damping rj = 1 is used often. These two values of n and n = 2 for slow mode, are used for comparison. Figure 3.5 shows the ratio of ^ (") by the PI controller and R (u) by CL- The v  v  performance of the PI controller deteriorates when Rv(y) = Rv(y)a)j/K is high. Thus 2  CL  is worth  consideration for cases of high Rv(y).  2.8 2.6 -  10"  3  1<f  10"' r„(y)  2  10°  10'  Figure 3.5: Comparison of CL and the PI controller with n = V2/2, 1, and 2. The x-axis is R (y) and the v  y-axis is R (u) of the PI controller divided by ^ (") of CLv  v  3.5. Minimum Variance of Outlet Flow  3.5  32  Minimum Variance of Outlet Flow  For some types of downstream processes, the adverse effect of F (f) is proportional to the deviation u  of F (t) from its mean value. Then the natural choice of the performance index is u  J=E[ (t) +p u(t) ]. 
Now, u̇(t) is no longer included in J, so the system equation is constructed with u(t) as a control input:

d/dt [x₁(t); x₂(t)] = [0  K_p; 0  −ω_d] [x₁(t); x₂(t)] + [−K_p; 0] u(t) + [0; ω_d] ẇ(t).    (3.41)

The optimal control u*(t) is a linear state feedback with the feedback gain K = [k₁ k₂],

u(s) = −k₁x₁(s) − k₂x₂(s).    (3.42)

With the noise-free observer as before (x₂(s) = K_p⁻¹s x₁(s) + u(s)),

u*(s) = x₁(s)(−k₁ − k₂K_p⁻¹s)/(1 + k₂) = (c₀ + c₁s) x₁(s),    (3.43)

where

c₀ = −k₁/(1 + k₂),   c₁ = −k₂K_p⁻¹/(1 + k₂).    (3.44)

This is a proportional plus derivative (PD) controller. Let P = {p_ij} denote the solution of the ARE. Then, the feedback gain K is

K = [k₁ k₂] = BᵀP/ρ² = [−K_p p₁₁/ρ²   −K_p p₁₂/ρ²].    (3.45)

The ARE is solved analytically with the following result:

p₁₁ = ρ/K_p,   p₁₂ = ρ²/(ρω_d + K_p).    (3.46)

Then

c₀ = (ρω_d + K_p)/(ρ²ω_d),   c₁ = 1/(ρω_d).    (3.47)

The controller C_PD(s) = c₀ + c₁s is not proper, and is implemented with a low-pass filter a/(s + a) to make the controller proper, as

C_PD(s) ≈ c₁(s + c₀/c₁) a/(s + a) = K_c (s + b)/(s + a),    (3.48)

where K_c = ac₁ and b = c₀/c₁. Thus, the PD controller is implemented as a lead-lag controller.

3.5.1 Comparison to P Controller

In Shinskey (1994), the proportional-only (P) controller is recommended for averaging level control. Although flow smoothness was not mathematically defined, it can be understood from the context that the author sought to reduce the variance of u(t). In the following, the PD controller is compared to P controllers. Although the PD controller is implemented as a lead-lag network, its performance is evaluated as C_PD(s) = c₀ + c₁s to simplify the calculation and to provide the best-case scenario for the PD controller.
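This comparison can be reproduced numerically. The sketch below uses the variance ratios worked out in the remainder of this subsection with K_p = ω_d = 1, tunes both controllers to the same level variance by bisection, and scans the level-variance target; it recovers the best-case variance ratio of about 0.864 (about 0.93 in standard deviation) quoted below.

```python
import math

def ry_pd(k):
    # R_v(y) for the PD loop with K_p = omega_d = 1
    return 1.0 / (k * (k + 1.0) ** 3)

def ru_pd(k):
    # R_v(u) for the PD loop
    return k * (k * k + 3.0 * k + 1.0) / (k + 1.0) ** 3

def ry_p(k):
    # R_v(y) for the P loop
    return 1.0 / (k * (k + 1.0))

def ru_p(k):
    # R_v(u) for the P loop
    return k / (k + 1.0)

def kappa_for(ry, target, lo=1e-6, hi=1e6):
    # ry(kappa) is strictly decreasing; geometric bisection for ry(kappa) = target
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        lo, hi = (mid, hi) if ry(mid) > target else (lo, mid)
    return math.sqrt(lo * hi)

def u_variance_ratio(ry_target):
    """Var[u] under PD control divided by Var[u] under P control,
    both loops tuned to the same level variance ry_target."""
    return ru_pd(kappa_for(ry_pd, ry_target)) / ru_p(kappa_for(ry_p, ry_target))

# scan the level-variance target for the best-case reduction
best = min(u_variance_ratio(10.0 ** (e / 50.0)) for e in range(-150, 151))
```

The scan shows the ratio approaching 1 at both very tight and very loose tunings, with the best case in between, which is why the practical benefit of the PD controller is limited.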
PD Controller Performance

The sensitivity function S is

S = 1/(1 + (c₀ + c₁s)K_p/s) = s/(s(1 + K_pc₁) + K_pc₀) = (1/(1 + K_pc₁)) s/(s + K_pc₀/(1 + K_pc₁)).    (3.49)

It is convenient to define a parameter ω_c as

ω_c := K_p/ρ.    (3.50)

From (3.47),

1 + K_pc₁ = (ω_c + ω_d)/ω_d   and   K_pc₀/(1 + K_pc₁) = ω_c.    (3.51)

So

S = (ω_d/(ω_c + ω_d)) s/(s + ω_c).    (3.52)

As before, define κ as ω_c = κω_d; then

S = (1/(1 + κ)) s/(s + κω_d).    (3.53)

After some calculations, R_v(y) is obtained,

R_v(y) = K_p²/(ω_d² κ(κ + 1)³).    (3.54)

The variance ratio between Var[u(t)] and Var[d(t)], R_v(u), is

R_v(u) = κ(κ² + 3κ + 1)/(κ + 1)³.    (3.55)

P Controller

The transfer function of the P controller C_P(s) is

C_P = K_c.    (3.56)

Then the sensitivity function with this controller is

S = 1/(1 + K_cK_p/s) = s/(s + K_cK_p) = s/(s + ω_c),    (3.57)

where ω_c = K_cK_p = κω_d. Then, R_v(y) and R_v(u) for C_P are obtained:

R_v(y) = K_p²/(ω_d² κ(κ + 1)),    (3.58)
R_v(u) = κ/(κ + 1).    (3.59)

The reduction of R_v(u) by C_PD compared to C_P is plotted against R_v(y) in Figure 3.6. The maximum reduction is 0.8638 when R_v(y) = 0.7077. This reduction is 0.93 in terms of standard deviation. So the PD controller reduces the standard deviation of u(t) by 7% compared to the P controller at best. In reality, the reduction is less than 7% due to the low-pass filter. This small reduction probably means that for practical purposes, the PD controller is no better than the P controller.

Figure 3.6: The ratio of the variance of u(t) between C_PD and C_P. x-axis is R_v(y).

3.6 Random Walk Input

Although stationary processes are preferred as a model of the input disturbance, the random-walk case is treated here for two reasons: (a) to show that the optimal controller is the same PI controller as for deterministic cases where the input disturbance is a step, and (b) to show a proper way to choose the state variables for a non-stationary disturbance.
Doss et al. (1983) considered the control of liquid levels in three tanks in series by a state-space LQ approach without explicitly including the input disturbance in the state, but resorted to other more intuitive approaches that they felt led to a simpler, more physically acceptable scheme. Harris and MacGregor (1987) reworked the same problem, and showed that the same satisfactory result can be obtained by the transfer function method with a disturbance model. If the input disturbance had been included in Doss's work as shown below, the state-space approach would have produced a satisfactory result.

A continuous-time random walk is a Wiener process w(t). Since w(t) is non-stationary, the difference between the incoming flow and the outgoing flow, which is stationary, is chosen as a state variable. With a Wiener process w(t) as the input disturbance, the manipulated variable u(t) = F_u(t), and the tank level error x₁(t) = y(t) − y_r, the process is expressed as

x₁(t) = x₁(0) + K_p ∫₀ᵗ (w(τ) − u(τ)) dτ.    (3.60)

With x₂(t) = w(t) − u(t), ẇ(t) = dw(t)/dt, and v(t) = u̇(t), the system equation is

d/dt [x₁(t); x₂(t)] = [0  K_p; 0  0] [x₁(t); x₂(t)] + [0; −1] v(t) + [0; 1] ẇ(t).    (3.61)

The performance index J is

J := E[x₁(t)² + ρ²v(t)²] = E[x(t)ᵀ Q x(t) + R v(t)²].    (3.62)

Note that the outlet flow u(t) itself cannot be included in the performance index because it is non-stationary and may not have a well-defined variance. The optimum control v*(t) that minimizes J is of feedback form

v*(t) = −K x(t).    (3.63)

As before, the solution of the ARE, P = {p_ij}, is obtained analytically. Substitution of A, B, Q, and R of (3.61) and (3.62) into (3.6) yields the following three equations:

−p₁₂²/ρ² + 1 = 0,
p₁₁K_p − p₁₂p₂₂/ρ² = 0,    (3.64)
2p₁₂K_p − p₂₂²/ρ² = 0.

The above three equations and the positive definiteness of P determine p_ij as follows:

p₁₂ = ρ,   p₂₂ = ρ√(2ρK_p),   p₁₁ = √(2ρ/K_p).    (3.65)

And

K = R⁻¹BᵀP = [−p₁₂/ρ²   −p₂₂/ρ²] = [−1/ρ   −√(2ρK_p)/ρ].    (3.66)

Since v(t) = u̇(t) = −Kx,

s u(s) = x₁(s)/ρ + x₂(s)√(2ρK_p)/ρ.    (3.67)

Usually x₂(t) is not available for measurement and an observer is necessary. A simple noise-free observer is designed below. Since ẋ₁(t) = K_p x₂(t), in Laplace transform, s x₁(s) = K_p x₂(s). Thus,

x₂(s) = K_p⁻¹ s x₁(s).    (3.68)

Substitution of (3.68) into (3.67) yields

s u(s) = x₁(s)(1/ρ + s√(2ρK_p)/(ρK_p)) = x₁(s)(1/ρ + s√(2/(ρK_p))).    (3.69)

Thus

u(s) = x₁(s)(1/(sρ) + √(2/(ρK_p))).    (3.70)

This is a PI controller with the proportional gain K_c = √(2/(ρK_p)) and the reset time T_r = √(2ρ/K_p). The sensitivity function S = 1/(1 + PC) is

S = s²/(s² + s√(2K_p/ρ) + K_p/ρ) = s²/(s² + √2 ω_c s + ω_c²),    (3.71)

where ω_c = √(K_p/ρ). So the optimum controller is a PI controller that makes the closed-loop system a second-order low-pass filter with a damping factor 0.5√2.

3.6.1 Effects of Damping Factor

Some PI tuning rules recommend the damping factor η = 1 for integrating processes, for example Morari and Zafiriou (1989). The effect of η on Var[v(t)] is investigated when η deviates from the optimum value η* = 0.5√2. The PI controller is tuned so that the sensitivity function becomes

S = s²/(s² + 2ηω_c s + ω_c²).    (3.72)

Then Var[y(t)] and Var[v(t)] become (denoting the variance parameter of w(t) with σ_w²),

Var[y(t)] = K_p²σ_w²/(4ηω_c³),   Var[v(t)] = σ_w²ω_c(4η² + 1)/(4η).    (3.73)

For a given Var[y(t)] target, ω_c is adjusted for each η ≠ η* to keep Var[y(t)] at its target. Then Var[v(t)] is compared to the optimum value Var*[v(t)],

Var[v(t)]/Var*[v(t)] = ((4η² + 1)/3)(1/(√2 η))^{4/3}.    (3.74)

This is plotted for η ∈ [0.5, 1.0] in Figure 3.7. Var[v(t)] increases by 5% when η = 1.

Figure 3.7: Comparison of the optimum controller (η = √2/2) and non-optimum tunings (η ∈ [0.5, 1]).
X-axis is for the damping factor η and Y-axis for the variance of u divided by the optimum variance.

3.7 Summary

The averaging level control problem is formulated as a minimization of Var[u̇] subject to the target tank level variance Var[y(t)]. The problem is solved by the state-space method combined with a noise-free observer. When the input disturbance d(t) is modelled as a stationary random process (the output of a first-order low-pass filter with a cutoff frequency ω_d driven by white noise), the optimum controller C_L is a phase-lag network,

C_L = K_c (s + b)/(s + a).

When the input disturbance is modelled as a random walk, the optimum controller is a PI controller,

C_PI = K_c (s + T_r⁻¹)/s.

For both C_L and C_PI, the closed-loop system is a second-order system with damping factor √2/2. A single parameter R̄_v(y) that characterizes the controller is introduced,

R̄_v(y) := (ω_d/K_p)² Var[y(t)]/Var[d(t)].

For a stationary input disturbance, C_L and C_PI are compared by the ratio of Var[u̇(t)] with the two controllers set up to give the same tank level variance. C_L is not much different from C_PI when R̄_v(y) is small (below 1), but C_L performs better for higher R̄_v(y). For instance, for R̄_v(y) = 10, C_L's Var[u̇(t)] is 1/2.6 of that of C_PI.

When the flow smoothness is represented by the variance of u(t) instead of u̇(t), the optimum controller, which minimizes Var[u(t)] with a target tank level variance, is a proportional-plus-derivative (PD) controller, C_PD,

C_PD = c₀ + c₁s.

However, the performance difference between C_PD and a proportional-only controller C_P is small (7% in standard deviation of u(t) at best), and for practical purposes, C_P is recommended.

3.8 Mathematical Details

3.8.1 Solution of Section 3.4.2

The ARE yields the following six equations.
(a) −p₁₃²/ρ² + 1 = 0
(b) p₁₁K_p − p₁₂λ_b − p₁₃p₂₃/ρ² = 0
(c) −p₁₁K_p − p₁₃p₃₃/ρ² = 0
(d) 2p₁₂K_p − 2p₂₂λ_b − p₂₃²/ρ² = 0
(e) −p₁₂K_p + p₁₃K_p − p₂₃λ_b − p₂₃p₃₃/ρ² = 0
(f) −2p₁₃K_p − p₃₃²/ρ² + ρ²ρ̄ = 0

First, from (a), p₁₃ = ±ρ, but p₁₃ must be −ρ to make p₃₃ real in (f):

(f) ⇒ p₃₃ = ρ√(2ρK_p + ρ²ρ̄).

Then, using (c),

(c) ⇒ p₁₁ = −p₁₃p₃₃/(ρ²K_p) = √(2ρ/K_p + ρ̄(ρ/K_p)²).

Using (b),

(b) ⇒ p₁₂ = λ_b⁻¹(p₁₁K_p − p₁₃p₂₃/ρ²) = λ_b⁻¹(√(2ρK_p + ρ²ρ̄) + p₂₃/ρ).

Then, substituting the known elements into (e),

−p₁₂K_p + p₁₃K_p − p₂₃λ_b − p₂₃p₃₃/ρ² = 0,

which is solved for p₂₃,

p₂₃ = −K_pρ(λ_b + √(2K_p/ρ + ρ̄)) / (λ_b² + λ_b√(2K_p/ρ + ρ̄) + K_pρ⁻¹).

So the controller zero is

b = −K_pρ/p₂₃ = (λ_b² + λ_b√(2K_p/ρ + ρ̄) + K_pρ⁻¹) / (λ_b + √(2K_p/ρ + ρ̄)).

3.8.2 R̄_v(y) and R̄_v(u) in 3.4.4

As the sensitivity function is

S = s(s + a)/(s² + √2ω_c s + ω_c²),

the transfer functions from d(s) to x₁(s), G_DY, and from d(s) to u̇(s), G_DU, are

G_DY = K_p(s + a)/(s² + √2ω_c s + ω_c²),   G_DU = K_pK_c(s² + bs)/(s² + √2ω_c s + ω_c²).

Thus

Var[x₁(t)] = I_v( K_p(s + a) / ((s + ω_d)(s² + √2ω_c s + ω_c²)) ).

We express ω_c as κω_d; then a, b, and K_c are expressed as follows:

a = (κ² + √2κ)ω_d/(κ² + √2κ + 1) < ω_d,   b = (κ² + √2κ + 1)ω_d/(√2κ + 1) > ω_d,   K_pK_c = κ²(√2κ + 1)ω_d/(κ² + √2κ + 1).

Evaluating the integral³,

Var[x₁(t)] = (K_p²/(2ω_d³)) (κ⁴ + 3√2κ³ + 9κ² + 6√2κ + 3)/(√2κ(κ² + √2κ + 1)³).

³Here, only the variance ratio is concerned, so the input variance can be set to any convenient value.

Since Var[d(t)] = I_v(1/(s + ω_d)) = 1/(2ω_d),

R_v(y) = Var[x₁(t)]/Var[d(t)] = (K_p²/ω_d²)(κ⁴ + 3√2κ³ + 9κ² + 6√2κ + 3)/(√2κ(κ² + √2κ + 1)³),

so

R̄_v(y) := R_v(y)ω_d²/K_p² = (κ⁴ + 3√2κ³ + 9κ² + 6√2κ + 3)/(√2κ(κ² + √2κ + 1)³).    (3.75)

Similarly for Var[u̇(t)] and R_v(u),

Var[u̇(t)] = I_v( K_pK_c(s² + bs) / ((s + ω_d)(s² + √2ω_c s + ω_c²)) ),

and, since K_pK_c = −k₂, after some algebra we get

R̄_v(u) := R_v(u)/ω_d² = κ³(3κ⁴ + 6√2κ³ + 9κ² + 3√2κ + 1)/(√2(κ² + √2κ + 1)³).    (3.76)

3.8.3 R̄_v(y) and R̄_v(u) in 3.4.5

C_PI(s) = K_c(T_r s + 1)/(T_r s) = K_c(s + b)/s,   b = T_r⁻¹.

Then

S = 1/(1 + K_pK_c(s + b)/s²) = s²/(s² + K_pK_c s + K_pK_c b) = s²/(s² + 2ηω_c s + ω_c²).

Thus

K_pK_c = 2ηω_c   and   K_pK_c b = ω_c².

Substitution of ω_c = κω_d yields K_pK_c = 2ηκω_d and b = κω_d/(2η). So

G_DY = K_p s/(s² + 2ηκω_d s + κ²ω_d²),   G_DU = (2ηκω_d s² + κ²ω_d² s)/(s² + 2ηκω_d s + κ²ω_d²).

Var[x₁(t)] = I_v( K_p s / ((s + ω_d)(s² + 2ηκω_d s + κ²ω_d²)) ) = K_p²/(2ω_d³ · 2ηκ(κ² + 2ηκ + 1)),

R_v(y) = Var[x₁(t)] 2ω_d = K_p²/(ω_d² 2ηκ(κ² + 2ηκ + 1)),

R̄_v(y) := R_v(y)ω_d²/K_p² = 1/(2ηκ(κ² + 2ηκ + 1)).

Var[u̇(t)] = I_v( (2ηκω_d s² + κ²ω_d² s) / ((s + ω_d)(s² + 2ηκω_d s + κ²ω_d²)) ) = (ω_d/2) κ³[(1 + 4η²)κ + 8η³]/(2ηκ(κ² + 2ηκ + 1)),

R̄_v(u) := R_v(u)/ω_d² = Var[u̇(t)] 2ω_d/ω_d² = κ³[(1 + 4η²)κ + 8η³]/(2ηκ(κ² + 2ηκ + 1)).    (3.77)

Chapter 4

Downstream Processes

4.1 Introduction

In the previous chapter, the smoothness of the outlet flow F_u(t) is mathematically captured as the variance of Ḟ_u(t) = v(t) and/or F_u(t), and the problem is solved by optimization with the following performance index (3.15),

J₁ = Var[y₁(t)] + ρ²(Var[v(t)] + ρ̄Var[u(t)]),

where y₁(t) is the tank level and u(t) is the deviation of F_u(t) from its mean value F_m. The reason for requiring outlet flow smoothness is to minimize adverse effects of tank level control on the downstream processes. In this chapter, the downstream process variable is explicitly included in the problem formulation.
The averaging level control problem is considered as the minimization of the downstream variable variance while keeping the tank level between high and low limits. Let y₂(t) denote the downstream process variable. The minimization of Var[y₂(t)] makes the problem mathematically tractable, and it also makes practical sense in some cases. For example, y₂(t) may represent a product quality parameter, and y₂(t) must be in a specified range for the product to be acceptable. Minimizing Var[y₂(t)] then maximizes the product yield.

In this chapter, the following performance index is used:

J := Var[y₁(t)] + ρ₂²Var[y₂(t)] + ρ₁²Var[v(t)].    (4.1)

As in Chapter 3, ρ₁ and ρ₂ serve as Lagrange multipliers so that Var[y₁(t)] is minimized subject to ρ₁²Var[v(t)] + ρ₂²Var[y₂(t)] = c, where c is some constant. Actually, c is not given, and ρ₁ and ρ₂ are adjusted to obtain a required Var[y₁(t)], which is specified as one of the design parameters. Since the problems are treated in continuous time, a non-zero ρ₁ is necessary to keep the feedback gain K and Var[v(t)] finite.

4.2 Downstream Process Model

Figure 4.1 depicts a model of a combined system of the tank level control loop and the downstream process, where the outlet flow F_u(t) enters at the output of the downstream process P₂.

Figure 4.1: The tank level control loop (C₁ and P₁) and the downstream process P₂. The outlet flow F_u(t) enters at the output of P₂.

The purpose of the model is to investigate how the downstream process influences tank level controller design when the statistical relation between F_u(t) and y₂(t) is taken into account. The model is not meant for designing a multivariable system, where F_u(t) is determined from both y₁(t) and y₂(t). Thus, the interaction between F_u(t) and y₂(t) is simplified, excluding any deadtime or gain units which may exist between F_u(t) and y₂(t). Therefore, it is assumed that the downstream process variable y₂(t) is not available to the tank level control loop. If F_u(t) affects more than one downstream process, y₂(t) is a hypothetical "representative" process variable that has no real physical entity. Downstream processes are represented here by a first-order plus deadtime model,

P₂ = K₂ e^{−τs}/(T₂s + 1) = K₂ω₂ e^{−τs}/(s + ω₂),

where K₂ is the steady-state gain, T₂ the time constant, ω₂ = 1/T₂ the cutoff frequency, and τ the
Therefore, it is assumed that the downstream 2  u  2  u  2  process variable y (t) is not available to the tank level control loop. If F (f) affects more than one 2  u  downstream processes, y (t) is a hypothetical "representative" process variable that has no real 2  physical entity. Downstream processes are represented here by afirst-orderplus deadtime model.  Pi  K  2  =  TS 2  -e~ = K TS  +1  2  S +  0>2  where K is the steady state gain, T the time constant, co = T the cutoff frequency and T the 2  2  2  2  4.2. Downstream Process Model  45  deadtime. It is assumed that P is closed with a PI controller C , 2  2  C = Kc ( l + -U = K -^-, \ T sj s  (4.2)  S  2  C  r  where K is the proportional gain, T the integrating time, and co = l/T . Then the sensitivity c  r  r  r  functions'2 is o  1  <*+<Ol)  1 + PC 2  (  4  3  )  s(s + io ) + K to K (s + co )e-  TS  2  2  2  2  c  r  The IMC tuning rules (Morari and Zafiriou, 1989) recommend either T = T or T = T + 0.5r r  2  r  2  and (Ogawa, 1995) T ~ T + 0.5T for robust performance. Here, T is set to T , that is co = co , to r  2  r  2  r  2  simplify the analysis. Then, S becomes 2  S = .. s,+ K K o) e-  .  2  (4.4)  TS  2  c  2  '  S = \ /(l + P2C2) is the transfer function from u(s) to y (s) when u(s) enters at the output of the 2  2  downstream process, The transfer functionfromii(s) = v(^) toy is S /s, which is a low-pass filter. 2  2  When the deadtime T is small, S /s can be approximated by thefirst-orderfilter. 2  G :=^*—!—, v s + co  (4.5)  V2  2c  where co = K K co 2c  2  c  2  is the closed loop cutofffrequencyof the downstream process. For robust  performance, K is set to make K K between 0.25 and 0.4 , that is, 1  c  C  2  co = 0.25 ~ 0.4w . 2c  (4.6)  2  When the deadtime is not negligible, e~ can be represented by a Pade approximation, TS  _, T  1 - 0.5™  io -s  2  1 + 0.5TS  CO + S  T  T  T  Then G  L_Li!i =  n  S + K ^ C J ^ j ^  s +  S  2  + S(C0 T  \  .  
C0 ) +  (4.8)  L0 LO  2c  2c  T  This is a low-pass filter with a cut-off frequency = ^co co . The gain plot has -40 dB/decade 2c  T  slope between to' and w , and -20 dB/decade slope beyond co . This can be approximated by a p  T  T  first-order low-passfilterwith a cut-offfrequencyco chosen between co' and co . p  G  V2  ~ —-—, S + COp  p  yJoj co <co < co . 2c  T  p  T  T  (4.9)  4.2. Downstream Process Model  46  20 10  0  C TD 3  -10  -20 -30 -40 -50 -2  -1.5  -0.5  -1  0.5  0  log(to)  1  1.5  2  Figure 4.2: The magnitude of Gvi (solid line) and two approximations for io - K = r = 1 and K - 0.3. The dashed line is for OJ = io'p and dotted line is for a>p = l.4a>'p. 2  2  c  p  Figure 4.2 shows the true Gvi and two approximations with different choice of a> . p  When u enters at the input of the downstream process, Gvi has an additional turn-over frequency a>2fromwhich \Gvi\ decreases with -40dB/decade. If such a model is used, the resulting controller, which is more complex, will produce outputs that have more highfrequencycomponents. Therefore, in the sequel, the downstream process is assumed to be a simple low-pass filter driven by v(t). Let x (t) denote the deviation of^(0 from its mean value: 3  xi(t):=y {f)-E\y (t)l 2  (4.10)  2  Since x (t) is the output of a low-pass filter driven by v(r), 3  x (i)  = -oj x (t)  3  + v(t).  p 3  (4.11)  In reality, there is some "gain" involved between v(t) and x (t) depending on where and how u(t) 3  enters the downstream process. Then the downstream model is  x (s) = -^-v(s),  (4.12)  3  S +  (O  p  where Kj is the gain. The downstream process variable x (t) is assumed not available for the tank 3  level control loop, and recovered by a noise-free observerfromxi(t) and w(r) using the downstream process model (4.12). Therefore,  N (s) c  'The exact value depends on the plant uncertainty and T (Ogawa, 1995), but the IMC rules recommend similar values (Morari and Zafiriou, 1989)  • ' •  4.3. 
State-Space Solution  47  where CD(S) is the controller transfer function. CD is calculated to attain required Var[yi(0] and Var[v(r)] with the minimum Var[y ] by adjusting pi and p in (4.1). Let C denote the optimum 2  2  D  controller for Kj = 1, and C' for Kd ± \, both with the same Var[yi(r)] and Var[v(f)]. Let cr , 2  D  denote Var[y (r)] f ° K = 1 and o\ for K ± \. From (4.12), cr = K^a if the same controller r  2  2  d  2  d  is used for both cases. If CD i C' , cr ± K a . This contradicts the fact that CD and C' are the 1  2  2  D  d  D  optimum controllers for each case. Thus, CD must be equal to C' . This means that Kd does not D  influence the optimum controller. However it does influence the value of p to obtain the desired 2  Var[yi(r)] and Var[v(r)]. Thus, Kd is set to a convenient value for calculation.  4.3  State-Space Solution  The downstream process variable x->,{f) is added to the state variables of Chapter 3. x  \ (0  =  x(0  tank level error  = Fd(i) - F  2  *3(0  y\ (0 ~~ yr  input disturbance deviation  m  =  u(t)  yi(f) ~ E[y2(0] downstream deviation variable  = F (t) - F u  outgoing flow deviation (manipulated variable)  m  In this chapter, the input disturbance is assumed to be a stationary process with the first-order low-pass spectrum. x (t) = -ojdXiit) + toaMt). 2  (4.13)  Figure 4.3 shows the structure of the combined system of the tank level control loop and the white noise —H  C  bid  x  s + a>d u  J ~ T " T ' J  T  2  K  n  i  cop s + a>  n  Figure 4.3: The system structure for the tank level control loop and the downstream process.  4.3. State-Space Solution  48  downstream process, where the downstream process variable x is controlled in an open-loop man3  ner from v. The controller, C, calculates the formal manipulated variable v from x\, x , x and u, 2  3  but x and x are not measured. Thus, they are not shown as inputs to C. With (4.11) and (4.13), 2  3  the following system equation is obtained.  
d  Xl(t)  0  x (t)  0  2  dt  u(t)  K -co  0  p  0  0  0  0  o  0  d  0  X\(t)  x {t)  +  xii ) u(t)  p'"  -ojr  0  2  0  1  0  v(t)  +  1  COd  0  (4.14)  Mt)  0  The performance index is / = E[xi(0 + pv(0 + P*3(0] 2  2  2  2  2  = E[x(t) Qx(t) + Rv(t) ], T  (4.15)  2  where r  1 0 0 0 0 0 0 0 0 0  p\  and  R  =  [p]].  (4.16)  0  0 0 0 0 In continuous time, p\ must be greater than/zero for the problem to have a proper solution. As ' j\ '  usual, the positive-definite solution of the ARE, P and the feedback gain K = [k\ k  2  k  3  k*\ are  !1 • •  obtained. For a problem of this size, it is difficult (may not be impossible) to obtain closed form analytical solutions. Thus, all the analyses in this chapter rely on numerical solutions. The optimal control is a linear state feedback form, su(s) = -k\X\(s) - k x (s) - faxiis) - k u(s) 2  2  4  (4.17)  With the noise-free observer,  x (s) 2  = K sx\(s)  xi(s) =  su(s)  = -&iXi(s) -  S + 0),  k [K^sxiis) 2  +  p  u(s)  (4.18)  •u(s)  + u(s)] - k S + 3  -u(s) - k^u(s)  0)  B  (4.19)  4.4. Performance Comparison  49  Thus the controller denoted with CD(S) is -(s + io )(k K- s + ki) x  p  C (s) = D  2  s + (oj + k + h + k )s + oj (k + k )  ^  2  p  2  4  p  2  4  b s + b\s + bo s + a\s + ao 2  2  2  Since C (s) includes a factor (s + OJ ), which is due to the noise-free observer, in its numerator, it D  P  cancels the pole in the downstream process . 
The sensitivity function S is 2  (s + a  + ao)  2  s  =  S  lS  s + s (a + K b ) + s(a + K b ) + K b ' 3  2  {  p  The transfer functions from x (s) to x\(s), X  p  to v(s), G  X  DV  x  p  0  and to x (s), 3  are  GDY  2  K (s + a\s + ao) 5 + s (ai + K b ) + s(a + K b{) + K b 2  p  3  2  p  Q  0  GDY ,  2  GDY =  2  2  0  p  p  0  sK jb s + b s + 6p) 2  =  p  2  x  s + s (ai + K b ) + s(a + K b ) + K b  D V  3  '  2  p  2  0  p  x  p  0  sK (b s + bis + b ) s + s {ai + K b ) + s(a + K b\) + K b 2  p  GDY  3  2  2  0  2  p  2  0  p  p  0  1 s + a>  p  From the above transfer functions, the variances ofy\(i), v(r), and y {t) driven by F (i) are obtained 2  d  using the integrals of (3.11). Since Var[F (f)] = a>d/2, the variance ratios R (-) are obtained by rf  v  dividing the variances by a>d/2. For example, R  ^  i  ) 1  =  (_^£  K (s + a 2  I  \s + CL)  p  v  d  4.4  lS  \ 2_  +a) 0  s + s (a\ + K b ) + s{ao + Kpbx) + Kpbo) ojd' 3  2  p  2  Performance Comparison  In this section, the performance of Co (4.20) is compared to that of a controller designed with no consideration given to the downstream process. The controller, denoted with d, that minimizes the performance index J = Var[yi(r)] + pfVar[v(0] is compared to C . To compare on an equal L  D  basis, CD and C are tuned so that the two, controllers yield the same tank level variance. The main L  concern of comparison is the reduction of Var[y(0] by CD compared to CL- We denote the variance 2  2  In the Wiener-Hopf method, the reason (s + OJP) shows up in the controller transfer function numerator is obscured  by complicated algebra - another advantage of the state-space approach.  4.4. Performance Comparison  50  ratios under a particular controller by subscripts, for instance, R (y2) under CD(S) is denoted with v  R-v(yi)c - The reduction of the 3/2 (0 variance due to CD is denoted with r (y2) that is, v  D  r  4.4.1  M  5  (4.24)  .*^S..  
:  Effects of v(0 weight  Since CD is designed with the performance index J = Var[yi(Y)] + p Var[v(/)] + p^Varlj^COL for a 2  given target R (yi), different manipulated variable variance, R (v), is obtained by adjusting pi and v  v  P2. As CL minimizes Var[v(r)] for a given Var[yi(r)], VarKOk^VarKO]^. The Var[v(r)] increase by C over C is denoted with r (v), D  L  v  r  " (  v  )  :  p  =  /  \  ^v(v)  •  (  4  -  2  5  )  Ci  Intuitively speaking, Co reduces Var[y2(0] compared to Cx by increasing Var[v(/)]. The downstream process is a low-pass filter driven by v(t). Thus, high frequency components of v(t) perturb 72(0 very little. Therefore, Co shifts control energy to higher frequencies to reduce Var[>>2(0]Var[y2(0] would be reduced more for larger r (v) with large high frequency components. The v  gains of Co and Q are compared as shown in Figure 4.4. CD has higher gains at high frequency ranges compared to CL and lower gains at lower frequencies.  Figure 4.5 shows the magnitude  of the sensitivity function S, \S\, for Co with r„(v) = 1.2 and 2, and for CL. \S\ for CD has lower magnitudes around the cut-off frequency co = 1 of the downstream process, and small increases p  at lower frequencies as a result of CD shifting the control energy towards higher frequencies. This is the case as shown in Figure 4.6, where r (y>2) steadily decreases to 0.7068 for r (v) = 100. In v  v  reality, the reduction is less because the dynamics of a valve actuator has a limited bandwidth, and high frequency components of v(f) cannot influence the tank level y\(t). The valve actuator dynamics is not included in the mathematical model to simplify the derivations. As r (y2) = 0.75 at v  r (v) = 2, a good portion of possible R (yi) v  v  reduction is obtained with a modest increase in R (v). v  4.4. Performance Comparison  51  CO  Figure 4.4: Gains of Q> and CL- Both controllers are tuned so that Var[yi(r)] are the same. 
The graphs marked with 2 and 1.2 are the gain for CD tuned tp make Var[v(0]c equal to 2Var[v(t)]c and 1.2Var[v(t)]c respectively. The system parameters are: K =.cod = o> = 1. The controllers are tuned for R (y\) = 1. D  p  p  L  v  5  m 13  CO  Figure 4.5: Magnitude of S for CD and CL- The controller tunings are the same as for Figure 4.4.  L  4.4. Performance Comparison  52  '''  1  R(du) ratio v  Rv(y2) reduction rv(y2) in y-axis and rv(v) in x-axis. The system parameters are: The controllers are tuned for Rv(y\) - 1.  Figure 4.6:  (jj -\. p  4.4.2  K - ojd p  reduction versus o>  Var[y2j  p  The i? (v2) reduction, v  r (y ), by CD compared v  2  to the minimum Var[v(r)] controller  CL  is plotted  ojp/ojd between 0.01 and 100 in Figure 4.7, parameterized by five different normalized tank level variance, R (yi), between 0.001 and 100. The downstream process cut-off frequency OJ for  v  Figure 4.7: r (y2)for5 different v  P  R (y\) of 0.01,0.1, 1, 10, and 100, which are denoted on the graph. v  4.4. Performance Comparison  53  is measured relative to the incoming disturbance bandwidth cod, thus, jc-axis of the graph is for cjp/cod- The manipulated variable variance increase, rv(v) is set to 2. When r (y) =10, the graphs v  are very similar with r (y ) ranges between 0.6 and 0.85 for io /o)d = 0.01. The graphs shift to v  2  left for increasing R (yi).  p  Since \/R (y\)  v  is proportional the bandwidth of the level control loop  v  (the loop must be fast to make the level variance small), the downstream cut-off frequency to is p  measured against -\J\/R (yi) v  (inverse of the standard deviation). Let o> denote the normalized n  downstream cut-off frequency relative to o>d and l/i? (vi), v  (On •= —  J& (yi). v  Figure 4.8 plots the same graphs of Figure 4.7 with the x-axis for the normalized downstream frequency u>„. 
Figure 4.8 demonstrates that ω_n is a convenient parameter for evaluating the effect of the downstream process bandwidth ω_p on r_v(y_2):

Figure 4.8: The same graphs as Figure 4.7 with the x-axis for ω_n = (ω_p/ω_d) √R_v(y_1).

1. The asymptotic value of r_v(y_2) towards ω_n = 0 is smaller for higher R_v(y_1). Thus, when the downstream process is slow (relative to the incoming disturbance and the level control loop) and the level loop is slow, C_D reduces the downstream process variable variance by as much as 35% from C_L.

2. When the downstream process is fast, say ω_n > 10, r_v(y_2) > 0.9 for all values of R_v(y_1). Thus, C_D is not effective compared to C_L. Therefore, when the downstream process is fast, Var[v(t)] is an appropriate measure of the outlet flow roughness.

4.5 Summary

The downstream process is modelled as a first-order low-pass filter from v(t) to its process variable y_2(t),

    y_2(s) = ω_p/(s + ω_p) · v(s).

The optimal linear controller C_D that minimizes Var[y_2(t)] for a target tank level variance Var[y_1(t)] is obtained when the input disturbance is a stationary process with the cutoff frequency at ω_d. C_D is a second-order system,

    C_D(s) = (s + ω_p)(s + b) / (s² + a_1 s + a_0).

It is noted that C_D cancels a pole of the downstream process. The best-case downstream process variable variance reduction compared to C_L ranges from 0.8 to 0.6, depending on the tank level loop tuning R_v(y_1) and the relative downstream frequency ω_n = (ω_p/ω_d)√R_v(y_1). C_D is not much better than C_L when the downstream process is fast or the tank level control loop is tight.
Chapter 5

Markov Jump Systems

5.1 Introduction

Jump Markov systems, where sudden changes in system parameters and disturbances, or component failures, are modelled by continuous-time Markov processes, can model a wide range of systems, not only in engineering but also in the social and economic spheres. Sheet breaks on paper machines can be modelled by a continuous-time Markov process, as shown in (Khanbaghi et al., 1997) and later in Section 7.2.4 of this thesis. The input disturbance to a broke tank level control loop can be modelled as a finite-state (2 or 3) continuous-time Markov process. This makes it possible to apply the jump linear quadratic control method to broke tank level control (Khanbaghi et al., 2002). It is shown later that the jump linear quadratic control method applied to broke tank level control has no special advantage over the normal linear quadratic control approach. However, the investigation of jump Markov systems leads to a reliable numerical method for calculating the distribution of the state variables without simulations. This is useful for broke tank level control because the probability of level violation can be evaluated reliably for various controllers, linear or nonlinear. In this context, jump systems are included in the thesis, and the emphasis is on a new theorem for state distributions.

The investigation of jump Markov systems has been active since the late 1960s ((Sworder, 1969) and (Wonham, 1970)), and many studies have been published. Two areas, optimal control of linear systems (unconstrained and constrained (Costa et al., 1999)), and stability and controllability aspects ((Mariton, 1990) and (Ji and Chizeck, 1990)), are the main subjects of published studies.¹ However, very little has been published on the evaluation of performance indices or state variances, which is important for practical applications. From the nature of the disturbances, the state distributions are most likely non-Gaussian.
Therefore, the state variance may not provide enough information on control performance. To the author's knowledge, Sworder and Choi (1976) is the only study that investigated the distribution of performance indices under optimal control. Since a jump system undergoes sudden discontinuous changes (mode or regime changes), it is desirable to know the performance index or the state and control variances for each mode, so that the controller can be tuned mode by mode. This chapter presents a new theorem that makes it possible to calculate the state variances and distributions for each mode.

5.2 Jump Markov Systems

Let x(t) ∈ ℝ^{N_x} be the continuous state of a system and u(t) ∈ ℝ^{N_u} the control input. The system is subjected to sudden changes (jumps). Between jumps, the system is described by a time-invariant differential equation

    (d/dt) x(t) = f(x(t), u(t), ξ(t)) = f_ξ(t)(x(t), u(t)),        (5.1)

where ξ(t) is a finite-state continuous-time Markov chain. Without loss of generality, the Markov states are expressed as integers between 1 and N_ξ; thus ξ(t) ∈ S, where S = {1, 2, ..., N_ξ}. The system (5.1) is in mode ℓ when ξ(t) = ℓ. In this chapter, any entity that is a function of the finite-state continuous-time Markov process ξ(t) is written with a subscript ℓ, e.g. f_ℓ := f(ξ(t))|_{ξ(t)=ℓ}.

Selected elements of continuous-time Markov chain theory from Tijms (1994) and Stewart (1994), which are used in the thesis, are stated here. Continuous-time Markov chains can be characterized by the probabilities of state transitions in a short time period Δt,

    P[ξ(t+Δt) = i | ξ(t) = i] = 1 + q_ii Δt + o(Δt),    q_ii ≤ 0,
    P[ξ(t+Δt) = j | ξ(t) = i] = q_ij Δt + o(Δt),        q_ij ≥ 0,        (5.2)

where q_ii = -Σ_{j≠i} q_ij and o(t) denotes a quantity such that lim_{t→0} o(t)/t = 0. As it is assumed that the Markov chains are homogeneous, the q_ij's are not functions of time t. The matrix Q, whose ij-th element is q_ij, is called the infinitesimal generator.
It is assumed that ξ(t) is irreducible and has a unique stationary distribution π = [π_1 π_2 ... π_{N_ξ}] (an N_ξ-row vector), which satisfies the equation

    π Q = 0.        (5.3)

Note that a continuous-time Markov chain with a stationary distribution is ergodic (Grimmett and Stirzaker, 1992). Another way of describing continuous-time Markov chains is by the transition probabilities and holding time distributions. To distinguish points in time from time durations, following Feller (1968), the word "epoch" is used to denote points on the time axis and "time" for durations. Let t_n, n = 1, 2, ..., be the epochs when ξ(t) changes its state. We call t_n the n-th jump epoch and T_n := t_n - t_{n-1} (t_0 = 0) the n-th switching epoch; that is, jump epochs have a fixed time origin at 0, and switching epochs have the time origin shifted to the previous jump epoch. Time durations between jump epochs {τ : 0 ≤ τ < T_n} are called holding times.

- The switching epoch of ξ(t) in state i, T_i, is exponentially distributed. Let λ_i > 0 be -q_ii; then

    P[T_i ≤ t] = 1 - e^{q_ii t} = 1 - e^{-λ_i t}.        (5.4)

- If ξ(t) leaves the present state i, it jumps to state j ≠ i with probability p_ij, which is related to the q_ij's as follows:

    p_ij = q_ij / (-q_ii) = q_ij / λ_i  for j ≠ i,    p_ii = 0,        (5.5)

  and Σ_j p_ij = 1.

The matrix P whose ij-th element is p_ij satisfies the conditions for a transition probability matrix (rows sum to 1 and all elements are non-negative). The discrete-time Markov chain whose transition probability matrix is P is called the embedded Markov chain of the continuous-time Markov chain ξ(t) (Stewart, 1994). The embedded Markov chain of ξ(t) plays an important role in the development of the new method.

¹The literature is vast, and only a small fraction of representative papers are cited in the thesis.
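These relations can be checked numerically. The sketch below (NumPy) uses the three-state generator that appears later in the example of Section 5.4.2; it builds the embedded-chain matrix P from Q by (5.5) and verifies the two stationary distributions:

```python
import numpy as np

# Infinitesimal generator of the three-state chain used in Section 5.4.2.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 0.5, -1.5,  1.0],
              [ 0.8,  0.2, -1.0]])

lam = -np.diag(Q)                  # exponential intensities lambda_i = -q_ii, Eq. (5.4)

# Embedded-chain transition matrix, Eq. (5.5): p_ij = q_ij / lambda_i, p_ii = 0.
P = Q / lam[:, None]
np.fill_diagonal(P, 0.0)

# Stationary distribution of xi(t): pi Q = 0 with sum(pi) = 1, Eq. (5.3).
pi = np.linalg.lstsq(np.vstack([Q.T, np.ones(3)]),
                     np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]

# Stationary distribution of the embedded chain Z_n: p P = p with sum(p) = 1.
p = np.linalg.lstsq(np.vstack([(P - np.eye(3)).T, np.ones(3)]),
                    np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]
```

π and p differ because π weights each visit to state i by the mean holding time 1/λ_i; numerically, normalizing p_i/λ_i reproduces π, a fact used in Section 5.4.1.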
5.3 Holding Time Density

The holding time density h_ℓ(τ) is introduced to describe the long-run average "distribution" of short time intervals included in holding times of state ℓ. The material here is not new, but it is described in detail because the concept is important for the new theorem introduced in the next section. The holding time density is best understood through an example of its usage. Suppose that a random variable X(t) taking values in ℝ is a function of ξ(t) and of how long ξ(t) has stayed in the present state,

    X(t) = f(ξ(t), t - t_s) = f_ℓ(τ)    (τ = t - t_s),        (5.6)

where t_s is the latest jump epoch before t. The conditional distribution of X(t) given ξ(t) = ℓ is obtainable if we know h_ℓ:

    μ_ℓ(B) = ∫_0^∞ 1_{f_ℓ^{-1}(B)}(τ) h_ℓ(τ) dτ,

where B is any Borel set and 1_{f_ℓ^{-1}(B)} denotes the indicator function of the inverse image f_ℓ^{-1}(B). Then, for instance, the conditional mean of X(t) given ξ(t) = ℓ is calculated as

    E[X(t) | ξ(t) = ℓ] = ∫_ℝ x μ_ℓ(dx).

Conceptually, the holding time density is similar to a histogram of holding times. Suppose a histogram of holding times for state ℓ is built with bin width Δt, where the relative frequencies with which holding times cover each time interval (bin) ([0, Δt), [Δt, 2Δt), [2Δt, 3Δt), ...) are collected. The bins are numbered from 1, so the i-th bin covers the time interval [(i-1)Δt, iΔt). If the switching epoch of a sample is longer than (i-1)Δt, bins 1 to i are incremented by 1. Let c_i(t) denote the i-th bin count up to epoch t, and let r_i(t) = c_i(t)/Σ_j c_j(t) be the relative frequency of the i-th bin. Since ξ(t) is ergodic,

    lim_{t→∞} r_i(t) = ∫_{(i-1)Δt}^{iΔt} h_ℓ(t) dt    a.s.        (5.7)

For a short time interval [t, t+Δt) to be included in a holding time, the switching epoch must be longer than t + Δt. The probability that the switching epoch is longer than t + Δt is e^{-λ_ℓ(t+Δt)} ≈ e^{-λ_ℓ t}. Thus, h_ℓ(t) is proportional to e^{-λ_ℓ t},

    h_ℓ(t) = a e^{-λ_ℓ t},        (5.8)

where a is a normalizing constant. The constant a is determined such that ∫_0^∞ h_ℓ(t) dt = 1
holds. Since ∫_0^∞ e^{-λ_ℓ t} dt = λ_ℓ^{-1},

    h_ℓ(t) = λ_ℓ e^{-λ_ℓ t}.        (5.9)

From the definition of continuous-time Markov chains (5.4), the switching epoch T(ℓ) for state ℓ is exponentially distributed with intensity λ_ℓ, P[T(ℓ) ≤ t] = 1 - e^{-λ_ℓ t}. Let s_ℓ(t) be the probability density function of the switching epochs of state ℓ,

    s_ℓ(t) := (d/dt) P[T(ℓ) ≤ t] = λ_ℓ e^{-λ_ℓ t}.        (5.10)

With (5.10) and (5.9), the following equation is proved:

    s_ℓ(t) = h_ℓ(t) = λ_ℓ e^{-λ_ℓ t},        (5.11)

where λ_ℓ is the exponential intensity for the state. That is, in a continuous-time Markov chain, the switching epoch probability density function s_ℓ(t) and the holding time density function h_ℓ(t) for state ℓ are the same. This is a variation of "Poisson arrivals see time averages" (Kao, 1997), presented in a form that is convenient for the later development. The exponential distribution is the only distribution that has this characteristic.

When X(t) of (5.6) is the holding time τ itself, the mean holding time τ̄_ℓ for mode ℓ is λ_ℓ^{-1}, as shown below:

    τ̄_ℓ = ∫_0^∞ τ h_ℓ(τ) dτ = ∫_0^∞ τ λ_ℓ e^{-λ_ℓ τ} dτ = λ_ℓ^{-1},

which is equal to the mean switching epoch T̄_ℓ. However, on each mode ℓ segment with switching epoch T, the mean holding time is 0.5T. It therefore seems unlikely that the mean holding time should equal the mean switching epoch, since the mean holding time of every segment is one half of that segment's switching epoch. The paradox is explained by noting that holding times from longer segments contribute proportionally more to the mean holding time, while all switching epochs contribute equally to the mean switching epoch. The memoryless property of the exponential distribution is fundamental to this paradox, which includes the well-known "waiting time paradox" (for instance, see Feller (1971), page 11).
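The identity s_ℓ = h_ℓ and the waiting-time paradox can be illustrated with a short seeded simulation (a sketch; the intensity λ = 1 and the sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1.0                                       # arbitrary intensity for illustration
T = rng.exponential(1.0 / lam, size=200_000)    # switching epochs T_n

mean_T = T.mean()                               # mean switching epoch, ~ 1/lam

# Sample the process at uniformly random epochs: long segments are hit
# proportionally more often (length-biased sampling).
ends = np.cumsum(T)
t = rng.uniform(0.0, ends[-1], size=200_000)
seg = np.searchsorted(ends, t, side='right')    # segment containing each epoch
age = t - (ends[seg] - T[seg])                  # elapsed holding time at epoch t
mean_age = age.mean()                           # ~ 1/lam, not 1/(2 lam)
```

Although every segment's average holding time is half its switching epoch, sampling at random epochs is length-biased, so the mean observed holding time comes out near 1/λ rather than 1/(2λ).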
5.4 State Distribution

The probability distribution of the continuous state x(t) is an important issue in jump systems because it is most likely not Gaussian, and the variance of the state may not provide enough information on the utility of control (Sworder and Choi, 1976). In this section, a system of integral equations for the state probability distributions is derived. The new method is not limited to linear systems, and is applicable to non-linear and state-constrained systems.

We consider jump systems under state feedback control (it is assumed that both x(t) and ξ(t) are measured without noise),

    u(t) = f_u(x(t), ξ(t)).        (5.12)

Then (5.1) becomes

    (d/dt) x(t) = f(x(t), f_u(x(t), ξ(t)), ξ(t)) = f_c(x(t), ξ(t)).        (5.13)

Let X ⊂ ℝ^{N_x} denote the region where x(t) is defined, and 𝔅 the Borel field on X. It is assumed that x(t) has a unique bounded stationary distribution μ(B) := P[x ∈ B], and that the system function f and the feedback function f_u are such that the closed-loop system function f_c satisfies a Lipschitz condition in X and the closed-loop system is stable in each mode. Therefore, the closed-loop system (5.13) has a unique solution g for a given initial condition x_0 in each mode segment where ξ(t) is constant,

    x_ℓ(x_0, τ) = g(x_0, τ, ξ(τ) = ℓ) = g_ℓ(x_0, τ),        (5.14)

where τ = t - t_s is the holding time and x_ℓ(x_0, τ) denotes x(τ)|_{ξ(τ)=ℓ, x(0)=x_0}. Since f_c satisfies a Lipschitz condition, g_ℓ(x_0, τ) is a continuous function of x_0 and τ. For a fixed x_0, let g_{ℓ,x_0}(τ) := g_ℓ(x_0, τ), which is a continuous function of τ. Let B ∈ 𝔅 and x_0 be given, and let T_ℓ(x_0, B) be the set of τ ∈ [0, ∞] such that g_{ℓ,x_0}(τ) ∈ B, that is,

    T_ℓ(x_0, B) = g_{ℓ,x_0}^{-1}(B).        (5.15)

Since g_{ℓ,x_0}(τ) is a continuous function of τ, T_ℓ(x_0, B) is measurable. Let K_ℓ(x_0, B) denote the probability that x_ℓ(x_0, τ) ∈ B for a fixed initial value x_0. With the holding time density h_ℓ,

    K_ℓ(x_0, B) = ∫_0^∞ 1_{T_ℓ(x_0,B)}(τ) h_ℓ(τ) dτ.        (5.16)
The conditional probability distribution μ_ℓ(B) := P[x(t) ∈ B | ξ(t) = ℓ] is obtained by

    μ_ℓ(B) = ∫_X μ_{ℓ0}(dx_0) K_ℓ(x_0, B),        (5.17)

where μ_{ℓ0}(·) is the probability distribution of the initial value x_0 for mode ℓ.

Now we consider the probability distribution of x(t) at the jump epochs t_n. Let t_n^- denote the epoch just before t_n and t_n^+ just after t_n. The sampled values of x(t) at t_n^-, where ξ(t_n^-) = ℓ, are denoted by x_{ℓ1}(i), i = 1, 2, ...; that is, x_{ℓ1}(i) is the value of x(t) at the end of the i-th time segment where ξ(t) = ℓ. Let K_{ℓ1}(x_0, B) denote the probability that x_ℓ(x_0, T) ∈ B for a fixed initial value x_0 and a switching epoch T. Since x_{ℓ1}(i) = g_ℓ(x_0, T), the set of switching epochs such that x_{ℓ1}(i) ∈ B is T_ℓ(x_0, B) of (5.15). Therefore, with the switching epoch density s_ℓ(T),

    K_{ℓ1}(x_0, B) = ∫_0^∞ 1_{T_ℓ(x_0,B)}(T) s_ℓ(T) dT.        (5.18)

Since s_ℓ(t) = h_ℓ(t) by Equation (5.11), from (5.16) and (5.18), K_ℓ(x_0, B) = K_{ℓ1}(x_0, B). The probability distribution of x_{ℓ1}(i), μ_{ℓ1}(B), is

    μ_{ℓ1}(B) = ∫_X μ_{ℓ0}(dx_0) K_{ℓ1}(x_0, B) = ∫_X μ_{ℓ0}(dx_0) K_ℓ(x_0, B).        (5.19)

Comparing (5.17) and (5.19), we conclude that μ_ℓ(B) = μ_{ℓ1}(B).

Theorem 5.1 For a stable Markov jump system, the conditional probability distribution of x(t) given ξ(t) = ℓ, μ_ℓ(·), and the probability distribution of x(t) at the end of mode ℓ segments, μ_{ℓ1}(·), are the same.

By the above theorem, the statistical characteristics of x(t) of a continuous-time Markov jump system can be obtained by investigating a discrete-time system (sampled at t_n^-). Since Theorem 5.1 relates a continuous-time Markov process to a discrete-time Markov process, it may appear that there are common aspects between Theorem 5.1 and uniformization. The basic idea of the uniformization of a continuous-time Markov process ξ(t) is that under certain conditions ξ(t) can be represented as the composition of a discrete-time Markov process Z_n, n ≥ 0, and a Poisson process {N(t)}, i.e. ξ(t) = Z_{N(t)} (Liu and Tu, 1999).
However, the state space of a Markov process must be discrete for uniformization to be applicable, whereas x(t) of Theorem 5.1 is continuous. Uniformization is also a much stronger concept than Theorem 5.1, as it is valid not only for stationary distributions but also for transient analysis, while Theorem 5.1 applies only to stationary distributions. In summary, Theorem 5.1 is independent from uniformization.

In the following, a system of integral equations for the conditional probability distributions μ_ℓ, ℓ ∈ S, is derived. Suppose that a state transition j → ℓ takes place. In this case, the initial value x_0 for mode ℓ is the value of x(t) at the end of the mode j segment, which has the probability distribution μ_{j1} = μ_j. Let r_{jℓ} be the probability that the prior state is j, given that a state transition to ℓ happened. In the thesis, the r_{jℓ} are called the entry probabilities to state ℓ; they satisfy Σ_{j∈S, j≠ℓ} r_{jℓ} = 1. Then,

    μ_{ℓ0} = Σ_{j∈S, j≠ℓ} r_{jℓ} μ_j.        (5.20)

Substitution of (5.20) into (5.17) yields

    μ_ℓ(B) = Σ_{j∈S, j≠ℓ} r_{jℓ} ∫_X μ_j(dx_0) K_ℓ(x_0, B),        (5.21)

which is a set of N_ξ integral equations. Since it is assumed that x(t) has a unique bounded stationary distribution μ, when the stationary distribution π of ξ(t) is such that π_i > 0, i ∈ S, a unique bounded μ_ℓ exists for all ℓ ∈ S. In principle, we can obtain μ_ℓ, ℓ ∈ S (at least numerically) from (5.21), because K_ℓ(x_0, B) can be calculated from f_c and the generator Q of ξ(t). The task is much easier when μ_ℓ and K_ℓ have densities φ_ℓ and Φ_ℓ respectively, that is,

    μ_ℓ(B) = ∫_B φ_ℓ(x) dx,  and  K_ℓ(x_0, B) = ∫_B Φ_ℓ(x_0, x) dx.        (5.22)

Substitution of (5.22) into (5.21) yields

    ∫_B φ_ℓ(x) dx = Σ_{j∈S, j≠ℓ} r_{jℓ} ∫_X φ_j(x_0) [∫_B Φ_ℓ(x_0, x) dx] dx_0
                  = ∫_B {∫_X Φ_ℓ(x_0, x) Σ_{j∈S, j≠ℓ} r_{jℓ} φ_j(x_0) dx_0} dx.        (5.23)

Equation (5.23) must hold for all B ∈ 𝔅; thus, the following system of integral equations for the densities is obtained:

    φ_ℓ(x) = ∫_X Φ_ℓ(x_0, x) Σ_{j∈S, j≠ℓ} r_{jℓ} φ_j(x_0) dx_0,    ℓ ∈ S.        (5.24)
Equation (5.24) is a system of N_ξ integral equations for the N_ξ unknown functions φ_ℓ(x), ℓ ∈ S. Except for very simple cases, (5.24) must be solved numerically. However, this approach has two advantages compared to simulations: (a) no approximation due to time discretization is involved, and (b) there is no randomness due to computer-generated random numbers. Methods for solving (5.21) or (5.24) numerically are an important practical issue, but are not treated in the thesis. Note that Φ_ℓ(x_0, x) can be calculated from the knowledge of (5.14) and the distribution of the holding time τ for state ℓ. The overall probability density of x(t), φ(x), is obtained by taking the expectation with respect to ξ,

    φ(x) = Σ_{ℓ∈S} π_ℓ φ_ℓ(x).        (5.25)

Note that the probability density of the sampled state x(t_n^-) is different from φ(x), as it is Σ_ℓ p_ℓ φ_ℓ(x), where p_ℓ is the ℓ-th element of the stationary distribution of the embedded Markov chain of ξ(t).

5.4.1 Entry Probabilities

Entry probabilities depend only on state transitions, and are not influenced by how long ξ(t) stays in a state. A new Markov chain is constructed from the events of state transitions. This new Markov chain Z_n is a discrete-time chain,

    Z_n := ξ(t_n),    n = 1, 2, ...,        (5.26)

where t_n is the n-th jump epoch. Z_n is the embedded Markov chain of ξ(t) mentioned in Section 5.2, and is governed by the transition probability matrix P = [p_ij] of (5.5). If ξ(t) has the stationary distribution π, Z_n has the stationary distribution p = [p_1 p_2 ... p_{N_ξ}], which satisfies pP = p and is related to π by the following equation (Tijms, 1994):

    π_i = (p_i / λ_i) / (Σ_j p_j / λ_j).        (5.27)

From the definition of conditional probability,

    r_{jℓ} = P[Z_n switches from j to ℓ] / P[Z_n switches to ℓ].        (5.28)

As P[Z_n switches from j to ℓ] = p_j p_{jℓ},

    r_{jℓ} = p_j p_{jℓ} / Σ_{i∈S, i≠ℓ} p_i p_{iℓ},    j ∈ S, j ≠ ℓ.        (5.29)

5.4.2 Examples

Filtered Symmetrical Binary Random Signal

As an example of the integral equation (5.24), filtering of a random symmetrical binary signal d(t) = ±b (b > 0) is considered (Figure 5.1). This is a rare case where analytical solutions are available.
(5.29)  Examples  Filtered Symmetrical Binary Random Signal  As an example of the integral equation (5.24), filtering of a random symmetrical binary signal d(t) = ±b (b > 0) is considered (Figure 5.1). This is a rare case where analytical solutions are available.  5.4. State Distribution  64  b-i  r-i  d(t)  0-  a  s+a  -b  H )  Figure 5.1: Symmetrical binary signal d{i) is filtered by a low-pass filter.  The signal d(t) is a function of two-state continuous-time Markov chain £(r) such that d(t) = b when £(t) = 1 and d{i) = -b when (t) = 2. With the following infinitesimal generator for (t) r  r  Q=  -A  A  A  -A  the stationary distribution n = [0.5 0.5], and d(t) is symmetrical, that is, the probabilistic properties for d(t) = b and d(t) - -b are the same. Let x(t) denote the output of thefilter.Since the isfirst-order,-b < x(t) < b. Suppose that £(t) just switched to state 1 (d(f) = b) at time to, with T = t - to and xo = x(to), = (xo - b)e~ + b aT  (T)  X  (5.30)  Let xi be a constant such that b > x\ > xo. From (5.30), X(T) = x\ when -ii T — T\ — —(X  X  l  log  ~  b  -.  x -b 0  Since X(T) is monotone increasing when d(t) = b, x(r) < x\ for 0 < r < T\. AS T distributes exponentially with intensity A, W  T  )  ^  ]  =  1  - . - n  =  1  .gz|)  (5.31)  With B = Aa~\ the density of X(T) for state: 1, Oi(x ,x) is obtained by differentiating (5.31) with 0  respect to xi. Oi(xo,x) = i  0  for x < xo  Similarly the density O(xo, x) for state 2 (d(t) = -b) is obtained 2  O (x ,x) = I 2  0  0  for x > xo  5.4. State Distribution  65  As £(r) has only two states, the entry probabilities r \ = r \ = 1. So (5.24) becomes 2  2  = J ®i(x ,x)(f> (xo)dxo  (pi(x)  0  2  B(b -xf=^  cp (x) 2  (5.32)  J (b - x y <p (xo)dx  x  p  0  2  0  O (x , x)<p\(xo)dx 2  0  0  B(b + xf~ £ (b + x T Mxo)dx l  (5.33)  p  0  0  From the symmetry of d(t), (p\{x) = tp (-x). Indeed, substitution of 0i(x) = <pi{-x) into (5.32) 2  produces (5.33). 
So the system of integral equations (5.32) and (5.33) is equivalent to a single integral equation <p (x) = B(b - xf~  x  x  J  (b - x ) - V i ( - ^ o ) ^ , 0  0  which has a solution 0i(x) = (b + xf(b - xf~ ,  (5.34)  x  Cl  where c\ is determined so that £ <p\(x)dx = 1. The overall density of x(t) from (5.25), (p(x) = b  O.50i (x) + O.50i (-x) is «x)= b(b  - x*t =ci^-  2  1  Ci  (5.35)  [i -  1  When 8=1, that is A = or, (p{x) = c\b =constant. So x(t) distributes equally between -b and b.  \  M  • \ \  \ •  /• -0.8  -0.6  -0.4  -0-2  0  0.2  0.4  06  OS  -OA  -0.6  -0-4  -02  0  02  0.4  0.6  0.8  Figure 5.2: Probability density <p(x) of filtered binary signal for B = 0.5 (at left), and B = 1 and f3 = 2 (at right).  When 6 < 1, 0(x) is concave with the minimum at x = 0 and increasing towards x = b and x = -b  5.4. State Distribution  66  as shown at the left graph of Figure 5.2. On the other hand, when B » 1,  approaches the  Gaussian distribution as shown below. <  Kx) =  C  ^ - > ( i - ^ +  l  ^ ^ - - )  Compare the above to the Gaussian density, •VSr  2^  24^  /  When B » \ , ( B - 1 ) 0 8 - 2 ) « 08-1) and so on, thus 0(x) « ^ e " * ^ , where 2cr = b / ( B - 1 ) . 2  Figure 5.3 shows that (p(x)  2  2  2  for B = 10 is already very close to the Gaussian distribution. For B > 1 of  moderate values, 0(JC) assumes a shape between the flat distribution and the Gaussian distribution as shown in the right graph of Figure 5.2. The signal d{i) has a first-order low-pass spectrum with  Figure 5.3: <f>(x) (solid line) for 8 = 10 and b = 1. The corresponding Gaussian density is shown in dashed line.  the cut-off frequency 2A. So the filtered signal x(t) has a second-order low-pass spectrum with cutoff frequencies at 2A and a . Thus, it is possible to accommodate both the spectrum characteristics and the amplitude distribution by adjusting A and a . 
The filtered signal x(t) may be useful as a noise or disturbance model because it is bounded (the Gaussian noise is unbounded theoretically) and can have a wide range of distribution characteristics with a low-pass frequency spectrum.  5.4. State Distribution  67  Filtered Three-state Random Signal  As an example of numerical solutions of the integral equation (5.24), a case offilteringa three-state random signal d(i) is considered. In this example, £(/) has three states, and d(t) takes one of three values depending on the state of £(t), that '-is;- d(i) = d(£(i)), and d(\) = d = -\2A,  d(2) = d = -0.24,  x  2  d(3) = d = 0.76. 3  The Markov chain £(/) has the following infinitesimal generator Q and transition probability matrix P :  Q =  -M  qi2  qi3  -2.0  q\  -A  q  0.5  -1.5 1.0  ~h  0.8  0.2 -1.0  2  2  q-s\ qn  0 P  =  23  0  pn pn  P2\  0  /?23  0.333 0.8  P31 P32 0  1.0  1.0  0.5 0.5 0 0.666 0.2  0  The stationary distribution n of£(t) and the stationary distribution p of the embedded Markov chain Z„ are,  n = [0.26 0.24 0.50],  p = [0.3768 0.2609 0.3623].  The entry probabilities ry in (5.29) are r = 0.2308, r,i = 0.7692, r = 0.7222, r = 0.2778, n = 0.5200, r = 0.4800. 2l  l2  32  3  23  Since the filter is a unity gainfirst-orderlow-pass filter, x(t) ranges between d\ = -1.24 and d = 0.76. Let x 3  min  = -1.24 and x  max  = 0.76. The continuous state space [x ,x ] is discretized min  for numerical methods. The grid size for discretization Ax is set to (x -x )/N, max  min  max  where an integer  TV is a parameter to be chosen. Let x ,i = 1,2,..., N+l, be the /-th grid point, i.e. x, = x +(/-l )Ax. t  min  Let fa be an N + 1 vector that approximatesfa,andfa(i)the /-th element offa.Then, fa(x) « fa(i) forx/ - 0.5Ax < x < x, + 0.5Ax.  (5.36)  5.4. State Distribution  68  Then from (5.24), for i = 1,... N+1 fa(i) = I Q>t(xo,Xi)  r e(f> (x )dx k  k  0  0  (5-37) »^<b (Xj,Xi)  ^ r (j) {j)&x.  
Let Ψ_ℓ be an (N+1)×(N+1) matrix whose ij-th element is Φ_ℓ(x_j, x_i)Δx. Equation (5.37) is then written as

    φ̂_ℓ = Ψ_ℓ Σ_{k∈S, k≠ℓ} r_{kℓ} φ̂_k,    for ℓ ∈ S.        (5.38)

To calculate Ψ_ℓ, we fix x_j and evaluate Φ_ℓ(x_j, x_i) for varying x_i. From (5.30), since the initial value is x_j, for mode ℓ and sojourn time τ,

    x(τ) = e^{-aτ}(x_j - d_ℓ) + d_ℓ.        (5.39)

Let τ(x) be the τ such that x(τ) = x; then

    τ(x) = -a^{-1} log((x - d_ℓ)/(x_j - d_ℓ)).        (5.40)

Since τ is exponentially distributed with intensity λ_ℓ, the ij-th element of Ψ_ℓ is calculated as

    (Ψ_ℓ)_ij = P[x_i - 0.5Δx ≤ x(τ) < x_i + 0.5Δx] = e^{-λ_ℓ τ(x_i - 0.5Δx)} - e^{-λ_ℓ τ(x_i + 0.5Δx)},        (5.41)

and (Ψ_ℓ)_ij = 0 when (5.40) is negative or not defined. In this example, an approach that does not use the Jacobian is shown, because the same principle is applicable when x(t) ∈ ℝ^{N_x} with N_x ≥ 2. With the Ψ_ℓ known, (5.38) can be solved for the φ̂_ℓ under the constraint Σ_i φ̂_ℓ(i)Δx = 1. Figure 5.4 shows φ̂_1, φ̂_2 and φ̂_3 separately for N = 300. In this example, the results are stable for a wide range of N. Figure 5.5 compares φ(x) = Σ_ℓ π_ℓ φ̂_ℓ(x) calculated with N = 300 and with N = 20; except for the narrow peak at the center, the two results are very close to each other. Figure 5.6 shows the differences between φ(x) calculated with N = 300 and the histogram from a simulation run with 10,000 switches. This example shows the robustness of numerical solutions of the integral equation (5.24), even with a very simple numerical integration method. It is conceivable that the absence of time-discretization errors in (5.24) contributes to this robustness.
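The grid method can be sketched in a few lines (an illustration, not the thesis's code). The filter cut-off a is not stated for this example; a = 2 is assumed here, matching Section 5.5.1, so that the conditional means of the computed φ̂_ℓ can be compared with the segment-end means obtained there. Instead of evaluating (5.41) at cell edges, the sketch forms exact cell probabilities from the segment-end distribution function:

```python
import numpy as np

# Three-state example: generator Q, levels d, unity-gain filter x' = -a (x - d_l).
Q = np.array([[-2.0,  1.0,  1.0],
              [ 0.5, -1.5,  1.0],
              [ 0.8,  0.2, -1.0]])
d = np.array([-1.24, -0.24, 0.76])
a = 2.0            # assumed cut-off (value taken from Section 5.5.1)
lam = -np.diag(Q)

# Entry probabilities r[j, l] from the embedded chain, Eq. (5.29).
P = Q / lam[:, None]
np.fill_diagonal(P, 0.0)
p = np.linalg.lstsq(np.vstack([(P - np.eye(3)).T, np.ones(3)]),
                    np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]
w = p[:, None] * P                 # w[j, l] = p_j p_jl
r = w / w.sum(axis=0)

N = 400
edges = np.linspace(d.min(), d.max(), N + 1)
mid = 0.5 * (edges[:-1] + edges[1:])

def cdf(xs, x0, l):
    """P[x(tau) <= xs] at segment end, mode l, start x0 (exact, from (5.39))."""
    k = lam[l] / a
    u = np.clip((xs - d[l]) / (x0 - d[l]), 0.0, 1.0)
    return u ** k if x0 > d[l] else 1.0 - u ** k

# Psi[l][i, j] = P[segment of mode l started at mid[j] ends in cell i]
Psi = []
for l in range(3):
    M = np.empty((N, N))
    for j, x0 in enumerate(mid):
        F = cdf(edges, x0, l)
        M[:, j] = F[1:] - F[:-1]
    Psi.append(M)

# Fixed-point iteration of (5.38): phi_l <- Psi_l (sum_j r_jl phi_j)
phi = np.full((3, N), 1.0 / N)     # cell probabilities, uniform start
for _ in range(500):
    phi = np.stack([Psi[l] @ sum(r[j, l] * phi[j] for j in range(3) if j != l)
                    for l in range(3)])

means = phi @ mid                  # conditional mean of x for each mode
pi = np.linalg.lstsq(np.vstack([Q.T, np.ones(3)]),
                     np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]
overall_mean = pi @ means          # Eq. (5.25) applied to the mean
```

By Theorem 5.1, each φ_ℓ is also the distribution of x at the ends of mode ℓ segments, which is what makes the comparison with the segment-end means of Section 5.5.1 meaningful.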
State Moments of Linear Jump Systems  5.5  71  State Moments of Linear Jump Systems  The state moments of linear jump systems can be obtained without explicitly calculating the state probability distribution. As a simple example, the first moment (mean) is treated in this section. A linear jump system is described by the following differential equation. ^x(o=A(mm+B(t(t)Mt)+dm) (5.42)  d t  = A x(f) + Beu(t) + d , t  ((f) = t,  e  where x e R Nx is the state , and u e R Nu the input. A, B and d are matrices of appropriate dimensions. A mode-dependent constant disturbance de is included to accommodate  "load change " type  disturbances common in process control applications. The state may have a discontinuity (jump) at jump epochs (Sworder and Rogers, 1983). As the state discontinuity can be handled easily, it is assumed that the state x(t) is continuous to simplify the presentation. It is assumed that x(t) and ((f) are measured, and linear state feedback control with a bias is applied (Sworder and Rogers, 1983), (5.43)  u(t)= -G (x(t) + ae), e  where the linear gain Ge and bias cte are mode dependent. Then, the closed-loop system becomes d —x(t) = A x(t) + d ,  (5.44)  c  {  (  where A° := Ae - B{G{ is the closed-loop system matrix and de := de - BeGeote the closed(  loop constant disturbance. The system (5.44) is a hybrid system whose state is composed of a continuous part x(t) and a discrete part ((f). It is assumed that the system (5.44) is stable in each mode and de isfinitefor all (,. Let me denote the conditional mean of x(t) given that ((f)  = £,  that is, me = fx X(f>e(x)dx. The  overall mean of x(f), m is obtained from (5.25) as follows. m = I xcf>(x)dx - Y?el ne I x(p (x)dx = Y^i nem . Jx Jx x  x  e  t  (5.45)  Let xe\ denote the value of x(f) just before ((f) leaves state I. By Theorem 5.1, me is the mean value of xe\. With a switching epoch T and the initial value xo := x(t ), s  <x x+ + xei = e < AC A  TT  00  I  u< • Jo  e ( ds-d . 
AC  s  (  (5.46)  72  5.5. State Moments of Linear Jump Systems  Since xe\ is a function of T and xo, the conditional expectation of xe\ for a given initial value xo,  ni((xo)  is calculated with the switching epoch density function S((T)  m (xo)  = Qoxo  (  +  = Aee~ , X{T  C \d(,  (5.47)  (  where Qo and Q i are mode-dependent constant matrices, o  eWe-WdT,  C  := A \  n  e  \ etf'ds e  X t T  Jo Jo  dT.  Then me is obtained taking the expectation with respect to xo  m = E [me(x )] t  xo  = Q E(x ) + C t \d t  0  0  0  = Ceoxeo +  C \de, t  (5.48)  where xeo is the mean value of x(ts)\^tt)=(, that is, the mean value of x(r) when £(t) just switches to state I. When the prior state is j, XQ = Xj\, so with the entry probabilities r /, y  xeo = H nMxj\]  rjtntj.  (5.49)  £&S.  (5.50)  = 2  Substitution of (5.49) into (5.48) yields  ™e = E rjeCeomj  + Ceide,  So me,t e S is obtained as the solution of a system of linear equations constructed from (5.50). When the overall mean m is finite and the stationary distribution of £(?), n is such that 7T, > 0, / e S, the conditional mean me is finite for all t e S, and Equation (5.50) has a unique solution. The overall mean m is obtained from n and /w^'s as shown in (5.45). As an example, the linear equation for  = 3 is shown below.  I  -r i Cio 2  -r*\C\o  m\  -7*32^20  m  \  I  -rnCio  C di n  (5.51) 2  -rnCzo  I  W3  Cndi  5.5. State Moments of Linear Jump Systems  73  The mode dependent second moment E[(x(t)x(t) \((f) = i] can be obtained by the same approach T  with more complicated algebra, and detailed in the Appendix of this chapter. With the knowledge of me and the mode dependent second moment of x(f), the quadratic performance index J can be evaluated, J := E[x(t) R (t)] + E [ ( O W ) J = IMk + IMkT  (5.52)  r  lX  M  t i  With the new method, ||x||«, and \\U\\R are evaluated separately. This is advantageous because R  2  2  is used as the Lagrange multiplier to limit the control input energy. 
By evaluating ||x||^, and  \\U\\R  2  for different R , the cost of control input for the control performance in terms of the weighted state 2  variance is known. This is impossible by evaluating just J. Another advantage of the new method is that the mode dependent ||x|k and  can be evaluated. The optimum control theory for  \\U\\R  2  linear jump systems allows R\ and R to be changed by mode (Sworder, 1976). Evaluating \\x\\ 2  R]  and \\U\\R for each mode allows R\ and R to be adjusted to obtain desired compromises between 2  2  control input energy and control performance.  5.5.1  Example  The filtered 3-state signal of Section 5.4.2 is studied to compare the variance calculation of the new method and the one based on the autocovariance. The cut-off frequency a is set to 2. The mean input is zero as Y,e tde = 0. From (5.30),, , n  X(T) = e~ x + (1 - e )d .  (5.53)  aXt_ —  (5.54)  aT  ar  0  t  Thus, Ceo= f" e Xee dt= at  Xtt  Jo  C«=  <* +  C (l-e- )A e- 'dt= — at  Ae  (  Jo  a  +  Xe  The system of linear equations (5.51) becomes, 1.0  -0.3846  m\  -0.3095  1.0  -0.1190  m  -0.1733  -0.1600  1.0  m  t  2  3  -0.6200  -  -0.1371 0.5067  (5.55)  5.5. State Moments of Linear Jump Systems  74  which yields m = -0.5024, m = -0.2474 and m = 0.3800. x  2  3  Entities for the variance calculation (see Appendix of this chapter for notation):  G®G G®H=H®G= H®H  =  a e2  a(e = {\-  => HG  2at  2e~  at  t  2a +  —  e~ )  at  GG  2at  + e~ )  = GH  t  => HH  2at  —  e  Ae' = a  t  = 11  I Xe  a + Ae  2Ae Ae + a + Ae 2a +  Ae 2a +  Ae/  Ae}  Let \e denote the conditional second moment. The system of linear equations (5.76) becomes, 1  -0.0769 , -0.2564  -0.1970  1.0  0.4153  Vl  -0.0758  -0.1040 -0.0960 - 1  v  2  v  3  =  0.0432 0.2310  which yields vi = 0.5048, v = 0.1653 and V3 = 0.2994. Since m = 0, the variance T is equal to 2  the second moment v. Thus  neve = 0.3206.  
Another approach for the evaluation of the variance is to calculate the autocovariance γ_x(t) of x(t); the variance is then γ_x(0). As a first step, the autocovariance of d(t) is calculated from the probability row vector p(t) = [p_1(t) p_2(t) p_3(t)], where p_ℓ(t) is the probability that ξ(t) = ℓ. The following differential equation governs the evolution of p(t):

(d/dt) p(t) = p(t) Q,   (5.57)

which is solved under the constraint that Σ_ℓ p_ℓ(t) = 1 for all t ≥ 0. The conditional autocovariance γ_{d,ℓ}(t) = E[d(0)d(t) | d(0) = d_ℓ] is obtained by solving (5.57) with the corresponding initial conditions. For instance, let q(t) = [q_1(t) q_2(t) q_3(t)] be the solution of (5.57) with the initial condition [1 0 0]. Then γ_{d,1}(t) is obtained as

γ_{d,1}(t) = q_1(t) d_1² + q_2(t) d_1 d_2 + q_3(t) d_1 d_3.   (5.58)

The (unconditional) autocovariance γ_d(t) is obtained by taking the expectation over d(0), that is,

γ_d(t) = Σ_ℓ π_ℓ γ_{d,ℓ}(t).   (5.59)

In our case, we get

γ_d(t) = c_1 e^{-at} + c_2 e^{-bt}   for t > 0,   (5.60)

where a = 2, b = 2.5, c_1 = 0.1520, c_2 = 0.5504.

Since x(t) is the output of G_F(s) = 2/(s + 2), γ_x(t) is calculated from the weighting function w(t) = 2e^{-2t} of G_F(s):

γ_x(t) = ∫_0^∞ w(t_1) { ∫_0^∞ w(t_2) γ_d(t + t_1 - t_2) dt_2 } dt_1.   (5.61)

Noting that the autocovariance is an even function of the time lag, we get

γ_x(0) = 4 ∫_0^∞ e^{-2t_1} { ∫_0^∞ e^{-2t_2} [ c_1 e^{-2|t_1 - t_2|} + c_2 e^{-2.5|t_1 - t_2|} ] dt_2 } dt_1 = 4c_1/8 + 4c_2/9 = 0.3206,   (5.62)

which is identical to (5.56).

5.6 Linear Quadratic Optimal Control

The subject of this section is the optimal control of jump linear systems with mode-dependent constant disturbance, which was introduced in Section 5.5. The system equation and definitions are repeated here:

ẋ(t) = A_ξ x(t) + B_ξ u(t) + d_ξ,   (5.63)

where x ∈ R^{N_x} is the state, and u ∈ R^{N_u} the input.
A_ℓ, B_ℓ and d_ℓ are mode-dependent matrices of appropriate dimensions. The performance index is the long-term average of quadratic functions of the state and input:

J := lim_{T→∞} (1/T) E{ ∫_0^T ( x(t)^T Q_ξ x(t) + u(t)^T R_ξ u(t) ) dt },   (5.64)

where R_ℓ is a positive-definite, and Q_ℓ a positive-semidefinite, matrix. Note that Q_ℓ and R_ℓ may be mode dependent. The state may have a discontinuity at mode changes,

x(t_s) = x(t_s^-) + δ_ij,   (5.65)

where ξ(t) switches from i to j at epoch t_s, and δ_ij is a state jump from mode i to j. In Sworder and Rogers (1983), the problem is solved for d_ℓ = 0. The optimum control u*(t) is a (mode-dependent) state feedback with bias:

u*(t) = -R_ℓ^{-1} B_ℓ^T P_ℓ (x(t) + a_ℓ),   when ξ(t) = ℓ,   (5.66)

where P_ℓ is the solution of the following coupled Riccati equation:

P_ℓ A_ℓ + A_ℓ^T P_ℓ - P_ℓ B_ℓ R_ℓ^{-1} B_ℓ^T P_ℓ + Σ_j q_ℓj P_j + Q_ℓ = 0,   for ℓ ∈ S,   (5.67)

where q_ℓj are the entries of the infinitesimal generator Q of ξ(t). The bias term a_ℓ is obtained by solving the following system of linear equations:

(A_ℓ + P_ℓ^{-1} Q_ℓ) a_ℓ + Σ_j q_ℓj P_ℓ^{-1} P_j (a_ℓ - a_j - δ_ℓj) = 0,   for ℓ ∈ S.   (5.68)

In Khanbaghi et al. (2002), the constant disturbance d_ℓ is absorbed in the state discontinuity:

ẋ(t) = A_ℓ x(t) + B_ℓ u(t) + d_ℓ = A_ℓ [ x(t) + A_ℓ^{-1} d_ℓ ] + B_ℓ u(t).   (5.69)

With a new state

z(t) := x(t) + A_ℓ^{-1} d_ℓ,   (5.70)

the system equation becomes

ż(t) = A_ℓ z(t) + B_ℓ u(t),   (5.71)

where the state jump δ_ij includes A_ℓ^{-1} d_ℓ. Then the method of Sworder and Rogers (1983) can be applied. However, this approach makes it difficult to express x(t)^T Q_ℓ x(t) in the performance index in terms of the new state z(t). In Khanbaghi et al. (2002), x(t) = [y(t) u(t)]^T and A_ℓ^{-1} did not affect the y(t) part, and the Q_ℓ matrix has a non-zero element only for y(t), so this problem did not surface. If it is desired to include a u(t) term in the performance index, this approach does not work. Here, the problem was solved with the original state x(t) and non-zero disturbance d_ℓ.
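A minimal numerical sketch of this solution for a scalar two-mode system (all numbers hypothetical, not from the thesis): the coupled Riccati equation (5.67) is integrated to steady state, and the bias terms then follow from the linear equations with the d_ℓ term retained (cf. (5.72) below), assuming no state jumps (δ = 0).

```python
# Scalar two-mode jump system: xdot = a[l]*x + b[l]*u + d[l],
# generator Q = [[-lam0, lam0], [lam1, -lam1]], weights qw[l], rw[l].
a = [-0.5, -1.0]; b = [1.0, 1.0]; d = [0.4, -0.2]
qw = [1.0, 2.0]; rw = [0.5, 0.5]
lam = [0.3, 0.8]
Q = [[-lam[0], lam[0]], [lam[1], -lam[1]]]

def riccati_res(P, l):
    # residual of the coupled Riccati equation (5.67), scalar case
    return (2*a[l]*P[l] - (b[l]*P[l])**2/rw[l]
            + Q[l][0]*P[0] + Q[l][1]*P[1] + qw[l])

P = [1.0, 1.0]
for _ in range(20000):                  # Euler integration of -Pdot to steady state
    P = [P[l] + 1e-3*riccati_res(P, l) for l in range(2)]

# bias terms: (a_l + qw_l/P_l)*al_l - d_l + lam_l*(P_j/P_l)*(al_l - al_j) = 0
g = [lam[0]*P[1]/P[0], lam[1]*P[0]/P[1]]
A00 = a[0] + qw[0]/P[0] + g[0]; A01 = -g[0]
A10 = -g[1];                   A11 = a[1] + qw[1]/P[1] + g[1]
det = A00*A11 - A01*A10
alpha = [(d[0]*A11 - A01*d[1])/det, (A00*d[1] - A10*d[0])/det]
```

The iteration converges because the closed-loop dynamics in each mode are stable, so the residual acts as a contracting update for a sufficiently small step.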
The detailed mathematical derivation is included in the Appendix of this chapter. The coupled Riccati equation is the same as (5.67), and the equations for a_ℓ become as follows:

(A_ℓ + P_ℓ^{-1} Q_ℓ) a_ℓ - d_ℓ + Σ_j q_ℓj P_ℓ^{-1} P_j (a_ℓ - a_j - δ_ℓj) = 0,   for ℓ ∈ S.   (5.72)

5.7 Summary

It is shown that continuous-time Markov jump systems can be investigated as discrete-time systems sampled at the end of each mode. From this fact, two practically important applications are derived: calculation of the state probability distributions of jump systems, and calculation of the state moments of linear jump systems. The probability density function of a filtered binary random signal is obtained analytically. Unlike approaches based on simulations, the new method has neither time-discretization approximations nor randomness due to computer-generated random numbers.

5.8 Appendix - Mathematical Details

5.8.1 Covariance of x(t)

In this section, the subscript ℓ for mode is omitted if no confusion exists. Since Γ := E[(x(t) - m)(x(t) - m)^T] = E[x(t)x(t)^T] - mm^T, and m is known, the second moment v := E[x(t)x(t)^T] is calculated. We write (5.46) as

x(t)x(t)^T = G(t)x_0x_0^T G(t)^T + H(t)dd^T H(t)^T + H(t)dx_0^T G(t)^T + G(t)x_0d^T H(t)^T,

where G(t) := e^{At} and H(t) := ∫_0^t e^{As} ds.

Taking vec operations on both sides (dropping the t argument for G and H) and noting that vec(ABC) = (C^T ⊗ A)vec(B),

vec(x(t)x(t)^T) = (G ⊗ G)vec(x_0x_0^T) + (H ⊗ H)vec(dd^T) + (G ⊗ H)vec(dx_0^T) + (H ⊗ G)vec(x_0d^T).

The following expectations over the sojourn time are defined:

GG := λ ∫_0^∞ (G ⊗ G) e^{-λt} dt,

and HH, HG and GH are calculated similarly. So, for a fixed x_0 and mode ℓ,

vec(E[x(t)x(t)^T | x_0]) = GG vec(x_0x_0^T) + HH vec(dd^T) + GH vec(dx_0^T) + HG vec(x_0d^T).

The conditional second moment of x(t),
is obtained by taking the expectation over x_0,

vec(v_ℓ) = E_{x_0}( vec(E[x(t)x(t)^T | x_0]) )
         = GG vec(E_{x_0}(x_0x_0^T)) + HH vec(dd^T) + GH vec(d E_{x_0}(x_0)^T) + HG vec(E_{x_0}(x_0) d^T)
         = GG vec(v_ℓ0) + HH vec(dd^T) + GH vec(d x̄_ℓ0^T) + HG vec(x̄_ℓ0 d^T),

where v_ℓ0 := E(x_0x_0^T) and x̄_ℓ0 := E(x_0). It is shown in Section 5.5 that

x̄_ℓ0 = Σ_{j∈S} r_jℓ m_j,   (5.73)

where it is understood that r_jj = 0. If the prior state is j, x_0x_0^T = x_j1 x_j1^T, and E(x_j1 x_j1^T) = v_j. By the same reasoning as for the evaluation of the mean of x(t),

v_ℓ0 = Σ_{j∈S} r_jℓ v_j,

where r_jℓ is the entry probability from j to ℓ (5.29). Finally,

vec(v_ℓ) = GG_ℓ vec( Σ_{j∈S} r_jℓ v_j ) + GH_ℓ vec( d_ℓ Σ_{j∈S} r_jℓ m_j^T ) + HG_ℓ vec( Σ_{j∈S} r_jℓ m_j d_ℓ^T ) + HH_ℓ vec(d_ℓ d_ℓ^T).

The above is rewritten as

vec(v_ℓ) = GG_ℓ vec( Σ_{j∈S} r_jℓ v_j ) + c_ℓ,   (5.74)

where c_ℓ is a vector made by combining the terms without v, known after the calculation of the m_j's:

c_ℓ = GH_ℓ vec( d_ℓ Σ_{j∈S} r_jℓ m_j^T ) + HG_ℓ vec( Σ_{j∈S} r_jℓ m_j d_ℓ^T ) + HH_ℓ vec(d_ℓ d_ℓ^T).   (5.75)

A system of linear equations is constructed for the v_ℓ. An example for N = 3 is shown below.

[  I              -r_21 GG_1     -r_31 GG_1 ] [ vec(v_1) ]   [ c_1 ]
[ -r_12 GG_2      I              -r_32 GG_2 ] [ vec(v_2) ] = [ c_2 ]   (5.76)
[ -r_13 GG_3      -r_23 GG_3     I          ] [ vec(v_3) ]   [ c_3 ]

5.8.2 Linear Quadratic Control

The following jump system is considered:

ẋ(t) = f_{ξ(t)}(x, u, t).   (5.77)

Subscripts ℓ denote the state of a continuous-time Markov process ξ(t) with the infinitesimal generator Q = [q_ij]. The problem is first solved for a finite horizon, and later the steady-state solution is sought. The performance index J is

J = E[ ∫_0^{t_f} L_{ξ(t)}(x, u, t) dt ].   (5.78)

Note that the penalty function L may be a function of ξ(t), and its dependence on ξ(t) is denoted with the subscript on L.
The to-go value function V(x, t) when ξ(t) = ℓ is denoted with V_ℓ(x, t):

V_ℓ(x, t) := min_u E[ ∫_t^{t_f} L_{ξ(τ)}(x, u, τ) dτ | ξ(t) = ℓ ].   (5.79)

Also, in our case there are discontinuities in the state when ξ(t) changes its state. Let δ_ij denote the deterministic jump of the state when ξ(t) shifts from i to j:

x(t) = x(t^-) + δ_ij,   ( ξ(t) = j | ξ(t^-) = i ).   (5.80)

The following coupled Hamilton-Jacobi equations for ℓ ∈ S are obtained following Wonham (1970) and Sworder and Rogers (1983):

-∂V_ℓ(x, t)/∂t = min_u { L_ℓ(x, u, t) + (∂V_ℓ/∂x) f_ℓ(x, u, t) } + Σ_j q_ℓj [ V_j(x + δ_ℓj, t) ],   (δ_ℓℓ = 0).   (5.81)

Linear System with Bias

ẋ(t) = A_ℓ x(t) + B_ℓ u(t) + d_ℓ,   (5.82)
L_ℓ = x(t)^T Q_ℓ x(t) + u(t)^T R_ℓ u(t).   (5.83)

It is assumed that V_ℓ has the following form:

V_ℓ(x, t) = (x + a_ℓ(t))^T P_ℓ(t) (x + a_ℓ(t)) + β_ℓ(t).   (5.84)

Differentiation of the right-hand side of (5.84) (the time argument t is dropped to streamline notation) with respect to t and x yields

LHS = -∂V_ℓ/∂t = -x^T Ṗ_ℓ x - ȧ_ℓ^T P_ℓ a_ℓ - a_ℓ^T Ṗ_ℓ a_ℓ - a_ℓ^T P_ℓ ȧ_ℓ - (ȧ_ℓ^T P_ℓ + a_ℓ^T Ṗ_ℓ) x - x^T (Ṗ_ℓ a_ℓ + P_ℓ ȧ_ℓ) - β̇_ℓ,   (5.85)

and

∂V_ℓ/∂x = ∂/∂x [ (x + a_ℓ)^T P_ℓ (x + a_ℓ) ] = [P_ℓ(x + a_ℓ)]^T + (x + a_ℓ)^T P_ℓ.

So

(∂V_ℓ/∂x) f_ℓ = (∂V_ℓ/∂x)(A_ℓ x + B_ℓ u + d_ℓ)
  = (A_ℓ x + B_ℓ u + d_ℓ)^T P_ℓ (x + a_ℓ) + (x + a_ℓ)^T P_ℓ (A_ℓ x + B_ℓ u + d_ℓ)
  = u^T B_ℓ^T P_ℓ (x + a_ℓ) + (x + a_ℓ)^T P_ℓ B_ℓ u + (x^T A_ℓ^T + d_ℓ^T) P_ℓ (x + a_ℓ) + (x + a_ℓ)^T P_ℓ (A_ℓ x + d_ℓ).

Then

L_ℓ + (∂V_ℓ/∂x) f_ℓ = x^T Q_ℓ x + u^T R_ℓ u + u^T B_ℓ^T P_ℓ (x + a_ℓ) + (x + a_ℓ)^T P_ℓ B_ℓ u + (x^T A_ℓ^T + d_ℓ^T) P_ℓ (x + a_ℓ) + (x + a_ℓ)^T P_ℓ (A_ℓ x + d_ℓ),

which takes its minimum at

u* = -R_ℓ^{-1} B_ℓ^T P_ℓ (x + a_ℓ).   (5.86)

And

min_u [ L_ℓ + (∂V_ℓ/∂x) f_ℓ ] = -(x + a_ℓ)^T P_ℓ B_ℓ R_ℓ^{-1} B_ℓ^T P_ℓ (x + a_ℓ) + x^T Q_ℓ x + (x^T A_ℓ^T + d_ℓ^T) P_ℓ (x + a_ℓ) + (x + a_ℓ)^T P_ℓ (A_ℓ x + d_ℓ).   (5.87)
A matrix S_ℓ is defined as

S_ℓ := P_ℓ B_ℓ R_ℓ^{-1} B_ℓ^T P_ℓ.   (5.88)

Substitution of (5.87) and (5.84) into the RHS of (5.81) yields

RHS = -(x + a_ℓ)^T S_ℓ (x + a_ℓ) + x^T Q_ℓ x + (x^T A_ℓ^T + d_ℓ^T) P_ℓ (x + a_ℓ) + (x + a_ℓ)^T P_ℓ (A_ℓ x + d_ℓ)
      + Σ_j q_ℓj [ (x + a_j + δ_ℓj)^T P_j (x + a_j + δ_ℓj) + β_j ]
    = x^T [ P_ℓ A_ℓ + A_ℓ^T P_ℓ - S_ℓ + Σ_j q_ℓj P_j + Q_ℓ ] x
      + x^T [ -S_ℓ a_ℓ + A_ℓ^T P_ℓ a_ℓ + P_ℓ d_ℓ + Σ_j q_ℓj P_j (a_j + δ_ℓj) ]
      + [ -a_ℓ^T S_ℓ + a_ℓ^T P_ℓ A_ℓ + d_ℓ^T P_ℓ + Σ_j q_ℓj (a_j + δ_ℓj)^T P_j ] x
      - a_ℓ^T S_ℓ a_ℓ + d_ℓ^T P_ℓ a_ℓ + a_ℓ^T P_ℓ d_ℓ + Σ_j q_ℓj [ (a_j + δ_ℓj)^T P_j (a_j + δ_ℓj) + β_j ].   (5.89)

From the Hamilton-Jacobi equation (5.81), LHS (5.85) = RHS (5.89). This must hold for any x, so equating the x^T[·]x terms on both sides, the following coupled Riccati differential equation is obtained:

-Ṗ_ℓ = P_ℓ A_ℓ + A_ℓ^T P_ℓ - S_ℓ + Σ_j q_ℓj P_j + Q_ℓ.   (5.90)

Next, from the x^T[·] and [·]x terms,

-Ṗ_ℓ a_ℓ - P_ℓ ȧ_ℓ = -S_ℓ a_ℓ + A_ℓ^T P_ℓ a_ℓ + P_ℓ d_ℓ + Σ_j q_ℓj P_j (a_j + δ_ℓj).

Ṗ_ℓ is substituted with (5.90):

-P_ℓ ȧ_ℓ + [ P_ℓ A_ℓ + A_ℓ^T P_ℓ - S_ℓ + Σ_j q_ℓj P_j + Q_ℓ ] a_ℓ = -S_ℓ a_ℓ + A_ℓ^T P_ℓ a_ℓ + P_ℓ d_ℓ + Σ_j q_ℓj P_j (a_j + δ_ℓj).

So

-P_ℓ ȧ_ℓ = -P_ℓ A_ℓ a_ℓ - Q_ℓ a_ℓ + P_ℓ d_ℓ + Σ_j q_ℓj P_j (a_j + δ_ℓj - a_ℓ).

From the above, the differential equation for a_ℓ is obtained:

ȧ_ℓ = (A_ℓ + P_ℓ^{-1} Q_ℓ) a_ℓ - d_ℓ + Σ_j q_ℓj P_ℓ^{-1} P_j (a_ℓ - a_j - δ_ℓj).   (5.91)

Equating the terms without x,

-ȧ_ℓ^T P_ℓ a_ℓ - a_ℓ^T Ṗ_ℓ a_ℓ - a_ℓ^T P_ℓ ȧ_ℓ - β̇_ℓ = -a_ℓ^T S_ℓ a_ℓ + d_ℓ^T P_ℓ a_ℓ + a_ℓ^T P_ℓ d_ℓ + Σ_j q_ℓj [ (a_j + δ_ℓj)^T P_j (a_j + δ_ℓj) + β_j ].

From the above, a differential equation for β_ℓ is obtained, the derivation of which is skipped as β_ℓ is not used for control.
Stationary Solution

If the stationary solution exists, Ṗ_ℓ, ȧ_ℓ and β̇_ℓ are all zero. From Ṗ_ℓ = 0, we get the coupled algebraic Riccati equation

P_ℓ A_ℓ + A_ℓ^T P_ℓ - S_ℓ + Σ_j q_ℓj P_j + Q_ℓ = 0.   (5.92)

For a_ℓ,

(A_ℓ + P_ℓ^{-1} Q_ℓ) a_ℓ - d_ℓ + Σ_j q_ℓj P_ℓ^{-1} P_j (a_ℓ - a_j - δ_ℓj) = 0.   (5.93)

Since a_ℓ - a_ℓ - δ_ℓℓ = 0,

(A_ℓ + P_ℓ^{-1} Q_ℓ) a_ℓ - d_ℓ + Σ_{j≠ℓ} q_ℓj P_ℓ^{-1} P_j (a_ℓ - a_j - δ_ℓj) = 0.   (5.94)

So

( A_ℓ + P_ℓ^{-1} Q_ℓ + Σ_{j≠ℓ} q_ℓj P_ℓ^{-1} P_j ) a_ℓ - Σ_{j≠ℓ} q_ℓj P_ℓ^{-1} P_j a_j = d_ℓ + Σ_{j≠ℓ} q_ℓj P_ℓ^{-1} P_j δ_ℓj.   (5.95)

A system of linear equations for a_1, ..., a_N is constructed by stacking the a_ℓ's. For N = 3,

[ A_1 + P_1^{-1}Q_1 + Σ_{j≠1} q_1j P_1^{-1}P_j     -q_12 P_1^{-1}P_2      -q_13 P_1^{-1}P_3 ] [ a_1 ]   [ d_1 + Σ_{j≠1} q_1j P_1^{-1}P_j δ_1j ]
[ -q_21 P_2^{-1}P_1      A_2 + P_2^{-1}Q_2 + Σ_{j≠2} q_2j P_2^{-1}P_j     -q_23 P_2^{-1}P_3 ] [ a_2 ] = [ d_2 + Σ_{j≠2} q_2j P_2^{-1}P_j δ_2j ]
[ -q_31 P_3^{-1}P_1      -q_32 P_3^{-1}P_2      A_3 + P_3^{-1}Q_3 + Σ_{j≠3} q_3j P_3^{-1}P_j ] [ a_3 ]   [ d_3 + Σ_{j≠3} q_3j P_3^{-1}P_j δ_3j ]

Chapter 6

Broke Storage Tank Level Control

6.1 Introduction

Sheet breaks cause the biggest upsets in paper machine operation. When a sheet break occurs, the paper sheet cannot be rolled into a jumbo roll and is routed to the broke storage tank after it is repulped with the addition of water. In this thesis, this incoming flow to the broke storage tank is called the "broke flow" and denoted by F_b(t). The broke storage tank is the first process to receive the upset, and therefore managing the broke tank level properly goes a long way towards stabilizing the paper machine wet end. Presently, very few broke tanks are under automatic control. Effective automatic broke storage tank level control would improve paper machine operation greatly. In this chapter, the nature of the input disturbance to the broke storage tank is investigated. Then two control schemes, one linear and the other nonlinear, are presented. Utilizing the results of the last chapter, the probability distribution of the tank level can be evaluated reliably, so each control scheme can be assessed without simulations.
6.1.1 Paper Machine

The paper machine produces paper continuously from a very thin pulp slurry (0.3 - 0.6% consistency) by first removing water through a rotating mesh of plastic fabric (the wire), then pressing, and finally drying with steam. Approximately 98% of the water is removed through the wire, and is called "white water". The pulp slurry contains, in addition to pulp fibres, fines and various additives, such as clay, dye, starch, and retention aids. Non-negligible amounts of fines and additives pass through the wire and get into the white water. It is important to put these lost fines and additives back into the paper sheet to maximize resource usage and minimize discharges to the environment. This is achieved by a system of disk filters, called a "saveall". Also, some of the white water is used for dilution in the pulping process, and thus eventually comes back to the paper machine. A very simplified material flow in the wet end is depicted in Figure 6.1.

During normal operation, the paper machine produces some waste paper in the form of trimmings, off-grade paper, and paper produced during sheet breaks, which cannot be incorporated into a paper reel. This kind of paper is called "broke", and is put back into the paper machine after it is reintroduced into the pulp slurry by adding water.

Figure 6.1: Simplified diagram of a paper machine wet end. Fines and short fibres in white water (W.W.) are recovered at the saveall, and mixed with fresh pulp and broke stock at the blend chest.

6.1.2 Benefits of Wet End Stabilization

Modern paper machines produce finely graded papers from a complex mixture of various pulps and additives at high speed. Different kinds of pulps, including groundwood pulp, chemi-thermomechanical pulp, kraft pulp, and recycled (deinked) pulp, are utilized to attain the desired paper quality economically.
In addition to these materials, saveall stock and broke are added to minimize raw material usage, as shown in Figure 6.1. These pulps have different physical and chemical characteristics, including average fibre length, strength and freeness. Dewatering and retention characteristics of the pulp slurry on the wire are in turn determined by the pulp slurry composition. For instance, kraft pulp, which is rich in long fibre, tends to be retained on the wire, and fines help to keep additives on the wire by clogging up spaces between pulp fibres. As the pulp slurry dewatering characteristics change, the amount of water carried over to the press and dryer sections changes. This causes moisture variations in the final paper until the moisture control system compensates, which takes longer than several minutes. Any composition change in the pulp slurry affects machine runnability and the quality of the final paper.

At the same time, the amount and composition of the white water change. The white water is used in pulping processes and in the stock preparation area. Therefore, changes in white water composition come back to the pulp slurry through many routes with different time constants. Thus, a pulp slurry composition change starts a complex and long-lasting chain of events which affects the paper machine operation and the final paper quality. A number of studies on the white water network using dynamic computer simulation have been published ((Bussiere et al., 1987), (Croteau and Roche, 1987), (Kropholler et al., 1994), (Orccotoma et al., 1997b) and (Orccotoma et al., 1997a)). In Orccotoma et al. (1997a), fines distribution in a white-water network was investigated for a single-line 600 tons/day newsprint mill. Simulation results showed that the effects of a step change in the fines content in the fresh pulp lasted more than 5 hours.

Stabilizing the wet end operation brings about many benefits (Gess and Kanitz, 1996).
First, it stabilizes the pulp slurry composition, which keeps the final paper quality constant. Also, it is known that a stable pulp slurry composition reduces sheet breaks, thus increasing paper machine efficiency. This not only improves the economy of the papermaking operation, but also minimizes raw material usage, energy consumption, and off-grade products, which otherwise have to be reprocessed with additional energy. Variations of flows in the stock preparation area are minimized. This reduces overflowing and emptying of the storage tanks, which means reduced fresh water usage and reduced discharge of effluent. In Bonhivers et al. (1999b), it is shown that coordinated control of a white water network may reduce fresh water usage and effluent discharge drastically.

6.1.3 Broke Handling

During normal operation, approximately 3% of production is routed to broke storage as trimmings. During a sheet break, 100% of production is routed to broke storage. Therefore, assuming the same consistency for trimmings and broke, the incoming flow to the broke storage increases 33 times. Thus, the broke storage tank level increases suddenly when a sheet break occurs. As sheet breaks happen at random, it is not easy to keep a constant broke ratio without overflowing or emptying the broke storage tank. When a sheet break happens, not only does the broke storage tank level increase, but the white water demand also increases to dilute broke at the repulper, and the saveall load increases due to the increased cloudy flow from the broke thickener. In short, sheet breaks initiate large upsets in the white water network. In Bussiere et al. (1987), the effect of sheet breaks on the fines content in the fresh pulp was studied by computer simulation. They found that a 40-minute-long break caused a 2.4 to 5% increase in the fines content, and it took 5 hours to stabilize.

Usually 10 to 20% of the total production is broke.
Broke has different characteristics from the other furnish components since it has already been processed by the paper machine. The amount of broke in the furnish is defined by the broke ratio. It is important to keep the broke ratio constant to keep the paper quality constant.

6.1.4 Literature Review

There are only a few studies published on automatic control of broke storage tank level. Khanbaghi et al. (2002) treated a network of a broke storage tank and a white water tank. Jump linear quadratic control was utilized because the broke flow was modelled as a finite-state Markov process. The flow roughness was measured by the variance of the outlet flow rate change, Var[u̇(t)] = Var[v(t)]. First passage times to overflow were obtained to assess the performance of the resulting controllers. This thesis owes many ideas, including the broke flow modelling and the application of jump system theory, to their study.

In Dabros et al. (2003), the optimal way to change u(t) was investigated. A notable feature of the study is that changes of the pulp properties at the headbox, obtained by simulations, are used as the performance index. However, the time and the final value of u(t) are assigned by the operator. Thus, this study gives some hints on automatic control, but is not enough to construct an automatic control scheme by itself.
(1997) and Section t  7.2 show that this hypothesis holds well for industrial data. Thus, sheet breaks are modelled as a two-state continuous-time Markov process'%($)' 1 normal operation 2 sheet break  (6.1)  It is assumed that the normal operation duration T\ follows the exponential distribution with intensity X\ and the break duration T with A , that is, 2  2  P[7i < t] = 1 -  F[T  2  <t] = l -  e~ .  (6.2)  Xlt  Then the infinitesimal generator Q of £{t) is  Q =  (6.3)  A —A 2  2  A simple assumption is that the broke flow'F&(f)is a deterministic function of £(r), F (t) = F„(at)) = \ b  F i #0=1 F #0 = 2, b  b2  (6.4)  6.2. Characteristics of Broke Flow  89  where Fb\ and Fbi are constant. This assumption makes it possible to model Fb(t) as a two-state continuous time Markov process. From a simple material balance point of view, this assumption seems reasonable. Most of the published simulation studies on broke management assume that the broke flow is binary. It turns out that the industrial data used later on this thesis shows non-binary characteristics. However, the binary broke flow assumption is adopted in the thesis because (a) the powerful theory of Markov processes can be utilized, and (b) even though the industrial data is not quite binary, it can be approximated by a binary flow in such a way that the resulting controllers are conservative with respect to overflow probability. 6.2.1  Spectrum of B r o k e F l o w  To utilize linear optimal control theory, the power spectral density of the broke flow is obtained. Let a row vector p(t)  = [pi(t) P2(t)] denote the probability that £(t) is in state 1 or 2 at time t. It  is known that pit) evolves according to the following differential equation:  j p{t) t  Since p2{t)  = p(t)Q.  
(6.5)  = 1 - pi(t), P\(t) =  + A2p2{t)  -Aipi(t)  = -A\p\(t) + A {\ - pi(t))  =  2  -(Ax + A )pi(t) + A 2  2  (6.6)  So p\(t) for the initial condition p\{0) = c is,  p (t) = cex  (Al+A2)t  + A f e-^ dr  (6.7)  +h)T  2  Jo  Let q\(t) denote the probability that £(t) = 1 when £(0) = 1. Substitution of c = 1 into (6.7) yields  A\ + A  A\ "1" + AA  2  2  (6.8)  Ai / l l + 1- AA 2  2  2  The following symbols are defined to streamline notations:  A Ai++AA, , Ab := '.- Ai b  2  2  Ai  n  —!—, i\li :-:=—• A] Ai  l  2  + A  :=  A  (6.9)  2  Ai + A  2  2  Then,  qi(t) = t e x  Xbt  + i  2  (6.10)  6.2. Characteristics of Broke Flow  90  Autocovariance of Broke Flow  The autocovariance of the broke flow Fb(f) is obtained using qi(t). The mean length of T\ is \/X  x  and T 1 /X . The meanflowrate F is 2  2  m  F  m  ^ A\ + A  =  ^ A\ + A  +  =  2  i  i  F  b  x  +  {  x  F  b  2  (  6  -  n  )  2  Let fb(f) denote the zero-meaned F (t), and fb\ and fb denote fb(t) when £(t) = 1 and 2 respecb  2  tively.  /(0 := F {t) 6  F  b  fti := F  M  -  m  F = (1 - fc)F m  ./M := F M - F  b2  M  =  £ (F x  - F ) = W « - /«) < 0  bx  x  2  (6.12)  b2  £ )Fb - l Fb\ =  = (1 -  M  where an identity f - f bX  - £,F  W  - fb\) > 0,  - F M ) = £ (fb2  2  2  = F - Fb is used. bX  2  Let y(t) denote the autocovariance function of F (t), b  7(0 := E[(F,(0) - F )(F (t) - F )] = E(f (0)f (t)] m  b  m  b  (6.13)  b  Here, y(t) is calculated as the expectation of the conditional expectation given £(0): 7(0 =  B [E[f (0)Mt)W)]] m  b  = mwmMtmo)  = i]+mifb(o)Mt)\m  = i\  (6.14)  where 71 (0 and 7(0 are the conditional autocovariances. For instance, 2  7,(0 := E[f (0)Mt)\a0)  = 1] =  b  When £(0) = l,f (t) is f b  bx  fbiE[f (t)\m b  with the probability p {t) and f x  b2  b  1].  (6.15)  with the probability 1 - p (t). Thus, x  + 0 -Pi(t))f fb2=pi(t)(f  7i(0 =Pi(t)f \  =  bi  -f f )+f f .  
Substitution of (6.10) for p_1(t) yields

γ_1(t) = (ℓ_1 e^{-λ_b t} + ℓ_2)(f_b1² - f_b1 f_b2) + f_b1 f_b2
       = ℓ_1 e^{-λ_b t} (f_b1² - f_b1 f_b2) + f_b1 (ℓ_2 f_b1 + ℓ_1 f_b2)
       = f_b1² e^{-λ_b t},   (6.17)

where between the lines (6.12) is substituted; in particular ℓ_2 f_b1 + ℓ_1 f_b2 = 0 and ℓ_1 (f_b1 - f_b2) = f_b1. Similarly, γ_2(t) = f_b2² e^{-λ_b t}. Substitution of γ_1(t) and γ_2(t) into (6.14) yields

γ(t) = ℓ_2 γ_1(t) + ℓ_1 γ_2(t) = (ℓ_2 f_b1² + ℓ_1 f_b2²) e^{-λ_b t}.   (6.18)

As ℓ_2 f_b1² + ℓ_1 f_b2² = E[F_b - F_m]², which is the variance of the broke flow, it is denoted by σ_b²:

σ_b² := Var[F_b] = E[F_b - E(F_b)]²,   (6.19)

γ(t) = σ_b² e^{-λ_b t},   0 ≤ t.   (6.20)

Since autocovariance functions are even functions of the time lag, the above expression is extended to include negative t, which is necessary for spectrum calculations:

γ(t) = σ_b² e^{-λ_b |t|},   -∞ < t < ∞.   (6.21)

Spectrum of Broke Flow

The spectrum of the broke flow, S_b, is the Fourier transform of γ(t):

S_b(ω) = ∫_{-∞}^{∞} γ(t) e^{-jωt} dt = ∫_{-∞}^{∞} σ_b² e^{-λ_b|t|} e^{-jωt} dt = σ_b² [ 1/(λ_b + jω) + 1/(λ_b - jω) ] = 2σ_b² λ_b / (ω² + λ_b²).   (6.22)

The spectrum of the broke flow is that of a first-order low-pass filter with cutoff frequency λ_b. For the linear optimal controller design, the (zero-meaned) broke flow f_b(t) can be treated as the output of a unity-gain first-order low-pass filter G_b(s), driven by white noise of intensity 2σ_b²/λ_b:

G_b(s) = λ_b/(s + λ_b).   (6.23)

6.3 Flow Smoothness Requirements

What is the appropriate flow smoothness or roughness indicator for the outlet flow F_u(t) of the broke tank? The immediate downstream process is the consistency control loop of the broke stock flow to the blend chest.
However, the effects of F_u(t) reach much further, and the final paper quality and the runnability of the paper machine are the two major concerns for the paper maker. For paper quality, it is best to keep F_u(t) constant, but for most paper machines, the size of the broke storage tank and the break frequency make this impossible. The second best would be to minimize changes in the broke ratio, which Var[F_u(t)] can represent nicely. For some downstream processes that are regulated by a feedback controller, for instance broke stock consistency, Var[Ḟ_u(t)] = Var[v(t)] would be a better indicator, as shown in Chapter 4. Some production people believe that sudden changes in the flow rate increase the probability of sheet breaks, although wet end operators routinely make step changes in F_u(t). There has been no statistical study published on this subject. Since there are many causes of sheet breaks, it would be difficult to obtain clear-cut conclusions on correlations between the speed of flow rate changes and sheet breaks. However, constraining Var[v(t)] would be beneficial both for regulated downstream processes and for acceptance of the control scheme in the field. For instance, in Khanbaghi et al. (2002), Var[v(t)] is used as the flow roughness indicator.

Thus, it would be best to constrain both F_u(t) and v(t) := u̇(t). In the thesis, a weighted sum of Var[u(t)] and Var[v(t)] is used as the flow roughness indicator for linear controllers, and max u(t) and max |v(t)| are constrained for nonlinear controllers.

6.4 Linear Optimal Controller

Since the input disturbance (broke flow) has a first-order low-pass spectrum, the method described in Section 3.4 is applicable without modifications. For completeness, key definitions and equations are repeated here.
Let F_m denote the mean value of F_b(t), F_m := E[F_b(t)]. The input disturbance is zero-meaned, that is, F_m is subtracted from F_b(t). The state variable x(t) is defined as follows:

x(t) := [ x_1(t)  x_2(t)  u(t) ]^T,   (6.24)

where

x_1(t):  y(t) - y_r, tank level error
x_2(t):  zero-meaned input disturbance = F_b(t) - F_m
u(t):    F_u(t) - F_m, outgoing flow deviation.

The actual outlet flow is F_u(t) = u(t) + F_m. Since x_2(t) is the output of a low-pass filter with cutoff frequency λ_b, the system equation becomes

(d/dt) [ x_1(t); x_2(t); u(t) ] = [ 0   K_p   -K_p ;  0   -λ_b   0 ;  0   0   0 ] [ x_1(t); x_2(t); u(t) ] + [ 0; 0; 1 ] v(t) + [ 0; λ_b; 0 ] w(t),   (6.25)

where, as before, K_p is the process gain and w(t) is white noise. For broke tank level control, both the amplitude of u(t) and the rate of change v(t) are constrained. So the performance index J is

J = Var[y(t)] + ρ² ( Var[v(t)] + μ Var[u(t)] ),   (6.26)

which is expressed with the two matrices Q := diag(1, 0, μρ²) and R := ρ²,

J = E{ x(t)^T Q x(t) + v(t) R v(t) }.   (6.27)

The optimal control v(t)* is a state feedback,

v(t)* = -Kx(t) = -k_1 x_1(t) - k_2 x_2(t) - k_3 u(t).   (6.28)

The algebraic Riccati equation (ARE) is

PA + A^T P - PBR^{-1}B^T P + Q = 0.   (6.29)

Then the state feedback gain K is

K = [ k_1  k_2  k_3 ] = B^T P/ρ² = [ p_13  p_23  p_33 ]/ρ²,   (6.30)

where p_ij is the ij-th element of P. Thus K = [k_1 k_2 k_3] is obtained as follows:

k_1 = -1/ρ,
k_2 = -(K_p/ρ)( λ_b + √(2K_p/ρ + μ) ) / ( λ_b² + λ_b √(2K_p/ρ + μ) + K_p/ρ ),
k_3 = √(2K_p/ρ + μ).   (6.31)

The equation v(t) = -Kx(t) is expressed in its Laplace transform,

v(s) = su(s) = -k_1 x_1(s) - k_2 x_2(s) - k_3 u(s).   (6.32)

Substitution of (3.24) into (6.32) yields

u(s)( s + k_2 + k_3 ) = x_1(s)( -k_1 - s k_2/K_p ).   (6.33)

Let C_L(s) denote the transfer function of the controller,

u(s) = C_L(s)( y(s) - y_r ) = C_L(s) x_1(s).   (6.34)

Then

C_L(s) = ( -k_1 - s k_2/K_p ) / ( s + k_2 + k_3 ) = ( -k_2/K_p ) ( s + k_1 K_p/k_2 ) / ( s + k_2 + k_3 ) = K_c (s + b)/(s + a),   (6.35)

where

K_c = -k_2/K_p,   a = k_2 + k_3,   b = k_1 K_p/k_2.   (6.36)
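The gain formulas (6.31) can be exercised numerically (a sketch; the plant and tuning numbers are hypothetical). With these gains, the level-flow subsystem has characteristic polynomial s² + k_3 s + K_p/ρ, so a deterministic simulation with a decaying disturbance state should return the level error to zero:

```python
import math

Kp, rho, mu, lam_b = 0.1, 1.0, 0.5, 0.2     # hypothetical plant and tuning

# state feedback gains, equation (6.31)
k3 = math.sqrt(2*Kp/rho + mu)
k1 = -1.0/rho
k2 = -(Kp/rho)*(lam_b + k3) / (lam_b**2 + lam_b*k3 + Kp/rho)

Kc, aa, bb = -k2/Kp, k2 + k3, k1*Kp/k2      # controller constants (6.36)

# closed loop: x1dot = Kp*(x2 - u), x2dot = -lam_b*x2, udot = v
x1, x2, u = 0.0, 1.0, 0.0                   # disturbance state starts at 1
dt = 1e-3
for _ in range(int(300.0/dt)):
    v = -k1*x1 - k2*x2 - k3*u
    x1, x2, u = (x1 + dt*Kp*(x2 - u),
                 x2 - dt*lam_b*x2,
                 u + dt*v)
```

Here Kc, aa and bb all come out positive, consistent with the phase-lag form K_c(s + b)/(s + a) of the controller.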
The closed-loop natural frequency is ω_c = √(K_p/ρ), and the sensitivity function S is

S = s² / ( s² + ω_c √(2 + μ/ω_c²) s + ω_c² ).   (6.37)

The closed-loop damping factor η is

η = 0.5 √(2 + μ/ω_c²).   (6.38)

Thus, η increases from 0.5√2 towards ∞ as more weight is given to Var[u(t)] (larger μ). The controller has two tuning parameters, ρ and μ. They are adjusted to obtain the desired Var[y(t)] and Var[u(t)]. An example applied to an industrial case is given in Section 7.3.2.

6.4.1 Jump System Theory

In Khanbaghi et al. (2002), linear jump quadratic control theory was applied to the broke management problem. The state vector is x(t) := [x_1(t) u(t)]^T, and the system equation is

(d/dt) [ x_1(t); u(t) ] = [ 0  -K_p ;  0  0 ] [ x_1(t); u(t) ] + [ 0; 1 ] v(t) + [ K_p F_b(t); 0 ].   (6.39)

The performance index is the long-term average of a quadratic function of the state and input,

J = lim_{T→∞} (1/T) E[ ∫_0^T ( x(t)^T Q x(t) + v(t) R v(t) ) dt ].   (6.40)

The optimum control is a state feedback with bias,

v(t)* = -K( x(t) + a_ℓ ) = -k_1 x_1(t) - k_2 u(t) - K a_ℓ.   (6.41)

The feedback gain K is generally a function of ξ(t), K = K(ξ(t)). In this case, the system matrices A and B and the performance index matrices Q and R, which may be functions of ξ(t), are constant. So K does not depend on ξ(t), but the bias term a_ℓ does change with ξ(t). The two v(t)* of (6.28) and (6.41) are compared. Equation (6.28) has a term k_2 x_2(t) that seems to be a variable. However, x_2(t) is constant for each state of ξ(t). The bias term a_ℓ in (6.41) serves the same purpose as x_2(t). Thus, the two equations are the same. If the same Q and R are used, there is no advantage in the jump linear system method. With the new method of evaluating the variances for each mode presented in Section 5.5, it is possible to adjust Q and R for each mode to obtain desired results. Then the feedback gain K becomes a function of ξ(t), and the controller becomes a variable-gain controller.
This path is not pursued further because, for broke tank level control, the nonlinear controllers of Section 6.5 would be more practical.

6.4.2 Implementation Issues

Since the controller C_L has no integrating action, the controller output u(t) must be base-loaded with the mean value of F_b(t), F_m, as shown in Figure 6.3. The statistical nature of F_b(t) may change with time due to grade changes or other operational or equipment changes. Thus, occasional adjustments of F_m by some averaging or adaptive mechanism may be necessary. This is an important issue and could be a shortcoming in implementing C_L, but it is not pursued further in the thesis.

Figure 6.3: Implementation of C_L. The controller output u(t) is base-loaded with the mean incoming flow F_m.

6.5 Nonlinear Controllers

In this section, the input disturbance does not have to be zero-mean, and u(t) = F_u(t). Broke tank level control has the following special characteristics:

1. F_b(t) during breaks, F_b2, is much larger than the mean F_m, and it is not practical to make the outlet flow u(t) equal to F_b2 to bring down the tank level. Usually, piping and pumps for the broke stock flow are not geared to handle flow rates as high as F_b2. Also, from a paper quality point of view, most operations will not allow such a high broke ratio.

2. The difference between F_m and F_b1 (F_b(t) during normal operation) is small, so bringing u(t) to F_b1 can be done with only a small flow roughness. Thus, preventing the tank from becoming empty is easy.

This means that the best the controller can do is to minimize the probability of tank overflows. In practice, for breaks of exceptionally long duration, normal operation of the paper machine is discontinued, and the broke tank level controller is put into manual mode.
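The break/normal disturbance assumed throughout can be sampled directly, since its defining property as a two-state continuous-time Markov process is that break and normal durations are exponentially distributed. A sketch with illustrative means (close to the values later estimated in Chapter 7; function and variable names are mine):

```python
import random

def sample_cycles(lam_normal, lam_break, n_cycles, seed=0):
    """Sample alternating normal/break durations; both are exponential
    with means 1/lam_normal and 1/lam_break respectively."""
    rng = random.Random(seed)  # seeded for reproducibility
    normals = [rng.expovariate(lam_normal) for _ in range(n_cycles)]
    breaks = [rng.expovariate(lam_break) for _ in range(n_cycles)]
    return normals, breaks

# Means in hours: normal ~ 6.633 h, break ~ 0.437 h (illustrative).
normals, breaks = sample_cycles(1 / 6.633, 1 / 0.437, 2000)
mean_break = sum(breaks) / len(breaks)
mean_normal = sum(normals) / len(normals)
```

With 2000 cycles the sample means should fall close to the specified means.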
6.5.1 Minimum Overflow Probability Controller

One way of dealing with the flow smoothness issue is to put hard limits on both u(t) and |u̇(t)| = |v(t)|, and to design control strategies that minimize the overflow probability under such constraints. From the binary nature of F_b(t), it is straightforward to design the minimum overflow probability control strategy. Let u(t) and v(t) be constrained as follows:

    u_min ≤ u(t) ≤ u_max,    v_min ≤ v(t) ≤ v_max.    (6.42)

The key condition is that u_max < F_b2 and u_min < F_b1, so during breaks the level increases (although we can control the rate of increase), and during normal operation it is possible to prevent the level from going below the low limit by setting u(t) = F_b1. Usually it would be satisfactory to have symmetrical constraints on v(t), v_min = -v_max, but non-symmetrical constraints do not complicate the controller design. To minimize the overflow probability:

- during normal operation, the level should be brought down to the low limit y_L as fast as possible, to maximize the storage capacity at the onsets of breaks, and
- during breaks, the level increase rate should be minimized. This means u(t) should be increased to its maximum value u_max as fast as possible.

At the onset of a break, to make the level increase rate as low as possible, u(t) is increased toward u_max if u(t) < u_max. Thus, during normal operation, u(t) should be kept at u_max to keep the overflow probability low. If there were no limit on v(t), u(t) could be kept at u_max until y(t) = y_L, then brought down suddenly to F_b1 to keep y(t) at y_L. However, since v(t) is limited, u(t) must be decreased toward F_b1 when y(t) gets close to y_L. The threshold value of y(t) to start decreasing u(t) is calculated below.
Set the present time at 0, and denote the present level y(0) by y_0 and u(0) by u_0. Assume that u(t) is decreased at the rate v_min < 0 towards F_b1:

    u(t) = u_0 + v_min t.

The time T to bring u(t) down to F_b1 is

    T = (F_b1 - u_0)/v_min.

Then the level at T, y(T), is

    y(T) = y_0 + K_p ∫_0^T (F_b1 - u(t)) dt = y_0 + K_p ∫_0^T (F_b1 - u_0 - v_min t) dt
         = y_0 + (K_p/(2 v_min)) (F_b1 - u_0)².    (6.43)

To prevent low level limit violation, y(T) ≥ y_L must hold. Thus, the quadratic equation

    y_a(t) = y_L - (K_p/(2 v_min)) (F_b1 - u(t))²    (6.44)

determines whether u(t) must be decreased to prevent low level violation. Equation (6.44) is a parabolic graph on the (u(t), y(t)) plane. If (u(t), y(t)) is above this graph, u(t) can be increased without violating the low level limit. When (u(t), y(t)) is below the graph, y(t) may become lower than y_L even with u(t) being decreased at the maximum rate v_min, if normal operation lasts long enough. The logic of the minimum overflow probability controller (MOPC) is summarized in Figure 6.4.

    normal: if y(t) < y_L - (K_p/(2 v_min))(F_b1 - u(t))²    <- parabola a
                decrease u(t) at the rate v_min
            else
                if u(t) < u_max
                    increase u(t) at the rate v_max
                else
                    keep u(t) at u_max
                end
            end

    break:  if u(t) < u_max
                increase u(t) at the rate v_max
            else
                keep u(t) at u_max
            end

Figure 6.4: The minimum overflow probability strategy.

Figure 6.5 shows the phase plane where (u(t), y(t)) can move under the MOPC, where y_L = 0 and a is the parabola of decreasing u(t) corresponding to a in Figure 6.4. On a vertical line c, u(t) = u_max, and b is the parabola of increasing u(t) for a break starting from (F_b1, y_L). The point (u(t), y(t)) moves within the region bounded by these curves and the line c. A variation of the minimum overflow probability strategy is to keep u(t) as is when a break ends and the operation becomes normal. This way, u(t) stays constant most of the time.
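The decision rule of Figure 6.4 can be written as a short function. The sketch below is a minimal interpretation with hypothetical parameter values (level in %, flows in arbitrary units; names and numbers are mine, and the u_min clamp is omitted for brevity):

```python
def mopc_rate(y, u, is_break, Kp, y_low, Fb1, u_max, v_min, v_max):
    """Rate command v for the MOPC of Figure 6.4; v_min < 0 < v_max."""
    if not is_break:
        # Below parabola a of (6.44): start ramping down at v_min.
        if y < y_low - Kp / (2.0 * v_min) * (Fb1 - u) ** 2:
            return v_min
    # Otherwise (and always during breaks) ramp up toward u_max.
    return v_max if u < u_max else 0.0

# Illustrative numbers only.
P = dict(Kp=1.0, y_low=5.0, Fb1=0.65, u_max=2.0, v_min=-0.5, v_max=0.5)
```

Well above the parabola the rule pushes u(t) toward u_max; once the state falls below the parabola during normal operation, it ramps down at v_min.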
This variation is called the "quiet minimum overflow probability strategy" (QMOPC), and is shown in Figure 6.6.

    break:  if u(t) < u_max
                increase u(t) at the rate v_max
            else
                keep u(t) at u_max
            end

Figure 6.6: The logic of the QMOPC.

6.6 Optimal Change of u(t)

In the MOPC, u(t) is increased to u_max at the maximum allowed rate v_max to minimize the level increase rate, so u(t) changes linearly with a slope of v_max. The problem considered in this section is as follows. The initial value u_0 of u(t) and the final value u_1 are given, as is the time T to move from u_0 to u_1. What is the best way to change u(t) from u_0 to u_1 in the specified time T? Here, "best" means minimizing u(t)'s effects on the downstream process. A similar problem was treated in Dabros et al. (2003), where the performance index is the weighted sum of incremental changes in the pulp properties at the machine headbox; there, the performance index was calculated by simulations, and the best strategy was sought by adjusting parabolic curves.

The downstream process is assumed to be a properly closed self-regulating loop as in Section 4.2. This is a rough model of real processes, but it makes it possible to apply analytical tools (the Maximum Principle) and to evaluate various strategies in closed form. Let x_1(t) be the downstream process variable. As in Section 4.2, the transfer function from u(s) to x_1(s) is a high-pass filter with a cut-off frequency α. The process gain does not affect the optimum control, so for simplicity the process gain at high frequencies is set to 1:

    x_1(s)/u(s) = s/(s + α)  =>  ẋ_1(t) + α x_1(t) = u̇(t) = v(t).    (6.45)

Without losing generality, the initial values are set to zero:

    x_1(0) = 0;   u(0) = u_0 = 0.

It is required that at time T, u(T) must equal u_1 and stay at u_1 after T. It is desired to change u(t) in a manner that minimizes its effects on x_1(t).
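To see what (6.45) implies for a simple strategy, the sketch below integrates the downstream model for a linear ramp of u(t) and checks it against the standard first-order result x_1(t) = (u_1/(αT))(1 - e^{-αt}) for a ramp; the numbers are illustrative assumptions.

```python
import math

def ramp_response(alpha, u1, T, dt=1e-4):
    """Euler-integrate x1' = -alpha*x1 + v for the linear ramp u(t) = u1*t/T
    (so v = u1/T), returning x1 at t = T."""
    x1, v, t = 0.0, u1 / T, 0.0
    while t < T:
        x1 += dt * (-alpha * x1 + v)
        t += dt
    return x1

alpha, u1, T = 0.5, 5.0, 10.0
x1_T = ramp_response(alpha, u1, T)
x1_T_exact = u1 / (alpha * T) * (1.0 - math.exp(-alpha * T))  # closed form
```

The high-pass model thus turns a ramp in u(t) into a bounded, saturating excursion of x_1(t).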
The performance index is

    J = ∫_0^∞ x_1(t)² dt.    (6.46)

The control horizon is T, but J is integrated to ∞. This is to account for the effects of u(t) after u(t) stops changing. If the downstream process is properly closed, x_1(t) should return to 0 within a reasonable time. Thus, practically, there is not much difference between a well chosen finite integration span longer than T and the infinite span; the infinite span is chosen because it simplifies the mathematics. To keep u(t) constant after T, v(t) := u̇(t) = 0 for t ∈ [T, ∞]. Then

    J = ∫_0^T x_1(t)² dt + ∫_T^∞ x_1(t)² dt
      = ∫_0^T x_1(t)² dt + ∫_T^∞ [x_1(T) e^{-α(t-T)}]² dt
      = ∫_0^T x_1(t)² dt + (1/(2α)) x_1(T)².

The problem is solved by Pontryagin's Maximum Principle. The solution involves singular arcs, which demonstrate the full power of the Maximum Principle. Mathematical details are deferred to the end of this chapter, and the main results are shown here.

PROBLEM: Change the outlet flow u(t) from 0 to u_1 in a given time T so as to minimize

    J = ∫_0^∞ x_1(t)² dt,

where x_1(t) is the downstream process variable, subject to

    ẋ_1(t) = -α x_1(t) + v(t),   x_1(0) = 0,
    u̇(t) = v(t).

The manipulated variable v(t) is bounded as -v_m ≤ v(t) ≤ v_m (v_m > 0).

Theorem 6.1 The optimum outlet flow u*(t) consists of three line segments, as shown in Figure 6.7. The slopes of the first and the last segment are the same, sgn(u_1) v_m, and their common duration τ is the root of the following equation:

    T - e^{-ατ}(T - 2τ) = u_1/v_m.    (6.47)

Quadratic programming would provide similar results after discretizing the problem, but it cannot give the closed form solution provided by the Maximum Principle.

6.6.1 Asymptotic Analysis

Here the behaviour of τ and J is studied as v_m → ∞. Since τ → 0 as v_m → ∞, e^{-ατ} ≈ 1 - ατ.
Without losing generality, it is assumed that u_1 > 0. Then (6.47) becomes

    T - e^{-ατ}(T - 2τ) ≈ T - (1 - ατ)(T - 2τ) = u_1/v_m,    (6.48)

so τ is a root of the quadratic equation

    2ατ² - (αT + 2)τ + u_1/v_m = 0.    (6.49)

Figure 6.7: The graph of u*(t) at left and two responses of the downstream process variable x_1(t); x_1*(t) corresponds to u*(t), and the other corresponds to linearly changed u(t).

Thus,

    τ = [αT + 2 ± √((αT + 2)² - 8α u_1/v_m)] / (4α).    (6.50)

We pick the root that tends to zero as v_m → ∞:

    τ = [αT + 2 - √((αT + 2)² - 8α u_1/v_m)] / (4α)
      ≈ [αT + 2 - (αT + 2)(1 - 4α u_1/(v_m (αT + 2)²))] / (4α)    (6.51)
      = u_1 / (v_m (αT + 2)).

Finally, as v_m → ∞,

    τ = u_1/(v_m (αT + 2)) → 0,   and   v_m τ → u_1/(αT + 2).    (6.52)

Thus, in the limit, the optimum control v*(t) has a delta function of strength u_1/(αT + 2) at t = 0 and at t = T. Each delta function causes a jump in x_1(t):

    Δx_1 = lim_{v_m → ∞, τ → 0} ∫_0^τ e^{-αt} v_m dt = v_m τ = u_1/(αT + 2).

With this v*(t), x_1(t) has two jumps of u_1/(αT + 2), at t = 0 and t = T. So

    x_1(t) = 0 at t = 0,   and   x_1(t) = u_1/(αT + 2) for 0 < t < T.    (6.53)

Thus,

    J_min = (u_1/(αT + 2))² (T + 2/α) = u_1²/(α(αT + 2)).    (6.54)

The above expression provides the infimum of J, against which all other controls can be measured.

6.6.2 Linear u(t)

In the minimum overflow probability strategy, u(t) is changed with a constant v(t) = v_max. Let u_c(t) denote the linearly changed u(t):

    u_c(t) = u_1 t/T,

and

    x_1(t) = (u_1/(αT)) (1 - e^{-αt}).    (6.55)

Let J_c denote ∫_0^∞ x_1(t)² dt with u_c(t):

    J_c = (u_1/(αT))² [ ∫_0^T (1 - 2e^{-αt} + e^{-2αt}) dt + (1/(2α))(1 - e^{-αT})² ]
        = (u_1/(αT))² (αT - 1 + e^{-αT})/α.
Now compare J_c and J_min:

    J_c/J_min = (u_1/(αT))² ((αT - 1 + e^{-αT})/α) · α(αT + 2)/u_1²
              = (αT + 2)(αT + e^{-αT} - 1)/(αT)².    (6.56)

When αT ≫ 1, e^{-αT} ≈ 0, so J_c/J_min ≈ 1 + 1/(αT); and when αT ≈ 0, e^{-αT} ≈ 1 - αT + (αT)²/2 - (αT)³/6, so J_c/J_min ≈ 1 + αT/6. Thus, at either extreme of the αT range, J_c/J_min approaches 1, and it takes its maximum value 1.139 at αT = 2.7, as shown in Figure 6.8. In any case, the difference between the optimum u(t) and the linear u(t) is small, and for practical purposes the linear u(t) would be quite acceptable.

Figure 6.8: J_c/J_min vs αT.

6.7 Summary

The importance of broke storage tank level control for stabilization of a paper machine wet end was explained. The input disturbance F_b(t) can be modelled as a two-state continuous-time Markov process. The spectrum of F_b(t) was obtained as that of the output of a first-order low-pass filter driven by white noise. Many factors are involved in the flow smoothness requirements on u(t), and it was decided to use both Var[u(t)] and Var[v(t)], or max|u(t)| and max|v(t)|, as the flow roughness indicator.
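Both closed-form results of Section 6.6 are easy to verify numerically. The sketch below (parameter values are illustrative assumptions) solves (6.47) by bisection, compares the root with the large-v_m approximation (6.52), and locates the maximum of J_c/J_min of (6.56).

```python
import math

def tau_of(alpha, T, u1, vm, iters=200):
    """Bisection for the root of T - exp(-a*tau)*(T - 2*tau) = u1/vm
    on (0, T/2); the left-hand side is increasing in tau there."""
    lo, hi = 0.0, T / 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if T - math.exp(-alpha * mid) * (T - 2.0 * mid) < u1 / vm:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ratio(x):
    """J_c/J_min of (6.56) as a function of x = alpha*T."""
    return (x + 2.0) * (x + math.exp(-x) - 1.0) / x ** 2

# Large vm: tau should approach u1/(vm*(alpha*T + 2)), cf. (6.52).
alpha, T, u1, vm = 0.5, 10.0, 5.0, 1000.0
tau = tau_of(alpha, T, u1, vm)
tau_approx = u1 / (vm * (alpha * T + 2.0))

# The ratio peaks near alpha*T = 2.7 at about 1.139.
peak = max(ratio(0.01 * i) for i in range(10, 1001))
```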
A closed form solution is obtained, which makes it possible to compare the linear change strategy to the theoretical optimum.  6.8  Maximum Principle  The Maximum Principle (Pontryagin et a l , 1962) can be stated in various forms depending on the terminal conditions and fixed or variable control horizon. A simplified fixed control horizon version, where the end time T is prescribed, is used here following Loewen (2003). The control u(t)  must be in a control set U(t), which can be a complicated set, but here U(f) is a simple closed  6.8. Maximum Principle  interval, i.e. «i(r) <  105  u(t) < u (f). H  The following three equations are given.  system equation: x(t)  = f(x(i),u(f),t), J = £o(x(T))  performance index:  x(0) = xo +  I  (6.58)  Lo(t, x(t), u(t))dt  (6.59)  Jo  terminal condition:  Cj{x{T))  j = 1,... ,M  = 0,  (6.60)  PROBLEM: find a control u(f) e U(t), t € [0, T] that minimizes J, such that x(t) moves from x  0  to x ( T ) and satisfies the terminal condition (6.60).  The adjoint vector p(t) and the Lagrange multipliers Aj are introduced. adjoint vector: p(t)  Aj,j=  scalar Lagrange multipliers: Hamiltonian: H(t,x,p,u)  1,... ,M  := p{t) f(t,x,u)  (6.62)  -L (t,x,u)  T  Suppose a control-state pair  MAXIMUM PRINCIPLE:  (6.61)  same size as x(t)  0  (6.63)  (u*(t),x*(t)) gives the minimum  in the  problem; assume that u*(t) is piecewise continuous. Then there exist AjS and a piecewise smooth vector function  p(t)  such that H(t, x*(t),p(t),  (a) Adjoint equation:  u*(t)) is piecewise smooth, and one has  -* lTx t)=  u(tY e arg max H(t,x*  (b) Maximum condition:  veU(t)  (c) Transversality condition:  -P(T)  T  v , 5>-  +,  dUx*(T))  xr  (f),p(t),v)  dx  7=1  d£j(x*(T)) dx  In our problem, the state vector x{t) = [xi(t) u(t)Y and the control is v(r).  d dt  System equation:  X\(t)  -axi(t) + v(t)  u(t)  v(0  J=± (T) 2a u(T) - «!  
+  2  Performance index:  X l  Terminal condition:  Jo x (t) dt 2  x  =0  So  k = ^-x (Tf, x  2ar  L = (t) , 2  0  Xi  h = u{T) - (M= Ui  1)  6.8. Maximum Principle  106  H = px(-ax\  + v) + p v - x\ = v(pi + P2) - ap\X\ - x\ 2  From (a),  -Pi(0 = ~api(t) ~ 2xi(t) -HO  =  0  From the transversality condition (c)  -p(T) Asp (t)  =  2  = [- (T) a  A\] => p\(T) = --xi(T), , a  Xl  p (T)  =  2  (6.64)  0,p (t)=-A . 2  l  The maximum condition (b) is  v*(0 € argmaxiv[pi(t) + p (t)] - apx(i)x (i) 2  veU  {  - x {t)  :  x  1  With/?(0 = -Ai, 2  v*(0 G argmax {v|>i(0 - Ai]} veU  As shown above, H is linear in v(f),  so v(t) must be bounded to be able to construct the finite  optimum control. We choose KOI < v  (6.65)  m  Then  v  if p i ( t ) - A >0 v*(0 = - any valued [ - v , v ] if p i ( t ) - A i = 0 -v if p\(f)-A\ < 0 m  {  m  m  (6.66)  m  The  state trajectory where p\(t) - A\ = 0 is called the singular arc. To investigate conditions for  singular arcs, equations for p\(i) and x,(0 are studied. pxit)  =  a/?(0 + 2xi(0  (6.67)  xi(0  =  -axi(0 + v(0  (6.68)  1  On a singular arc pi(f)  = A\ ~ constant  (6.69)  6.8. Maximum Principle  107  F o r p \ ( t ) to be constant, x\(t) = -aA\/2.  For x\(t) to be constant at -aA\/2,  v(f) = -a A\. 2  is a freedom to make v(t) to v or - v fromthe singular arc because both v and -v m  m  m  OT  There  are optimum  as p\{t) - A\ follows the sign of v(r).  v(0 -> v => xi(t) T=>  T=> MO -  m  >0  v(0 -> - v => xi(t) |=> pi(0 |=> /?i(0 - Ax < 0 m  However, once  and /?i leave the singular arc, they cannot come back to it. Thus, there is only  one singular arc in v*(f). So the optimum control v*(f) consists of three parts:  forte [0,/i],v*(0 = vj = ±v 2. for t e [t t ], v*(t) = v e [-v ,v ] 3. forte [t ,T],v*(t) = v = ±v 1.  
The parameters t_1, t_2 and v_2 are determined to satisfy the following two conditions:

    ∫_0^T v(t) dt = v_1 t_1 + v_2 (t_2 - t_1) + v_3 (T - t_2) = u_1,
    p_1(T) = -(1/α) x_1(T).

Trivial solutions such as v*(t) = ±v_m can occur when T and u_1 are set up so that they are the only feasible solutions. It is assumed that the problem is set up so that v*(t) has the freedom to assume more than one value. On the singular arc, x_1(t) is constant; let x_1c be that constant. As x_1(0) = 0,

    x_1c := x_1(t_1) = v_1 ∫_0^{t_1} e^{-α(t_1 - s)} ds = (v_1/α)(1 - e^{-α t_1}) = x_1(t_2).

To keep x_1(t) constant on the singular arc, from (6.68),

    v_2 = v([t_1, t_2]) = α x_1c = v_1 (1 - e^{-α t_1}),

and to keep p_1(t) constant, from (6.67),

    p_1([t_1, t_2]) = -2 x_1c/α.

Since x_1(t_2) = x_1c, for t ∈ [t_2, T],

    x_1(t) = e^{-α(t - t_2)} (x_1c - v_3/α) + v_3/α.

So, for t ∈ [t_2, T],

    ṗ_1(t) = α p_1(t) + 2 [e^{-α(t - t_2)} (x_1c - v_3/α) + v_3/α].    (6.70)

Since p_1(t_2) = -2 x_1c/α,

    p_1(T) = -(2/α) e^{α(T - t_2)} x_1c
             + (1/α)(e^{α(T - t_2)} - e^{-α(T - t_2)})(x_1c - v_3/α)
             + (2 v_3/α²)(e^{α(T - t_2)} - 1).    (6.71)

The transversality condition (6.64) is

    p_1(T) = -(1/α) x_1(T)  =>  α p_1(T) + x_1(T) = 0.

Let τ = T - t_2. Then from (6.71),

    α p_1(T) = -2 e^{ατ} x_1c + (e^{ατ} - e^{-ατ})(x_1c - v_3/α) + 2(e^{ατ} - 1) v_3/α,

and

    x_1(T) = e^{-ατ} (x_1c - v_3/α) + v_3/α.

So

    α p_1(T) + x_1(T) = -e^{ατ} x_1c + (v_3/α)(e^{ατ} - 1) = 0.

Substitution of x_1c = (1 - e^{-α t_1}) v_1/α yields

    -e^{ατ} (1 - e^{-α t_1}) v_1/α + (e^{ατ} - 1) v_3/α = 0.

We have two cases, v_3 = v_1 and v_3 = -v_1:

    for v_3 = +v_1:  e^{α(τ - t_1)} - 1 = 0  =>  τ = t_1;
    for v_3 = -v_1:  2 e^{ατ} - e^{α(τ - t_1)} - 1 = 0  =>  τ = -(1/α) log(2 - e^{-α t_1}).

Since 2 - e^{-α t_1} > 1, log(2 - e^{-α t_1}) > 0. This makes τ non-positive.
Therefore, for τ to be positive,

    t_1 = τ   and   v_3 = v_1.    (6.72)

From the terminal condition u(T) = u_1,

    v_1 t_1 + v_2 (T - t_1 - τ) + v_3 τ = u_1.

Substitution of v_2 = v_1(1 - e^{-α t_1}), τ = t_1 and v_3 = v_1 yields

    v_1 [t_1 + (1 - e^{-α t_1})(T - 2 t_1) + t_1] = v_1 [T - e^{-α t_1}(T - 2 t_1)] = u_1.

So t_1 = τ is obtained as a root of the nonlinear equation

    T - e^{-α t_1}(T - 2 t_1) = u_1/v_1.    (6.73)

Since 0 < t_1 < 0.5 T, we have T - e^{-α t_1}(T - 2 t_1) > 0, and thus u_1/v_1 > 0. This means

    v_1 = v_3 = v_m   if u_1 > 0,   and   v_1 = v_3 = -v_m   if u_1 < 0.    (6.74)

Now with (6.72), (6.73) and (6.74), Theorem 6.1 is proved.

Chapter 7

Simulation With Industrial Data

7.1 Introduction

In this chapter, the two broke tank level controllers introduced in the previous chapter, the quiet minimum overflow probability controller (QMOPC) and the optimal linear controller C_L, are tested against an industrial data set from a paper machine via simulation. First, the data is analyzed for break and normal durations. Then the two controllers are designed based on the break flow model estimated from the industrial data, and tested by simulating with the same data.

7.2 Data

Most modern paper mills have multiple paper machines sharing more than one broke storage tank, some with complicated sharing arrangements. For this simulation study, a paper machine with a dedicated broke storage tank was selected, to make it straightforward to estimate the incoming flow from the history of the storage tank level y(t) and the outlet flow u(t).

7.2.1 Process

A simplified diagram of the broke management system of the paper machine where the data was collected is shown in Figure 7.1. Broke is diluted with water at the repulper, and fed to the broke storage tank. The consistency of the broke flow F_b(t) tends to be lower than the target consistency
The consistency is increased by the thickener, and the thick broke is stored at the broke chest. From the broke chest, the consistency is controlled (not shown) and the controlled flow, u(t), is fed to the blend chest. Another stream from the thickener is water that is rich with fines and short fibres, called "cloudy water". The cloudy water is fed to the saveall, where fines andfibresare recovered and fed to the blend chest. The outlet flow u(t), which is controlled by the valve V I , directly influences the broke chest level. The broke chest level is controlled by the recirculation valve V 2 , through which u(t) influences the broke storage tank level y ( t ) indirectly. There are some dynamics involved between u(f)  and y ( t ) due to the broke chest level controller, but for a sampling rate of 5 minutes, it can be  deemed that u(f) directly influences y ( i ) as, (7.1) The paper machine produces various grades of light papers, and the broke contains substantial amounts of short fibres, fines and additives, which are recovered by the saveall. Thus,  Ff,(t)  in  (7.1) is not a real incoming flow, but a hypothetical incoming flow seen from u(t) disregarding the material flow through the saveall and water taken out at the thickener.  Fbif) water brok<  Figure 7.1: Simplified diagram of the broke management system. Broke is diluted with water at the repulper, and pumped to the storage tank. Water is removedfromlow consistency broke stock at the thickener. The high consistency broke stock is fed to the broke chest, which level is controlled by a recirculation valve V2.  7.2. Data  7.2.2  112  Data Set  Three process variables, the broke storage tank level y(t), the broke stock flow u(f), and the break detector signal, were collected for a 6 month period starting in February 2002 with a sampling period of 5 minutes. The pattern of breaks changes a great deal from day to day as shown in Figure 7.2. 
In both graphs, it may be seen that the state of the break detector is not a good indicator for estimating the incoming flow to the broke storage tank, which is not measured. During the 6 month period, there were a number of shutdowns, scheduled and non-scheduled. The operation log was not available; therefore, shutdown-like periods were detected as periods where u(t) stays close to 0 for a prolonged time, and these were taken out of the data set prior to the following analysis and simulations.

Figure 7.2: Plots of tank level y(t) in % (bottom), outlet flow u(t) in GPM (top) and break detector (middle) for two 10 day periods. There were no breaks registered between day 50 and 53.

7.2.3 Incoming Flow Estimate

The incoming flow F_b(t) to the broke storage tank is estimated from measurements of the level y(t) and the outlet flow u(t). From a theoretical material balance point of view, F_b(t) should be binary (one value for breaks and the other for normal operation). All the published studies assumed a binary broke flow, and the same assumption is used in Chapter 6. However, a cursory study of the data reveals that F_b(t) is nowhere close to a binary flow. The binary flow assumption is therefore discarded, but it is reasoned that F_b(t) has distinct phases depending on the state of the paper machine, normal operation or break. Therefore, it is assumed that F_b(t) is piecewise-constant, but not necessarily binary. The piecewise-constant assumption makes it sensible to obtain break duration statistics from F_b(t). Sampled values of F_b(t), y(t) and u(t) are written with integer time arguments, such as y(i), when there is no confusion.

The piecewise-constant F_b(t) is reconstructed from y(i) and u(i) as follows. Suppose F_b(t) consists of a sequence of constant segments indexed by k.
Let F_b^k be the constant value for the k-th segment:

    F_b(i) = F_b^k,   for i = i_k, i_k+1, ..., i_k+n_k-1,

where i_k is the beginning of the k-th segment and n_k its length. The continuous-time model (7.1) is discretized with a sampling period Δt for each segment where F_b(i) is constant:

    y_m(i_k+j) = y_0^k + K_p Δt Σ_{m=0}^{j} [F_b^k - u(i_k+m)],   j = 0, ..., n_k-1,

where y_m(i) is the model level and y_0^k is the initial level for the k-th interval. The two parameters y_0^k and F_b^k, and the interval lengths n_k, are determined by the least squares method. Let e_i be the model error for given F_b^k and y_0^k:

    e_i := y_m(i_k+i) - y(i_k+i).

Then

    y_0^k + K_p Δt F_b^k     = y(i_k) + K_p Δt u(i_k) + e_0,
    y_0^k + 2 K_p Δt F_b^k   = y(i_k+1) + K_p Δt Σ_{m=0}^{1} u(i_k+m) + e_1,
    ...
    y_0^k + n_k K_p Δt F_b^k = y(i_k+n_k-1) + K_p Δt Σ_{m=0}^{n_k-1} u(i_k+m) + e_{n_k-1}.
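The stacked equations above define a two-parameter linear least squares problem per segment. A minimal sketch on synthetic data (variable names and numbers are mine); on data generated exactly by the model, the fit recovers y_0^k and F_b^k:

```python
def fit_segment(y, u, Kp, dt):
    """Least squares (y0, Fb) for one segment: the j-th equation is
    y0 + (j+1)*Kp*dt*Fb = y[j] + Kp*dt*sum(u[:j+1])."""
    n = len(y)
    a = [(j + 1) * Kp * dt for j in range(n)]              # regressor on Fb
    z = [y[j] + Kp * dt * sum(u[:j + 1]) for j in range(n)]
    # Solve the 2x2 normal equations by Cramer's rule.
    S1, Sa, Saa = n, sum(a), sum(ai * ai for ai in a)
    Sz, Saz = sum(z), sum(ai * zi for ai, zi in zip(a, z))
    det = S1 * Saa - Sa * Sa
    y0 = (Sz * Saa - Sa * Saz) / det
    Fb = (S1 * Saz - Sa * Sz) / det
    return y0, Fb

# Synthetic segment generated exactly by the model: y0 = 10, Fb = 3.
Kp, dt = 0.1, 5.0
u = [2.0, 2.5, 1.5, 2.0, 3.0, 2.2]
y0_true, Fb_true = 10.0, 3.0
y = [y0_true + Kp * dt * sum(Fb_true - um for um in u[:j + 1])
     for j in range(len(u))]
y0_est, Fb_est = fit_segment(y, u, Kp, dt)
```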
The level ranged between 0 feet and 52 feet during the data collection period. Different e« were tried. w  The value of 0.5 feet appeared to provide the best compromise. Figure 7.3 shows Fb(f),  u(t) and  y{i) for the same time period of Figure 7.2. There are clear abrupt changes in Fb(t), which indicate breaks. Figure 7.4 compares y(t) and y (t) for a one-day period. It shows the piecewise-constant m  Fb(f) can explain y(f) quite well.  7.2. Data  115  10002  Figure 7.3: Graphs of estimated incomingflowFb(f) (top) and level (bottom) for the same period as Figure 7.2.  7.2.4  Statistical Analysis  The disturbance to the level controller is not breaks detected by the break detector, but changes in incoming flow. Thus, the break durations and intervals that are obtained from the estimated incoming flow Fb(t) are more meaningful. Figure 7.5 shows the histogram of Fb(t). It has a peak around 300 GPM, which is the centre value of normal operation. Any F (i) above 500 GPM or so b  would likely be for breaks. However, Fbijt) is widely distributed for breaks. A number of possible  116  7.2. Data  Figure 7.4: Comparison of the level measurement^/) and reconstructed level y (i) from Fb(i) and u(i) for a one-day period. The y-axis is level in feet. m  explanations are: • Off-grade rolls are put into a repulper from time to time. The increase in Fb{i) due to offgrade rolls can vary depending on the quantity and the way they are repulped. • The paper machine produces different grammage (grades) of paper. • The operation of the thickener may change, which affects the amount offibresgoing through the saveall.  10000  6000  4000  2000  Figure 7.5: Histogram of the estimated incoming flow Fbif). The n-th bin covers 100(w - 1 ) < Fb{i) inGPM.  <  100«  7.2. Data  117  The break durations and intervals (normal durations) are analyzed with the following rule: F (i)  >B:  b  break (7.2)  Fb(f)  <B:  normal  In the following analysis, B is set to 600 GPM. 
For break durations, equivalent durations were calculated assuming that F (f) is at some constant value Bf during breaks. The equivalent break b  duration r is obtained as follows. Suppose that F (t) > B for t = [ t , t ] , then b  b  a  b  r'  b  1  T := —  F (t)dt.  b  b  That is, the equivalent break duration is a length of a hypothetical break which produces the same volume of incoming flow if the flow rate is constant at Bf. In this study, Bf is set to 3000 GPM. As is shown in Figure 7.5, 3000 GPM is the high end of F (f) for breaks. So, for most of the b  breaks, r becomes shorter than actual ones. This makes the equivalent break characteristics more b  difficult to control than real incoming flow. Thus, controllers designed for the equivalent breaks are conservative, and less likely to cause overflows.  7.2.5  Probability Plots  Probability plots are graphical tool to assess the fit of data to a theoretical distribution (Rice, 1988). Let x be a random variable with a strictly increasing probability distribution function F (X) := X  P[x < X]. Let X,- be the i-th  element of order statistics of N samples of x . That is, the N samples  are arranged in ascending order, so that x \ < x  2  < • • • < x ^ - \ < x ^ . A random variable .y = F (x) x  distributes uniformly on [0,1]. Let v,- = F (XJ). It is known that E[y,] = X  This suggests that a  graph of Fx(xi) versus i/(N +1) should look linear if indeed x has the distribution function FX(X). In our case, F {X) = 1 - e'™. Then 7, = 1 - e ' ^ ' . So, X  1 - e~ ' Ax  x —-— => logfl  N+\  *\  —j « -Axi.  N+lJ  Thus, the plot of x versus log(l - i/(N + 1)) should look linear if x, distributes exponentially. t  The slope of the plots gives an approximation of A. Figure 7.6 shows graphs plotted with break or normal durations on the x-axis and log(l - i/(N + 1)) on the v-axis. Both graphs are close to a straight line. The least squares is used to obtain the slopes and offsets of the lines. The intensity  118  7.2. 
parameter λ's are obtained from the slopes of the lines. These graphs support the exponential distribution hypothesis.

Figure 7.6: The probability plots for break durations at left and for normal durations at right. Straight lines are obtained by least squares.

7.2.6 Chi-Square Test

The chi-square test is a numerical method to accept or reject the hypothesis x ~ F_X based on the difference between the sample distribution and its expectations. The range of the random variable x is divided into n cells C_i, i = 1, ..., n, where C_i := [c_i, c_{i+1}). The number of samples O_i that lie in the i-th cell is counted. Let E_i be the expectation of O_i. If x ~ F_X,

    E_i = N ∫_{C_i} f_X(x) dx,

where f_X is the probability density of F_X. How far the sample distribution deviates from the expectation is measured by the quantity

    χ² = Σ_{i=1}^{n} (O_i - E_i)²/E_i.

If the hypothesis is correct, the sampling distribution of χ² is approximately the chi-square distribution with n - 2 degrees of freedom, so the expectation of χ² is approximately n - 2. If χ² is much larger than n - 2, the hypothesis is discarded. Figure 7.7 shows the O_i's as bar graphs together with plots of the corresponding model exponential distributions. The chi-square analysis is shown in Table 7.1. The break durations pass the chi-square test with P[χ² > 12.15] = 0.5151, but the normal durations do not, with P[χ² > 37.98] = 0.0039, although both break and normal data look good in the probability plots.

Figure 7.7: The sample distributions (bar graphs) and corresponding model exponential distributions. At left is for break and at right for normal.
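The tabulated χ² totals can be reproduced directly from the expected/observed columns of Table 7.1; the sketch below recomputes the break-duration statistic from values transcribed from the table.

```python
# Expected and observed break-duration counts from Table 7.1 (3-minute cells).
expected = [238.08, 130.73, 71.78, 39.41, 21.64, 11.88, 6.53, 3.58,
            1.97, 1.08, 0.59, 0.33, 0.18, 0.10, 0.05]
observed = [241, 146, 60, 35, 19, 10, 8, 3, 1, 2, 1, 1, 1, 0, 0]

# Pearson chi-square statistic: sum of (O - E)^2 / E over the cells.
chi2 = sum((o - e) ** 2 / e for e, o in zip(expected, observed))
```

The small gap from the reported total 12.1538 comes from the two-decimal rounding of the tabulated entries.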
Table 7.1: Chi-square test of break and normal durations

Break durations (Σχ² = 12.1538, P = 0.5151):

cell (min)   expected   observed   χ²
0-3          238.08     241        0.04
3-6          130.73     146        1.78
6-9          71.78      60         1.93
9-12         39.41      35         0.49
12-15        21.64      19         0.32
15-18        11.88      10         0.30
18-21        6.53       8          0.33
21-24        3.58       3          0.09
24-27        1.97       1          0.48
27-30        1.08       2          0.78
30-33        0.59       1          0.28
33-36        0.33       1         1.40
36-39        0.18       1         3.77
39-42        0.10       0         0.10
42-45        0.05       0         0.05

Normal durations (Σχ² = 37.9827, P = 0.0039):

cell (min)   expected   observed   χ²
0-100        134.32     168        8.45
100-200      100.15     109        0.78
200-300      74.67      62         2.15
300-400      55.68      33         9.24
400-500      41.51      33         1.75
500-600      30.95      24         1.56
600-700      23.08      17         1.60
700-800      17.21      19         0.19
800-900      12.83      18         2.08
900-1000     9.57       12         0.62
1000-1100    7.13       8          0.11
1100-1200    5.32       3          1.01
1200-1300    3.97       4          0.00
1300-1400    2.96       3          0.00
1400-1500    2.20       2          0.02
1500-1600    1.64       5          6.85
1600-1700    1.23       1          0.04
1700-1800    0.91       0          0.91
1800-1900    0.68       1          0.15
1900-2000    0.51       1          0.48

7.2.7 Summary

From the strength of the probability plots, the following model for the broke flow will be used for controller design:

λ1 = 1/(398/60) = 1/6.633 h⁻¹ (mean normal duration is 6.633 hours)
λ2 = 1/(26.2/60) = 1/0.437 h⁻¹ (mean break duration is 26.2 minutes)
F_b1 = 311 GPM (normal flow rate)
F_b2 = 3000 GPM (break flow rate)

7.3 Controller Design

Although the estimated incoming flow F_b(t) is not quite binary, for controller design purposes it is assumed that F_b(t) takes two distinct values, one for normal operation and the other for breaks. The level range of 0 to 50 feet is mapped to 0 to 100 % for the process gain K_p calculation.

7.3.1 QMOPC

The MOPC and QMOPC of Section 6.5.1 have two design parameters, the maximum outlet flow u_max and the maximum rate of change of u(t), v_max. They are specified with the mean incoming flow f_m as a unit. The mean incoming flow f_m is calculated as 477 GPM:
f_m = (F_b1 λ2 + F_b2 λ1)/(λ1 + λ2).

The overflow probabilities for both the MOPC and QMOPC are calculated from the state probability distributions obtained by the integral equation method of Section 5.4, with various combinations of the maximum outlet flow u_max and the maximum rate of change v_max, and are shown in Table 7.2. The conditional probability densities of the level y(t), φ1(y) for normal operation and φ2(y) for breaks, are numerically obtained. Then, the overflow probability is calculated as

P[y(t) > 100] = ∫_{100}^{∞} φ2(y) dy.

Table 7.2: Overflow probabilities for QMOPC and MOPC

u_max    v_max     P[y(t) > 100] in % during breaks
                   MOPC      QMOPC
2f_m     2f_m      0.0297    0.0319
2f_m     f_m       0.0358    0.0421
2f_m     0.5f_m    0.0500    0.0801
1.5f_m   2f_m      0.2887    0.3124
1.5f_m   f_m       0.3065    0.3531
1.5f_m   0.5f_m    0.3421    0.4491

During a break, u(t) < F_b2; thus, y(t) is increasing. Therefore, y(t) has local maxima at the ends of breaks. When y(t) at the end of a break is below 100 %, y(t) stays below 100 % during that break. By Theorem 5.1, the conditional probability distribution of y(t) for breaks is the same as that of y(t) at the ends of breaks. Thus, for instance, if P[y(t) > 100] = 1 %, the level y(t) stays below 100 % on average 99 times out of 100 breaks. The overflow probabilities tabulated are the conditional probabilities on breaks. The average number of breaks per day is 24/(6.633 + 0.437) = 3.39. For instance, if P[y(t) > 100] = 0.8 %, there will be, on average, 3.39 × 365 × 0.008 = 9.9 overflows per year. There is not much difference between the MOPC and QMOPC, and it was decided to use the QMOPC, which keeps u(t) constant most of the time during normal operation, and is superior to the MOPC in stabilizing the paper qualities. More combinations of u_max and v_max are tried for the QMOPC, as shown in Table 7.3.
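The design arithmetic of this subsection — the mean flow f_m, the expected break frequency, and the implied overflow rate — can be reproduced in a few lines. This is a sketch with our own variable names; the ramp time T anticipates the (u_max, v_max) = (1.4f_m, 0.5f_m) design selected in the text, assuming u(t) ramps from F_b1 up to u_max.

```python
# Stationary split of the two-state Markov chain: the fraction of time
# spent in a state is proportional to its mean sojourn time.
mean_normal_h = 398.0 / 60.0   # 6.633 h between breaks
mean_break_h = 26.2 / 60.0     # 0.437 h per break
F_b1, F_b2 = 311.0, 3000.0     # GPM: normal and break flow rates

p_break = mean_break_h / (mean_normal_h + mean_break_h)
f_m = F_b1 * (1.0 - p_break) + F_b2 * p_break
print(round(f_m))                      # 477 GPM, the design unit

# Expected break frequency and the overflow count implied by a 0.8 %
# conditional overflow probability per break.
breaks_per_day = 24.0 / (mean_normal_h + mean_break_h)
overflows_per_year = breaks_per_day * 365.0 * 0.008
print(round(breaks_per_day, 2))        # 3.39
print(round(overflows_per_year, 1))    # 9.9

# Ramp time from F_b1 to u_max = 1.4*f_m at the rate v_max = 0.5*f_m.
T = (1.4 * f_m - F_b1) / (0.5 * f_m)
print(round(T, 3))                     # 1.496 hours
```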
Table 7.3: Overflow probabilities for QMOPC

u_max    v_max      P[y(t) > 100] in % during breaks
1.4f_m   2f_m       0.611
1.4f_m   f_m        0.659
1.4f_m   0.5f_m     0.801
1.4f_m   0.25f_m    1.237
1.4f_m   0.1f_m     2.920
1.3f_m   2f_m       1.377
1.2f_m   2f_m       3.532

For simulation studies, a (u_max, v_max) combination with the overflow probability near 1 %, that is (1.4f_m, 0.5f_m), was selected. Suppose u(t) is changed from F_b1 to u_max at the rate of 0.5f_m; then it takes T = 1.496 hours. Assume that the consistency loop downstream is tuned to a time constant of 3 minutes, so it has a cut-off frequency a = 20 hour⁻¹. Substitution of these values into (6.57) yields ΔJ/J_min ≈ 0.05; so, changing u(t) at a constant rate increases the consistency squared error 5 % over the optimal case.

The probability distribution of the tank level is shown in Figure 7.8. The low level limit y_L is set to zero. During normal operation, there is a probability mass at y = y_L (y(t) stays at y_L for durations longer than zero); thus, the graphs are not probability density functions, but the probability that y(t) ∈ [iΔy, (i+1)Δy] (Δy = 1).

Figure 7.8: Probability distribution of y(t) under the QMOPC. At left, the plot marked B is the conditional distribution for break, and N is for normal without the probability mass at 0. At right, the probability distribution for normal with the probability mass at 0.

7.3.2 Linear Controller

The optimal linear controller of Section 6.4 is designed for comparison with the QMOPC. The performance index is J = Var[y(t)] + ρ²(Var[u̇(t)] + μVar[u(t)]), where ρ and μ are the two design parameters, constraining both u̇(t) and u(t). The closed-loop natural frequency ω_c is determined by ρ and K_p as ω_c = √(K_p/ρ), and μ determines the closed-loop damping factor η. When μ = 0, η = 0.5√2, and the controller minimizes Var[u̇(t)] with the largest Var[u(t)]. Increasing η with higher μ decreases Var[u(t)] but increases Var[u̇(t)].
Figure 7.9 shows the changes in the standard deviation of u(t), σ[u], and that of u̇(t), σ[u̇], for η between 0.5√2 and 5. They are shown relative to the values for μ = 0, keeping σ[y] constant. Let σ0[u] and σ0[u̇] be the standard deviations of u(t) and u̇(t) for μ = 0. The plots are σ[u]/σ0[u] (left y-axis) and σ[u̇]/σ0[u̇] (right y-axis). It is observed that σ[u̇] increases linearly with η, but the σ[u] reduction tapers off quickly. As the best compromise, η = 2 was chosen, since it yields most of the σ[u] reduction with a modest increase in σ[u̇]. The graphs were drawn for σ[y] = 20 %, but they are almost identical for other values of σ[y]. It was found from simulations with the industrial data that σ[y] = 20 % is necessary to obtain comparable performance with the QMOPC.

Figure 7.9: σ[u] decrease and σ[u̇] increase versus the damping factor η for a constant σ[y].

The controller has the following transfer function:

C_L = K_c (s + b)/(s + a).

Table 7.4 shows the controller parameters and σ[u] and σ[u̇].

Table 7.4: The controller parameters for σ[y] = 20

η       K_c      a        b       σ[u]    σ[u̇]
0.5√2   0.0807   0.1316   2.444   24.48   2.344
1       0.1121   0.2204   2.446   22.35   2.521
2       0.3232   0.7524   2.452   20.61   3.950
5       1.8145   4.4790   2.471   20.10   9.101

The probability densities of y(t) and u(t) calculated by the method of Section 5.4 are shown in Figure 7.10.

Figure 7.10: Probability density functions of y(t) at left and u(t) at right under C_L.

7.4 Simulation

The QMOPC and the linear controller C_L specified in the previous sections are tried with the estimated incoming flow F_b(t) obtained from the mill data. The controllers are designed assuming the binary incoming disturbance, but are subjected to the estimated F_b(t), which is not binary. The original data set is 180 days long, but after taking out shutdown days, the total length became 155 days.
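For simulation, C_L can be realized as a difference equation. The sketch below uses a backward-Euler discretization of the lead-lag and the η = 2 row of Table 7.4, reading the table columns as pole a and zero b; the sampling time and the steady-state check are our assumptions, not the thesis's.

```python
def make_lead_lag(kc, b, a, ts):
    # Backward-Euler discretization of C(s) = kc*(s + b)/(s + a):
    # u[k] = (u[k-1] + kc*((1 + b*ts)*e[k] - e[k-1])) / (1 + a*ts)
    state = {"u": 0.0, "e": 0.0}
    def step(e):
        u = (state["u"] + kc * ((1.0 + b * ts) * e - state["e"])) / (1.0 + a * ts)
        state["u"], state["e"] = u, e
        return u
    return step

# Parameters from the eta = 2 row of Table 7.4 (time unit: hours).
kc, a, b = 0.3232, 0.7524, 2.452
ctrl = make_lead_lag(kc, b, a, ts=0.01)

# A constant error should settle at the lead-lag's DC gain kc*b/a.
u = 0.0
for _ in range(5000):            # 50 hours of 0.01 h steps
    u = ctrl(1.0)
print(round(u, 4), round(kc * b / a, 4))   # both round to 1.0533
```

Closing the loop with an Euler step of the integrator tank model ẏ = K_p(F_b − u), a discrete-time version of (7.1), then reproduces the simulation setup used in Section 7.4.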
For both the QMOPC and C_L, y(t) and u(t) are reset to the original manual control values after each shutdown period. Then u(t) is calculated according to each controller, and y(t) is updated based on the process model, which is a discrete-time version of (7.1). Table 7.5 lists statistics of the three cases: manual, QMOPC and C_L. The linear controller C_L has a very small Var[u̇(t)] compared to the other two. Figure 7.11 plots the outlet flow u(t) of the three cases for a short period (two days) to clearly show the flow smoothness of each case.

Table 7.5: Statistics of simulations

control   specification                     y(t) (%)          u(t) (GPM)       σ[u̇(t)]
                                            max      min      max      min     (GPM/h)
manual    —                                 104.4    0.06     1004     1.8     334.8
QMOPC     u_max = 1.4f_m, v_max = 0.5f_m    102.5    1.92     668      0       88.91
C_L       η = 2, σ[y(t)] = 20               105.1    -23.7    875      154     13.7

Figure 7.11: Comparison of the QMOPC, C_L, and manual control.

7.4.1 Manual Control

The histograms of y(t) and u(t) under manual control are shown in Figure 7.12. The histogram of y(t) has a clear peak in the 35 % to 40 % range. Probably this range is preferred because there is no imminent danger of either overflowing or emptying the storage tank, so the operator does not have to pay close attention to the broke storage tank level. The histogram of u(t) has a broad peak ranging between 350 GPM and 600 GPM. However, u(t) spends substantial time at both the high and low ends, reflecting the operators' reaction to high and low y(t). Manual control yields the highest σ[u̇(t)], a result of the operators making step changes in u(t).

7.4.2 Linear Controller

The histograms for the linear controller C_L have a similar pattern to those for manual control. However, the percentage of time that u(t) spends at the high and low ends is much smaller compared to manual control. This shows some nonlinear characteristic of manual control. C_L needs the level setpoint y_sp, which was set to 30 %.
From Figure 7.10, a y_sp of 40 % seems a safe choice, but 30 % was selected to give C_L an advantage in controlling high levels. As a result, y(t) spends some time below zero. In practice, it is possible to add low-level constraining logic similar to the one for the QMOPC. However, C_L was run as a pure linear controller to see the limitations of a linear scheme.

Figure 7.12: Histograms of y(t) (left) and u(t) (right) by manual control.

C_L needs its calculated control output base-loaded with the mean incoming flow f_m, as mentioned in Section 6.4.2. It was found during the simulations that the chance of low level limit violation is sensitive to the value of f_m. This could be a problem in practice, which must be solved by some adaptive control logic adjusting the value of the base load, or by the low level constraining logic. C_L has a very small σ[u̇(t)], with more swing in u(t) than manual control or the QMOPC. As was shown in Figure 7.9, σ[u(t)] cannot be made much smaller with higher σ[u̇(t)]. This is a limitation of linear controllers. Although σ[u̇(t)] is very small, u(t) is changing all the time, as shown in Figure 7.15. The effects of this kind of broke stock flow change on paper quality and machine runnability are not known.

Figure 7.13: Histograms of y(t) (left) and u(t) (right) by C_L.

7.4.3 QMOPC

Figure 7.14: Histograms of y(t) (left) and u(t) (right) by QMOPC.

The QMOPC shows a very different pattern of behaviour from either manual control or C_L. The level y(t) is concentrated in a low range to maximize the tank capacity available at the onsets of breaks. This can be observed from the time series plots of Figure 7.16.
It is difficult to keep y(t) this low without emptying the tank by either linear or manual control. To make the QMOPC robust against low level limit violation, the low level limit y_L was set to 5 %, so y(t) is unlikely to go below zero even for a very small incoming flow F_b(t). The outlet flow u(t) is concentrated at the high limit u_max, which is considerably lower than the maximum value under both manual control and C_L. Compared to C_L, the QMOPC has a much higher σ[u̇(t)], which is, however, still much lower than that of manual control. If the rate of change in u(t) has some influence on break frequency, the QMOPC should yield less frequent breaks than manual control; however, the only way to test this hypothesis is mill trials. For practical matters, the QMOPC does not require a level setpoint, and is also not sensitive to the value of f_m for low level violation, due to the built-in logic. These characteristics of the QMOPC make it easy to implement.

7.5 Summary

The incoming flow to the broke storage tank F_b(t) was estimated from industrial data covering a 6-month period. The break and normal durations were obtained from F_b(t), and they follow the exponential distribution well. The optimal linear controller C_L and the hard-constrained nonlinear controller, the QMOPC, were designed for the statistical characteristics of F_b(t). Their overflow probabilities were calculated with the integral equation approach of Chapter 5.

Figure 7.15: Comparison of u(t) by the QMOPC and manual (upper) and the QMOPC and C_L (bottom).

The optimal linear controller produces very smooth u(t), but the maximum value of u(t) is approximately 50 % higher than that of the QMOPC. Also, selection of the level setpoint and base load could be difficult problems in practice. The QMOPC can control the level well with a smaller maximum u(t), but the rate of change (Var[u̇(t)]) is higher than that of C_L, although much smaller than that of manual control.
It has none of the implementation difficulties of C_L and is recommended for field tests.

Figure 7.16: Comparison of y(t) by the QMOPC and manual (upper) and the QMOPC and C_L (bottom).

Chapter 8 Conclusions

8.1 Summary

Averaging Level Control

A design methodology for stochastic averaging level control is proposed. The problem is formulated to minimize an outlet flow roughness indicator r(u(t)) subject to the target level variance Var[y(t)]. When r(u(t)) is a linear combination of Var[u(t)] and Var[u̇(t)], standard linear quadratic optimal control theory is applicable, which produces a state feedback controller. With a noise-free observer, an easily implementable scalar transfer function for the controller is obtained. The importance of input disturbance modelling and proper selection of the roughness indicator is highlighted.

• The random-walk model for the input disturbance, which is popular for regulatory control design, has a few shortcomings when applied to the averaging level control problem. Since Var[u(t)] may not exist, it cannot be used in r(u(t)). When r(u(t)) = Var[u̇(t)] for the random-walk input, the optimal controller is a proportional+integral (PI) controller.

• A first-order low-pass process, which is the output of a first-order low-pass filter driven by white noise, is proposed as a more realistic model for the input disturbance. When r(u(t)) is a weighted sum of Var[u(t)] and Var[u̇(t)], the optimal controller is a lead-lag network (C_L). The performance improvement of C_L over the PI controller could be substantial if the input disturbance cut-off frequency is high relative to the closed-loop level control bandwidth.
• When r(u(t)) = Var[u(t)], that is, when it is desired to minimize the swing of u(t), and the input disturbance is a first-order low-pass process, the optimal controller is of proportional+derivative type. However, compared to a proportional-only controller (P controller), the performance improvement is not substantial, and for practical purposes the P controller would be a good choice.

Since the purpose of the flow roughness indicator r(u(t)) is to express the adverse effects of u(t) on the downstream process variable y2(t), the case of r(u(t)) = Var[y2(t)] is investigated. The downstream process is modelled as a first-order high-pass filter (cut-off frequency ω_p) driven by u(t). The optimal controller for the first-order low-pass input disturbance is a second-order lead-lag network with one of its zeros at -ω_p. The performance improvement over C_L can be as much as a 40 % reduction in Var[y2(t)]. When ω_p/ω_d >> 1 (ω_d is the cut-off frequency of the input disturbance), the performance improvement is small.

Markov Jump Systems

A continuous-time Markov jump system is a hybrid system with a continuous state x(t) and a discrete state ξ(t), where ξ(t) is a continuous-time finite-state Markov chain. A new theorem is presented, which shows that the conditional probability distribution of x(t) given ξ(t) = ξ is the same as the probability distribution of x(t) sampled just before ξ(t) leaves state ξ. From this theorem, two practically important applications are derived: calculation of the state probability distributions of jump systems, and calculation of the state moments of linear jump systems. The probability density function of a filtered binary random signal is obtained analytically. Unlike approaches based on simulations, the new method has neither time-discretization approximations nor randomness due to computer-generated random numbers. This makes jump Markov system control theory more applicable to practical problems.
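For contrast, the simulation-based alternative that the analytic method replaces can be sketched as below. The first-order filter driven by a two-state Markov chain and all the constants are illustrative assumptions (chosen to echo the broke-flow model of Chapter 7); the estimate carries both discretization error and Monte Carlo noise, which is precisely what the analytic method avoids.

```python
import random

# Monte Carlo sketch: filter x' = -a*x + f(state) driven by a two-state
# continuous-time Markov chain, simulated with a crude Euler scheme.
random.seed(0)
a = 1.0                                   # filter cut-off (per hour)
lam = {0: 1.0 / 6.633, 1: 1.0 / 0.437}    # exit rates of the two states
flow = {0: 311.0, 1: 3000.0}              # signal value in each state

dt, steps = 0.01, 200_000                 # 2000 hours of 0.01 h steps
state, x = 0, flow[0] / a
total = 0.0
for _ in range(steps):
    if random.random() < lam[state] * dt: # approximate jump probability
        state = 1 - state
    x += (-a * x + flow[state]) * dt
    total += x
mean_x = total / steps
# The stationary mean of x is f_m/a, roughly 477 here, but any single
# run only approximates it.
print(round(mean_x))
```

Estimating a full probability density this way requires far longer runs and still leaves sampling noise; the thesis's integral-equation method yields the density directly.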
Broke Storage Tank Level Control

Sheet breaks cause the biggest upsets in paper machine operation. The broke storage tank is the first process to receive the upsets, and managing the broke tank level properly goes a long way towards stabilizing the paper machine wet end. The incoming flow to the broke storage tank F_b(t) can be modelled as a two-state continuous-time Markov chain. The spectrum of F_b(t) is obtained, which is that of a first-order low-pass process. It is difficult to find a good flow roughness indicator r(u(t)), because there is no identifiable single downstream process; however, a weighted sum of Var[u(t)] and Var[u̇(t)] is adopted as r(u(t)). Then, the optimal controller is a lead-lag network C_L. It was realized that since F_b(t) during breaks, F_b2, is so much larger than the normal flow F_b1, it is not practical to design a controller that can keep the level below the high limit with probability 1. A practical nonlinear controller with hard constraints on the outlet flow u(t) and its rate of change u̇(t) is presented. The controller, which is called the MOPC, minimizes the overflow probability and guarantees that the level stays above the low limit y_L. The new method for calculating the state distribution of Markov jump systems makes the design process straightforward, without relying on simulations.

The actual incoming flow to a broke storage tank F_b(t) was estimated from the tank level and outlet flow measurements of a paper machine. It was confirmed that the Markov hypothesis for F_b(t) does hold for the industrial data. The MOPC is compared to the optimal linear controller and manual control with the industrial data. The MOPC controls the tank level better than both the linear controller and manual control, with a smaller maximum outlet flow. It is recommended to conduct a mill trial of the MOPC.
The optimal change strategy for u(t), where u(t) is changed from its initial value u0 to the final value u1 in a given time T, is investigated. The performance index is the integrated squared error of the downstream process variable. With a simple downstream process model, the problem is solved with the Maximum Principle's singular control. A closed-form solution is obtained, which makes it possible to compare the linear change strategy to the theoretical optimum. For the process parameters of usual broke storage tank systems, it is shown that the linear change strategy is not much different from the theoretical optimum.

8.2 Future Research

Constrained Stochastic Control

In Chapter 3, the problem was formulated to minimize some outlet flow roughness indicator r(u(t)), such as Var[u̇(t)], subject to a given Var[y(t)]. To reflect the true nature of the averaging level control problem, the problem should be formulated to minimize r(u(t)) subject to y_L ≤ y(t) ≤ y_H. However, there are a few obstacles to this approach.

• To guarantee y_L ≤ y(t) ≤ y_H with probability 1 with finite u(t), the input disturbance must be modelled as a bounded stochastic process. This should not be a problem, because all actual input disturbances are bounded; however, the random walk and Gaussian random processes, which are unbounded, are popular choices due to their mathematical tractability.

• The problem itself is very difficult, as existing optimization techniques, such as quadratic programming, are not directly applicable.

The present constrained control assumes step-like deterministic disturbances. In some process control applications, this assumption is quite different from reality, and a performance improvement would be possible if a controller could be designed for a more realistic disturbance model. The problem is tough, and probably needs a theoretical breakthrough or two before effective solutions become available.
Markov Jump Systems

• The input disturbance is modelled as a piecewise constant random process. Inclusion of continuous noise (a Wiener process) in the disturbance term would make the model more useful for some applications. More sophisticated mathematics is necessary, but the extension should be straightforward.

• The integral equation for the state distribution is solved with a simple discretization method. A more elaborate method that makes it possible to obtain accurate solutions of the integral equation with fewer grid points is desirable. This would shorten calculation times and lessen memory requirements for higher-dimensional systems.

• It would be helpful if the error bounds of numerical solutions of the integral equation were known. Then the grid size could be selected according to the required accuracy.

Broke Storage Tank Level Control

Although the MOPC has been tested against the industrial data, a number of improvements may be necessary to make the MOPC accepted in the field.
Therefore, the MOPC may have to be equipped with some adaptive mechanism to change the maximum outlet flow u  max  or the ramp rate based on the situation.  The above two problems do not seem insurmountable with the present technology, and field tests would provide a definite answer about the practicality and effectiveness of the MOPC.  Bibliography Allison, B. J. and M. Khanbaghi (2001). Model predictive optimal averaging level control with state estimation. In: Proceedings  of the IASTED  International  Conference:  Control  and  Ap-  plications. Banff, Alberta, Canada, pp; 88-93. o  Astrom, Karl (1970). Introduction  to Stochastic Control Theory. Academic Press. New  York, NY.  Batina, Ivo, Anton Stoorvogel and Siep Weiland (2002). Optimal control of linear, stochastic systems with state and input constraints. In: Proc. 41st IEEE  CDC. Las Vegas,  NV.  Bonhivers, J-C, M. Perrier and J. Paris (1999a). Management of broke ratio in an integrated paper mill. Post-Graduate Research Laboratory Report PGRLR725. Pulp and Paper Research Institute of Canada. Bonhivers, J-C, M. Perrier and J. Paris (19996). Model predictive control for integrated management of Whitewater tanks. Post-Graduate Research Laboratory Report PGRLR724. Pulp and Paper Research Institute of Canada. Bussiere, S., A. Roche and J. Paris (1987). S,tudy of broke handling and white water management using a dynamic simulation. Pulp  & Paper Canada  88(11), 74-77.  Campo, P.J. and M. Morari (1989). Model predictive optimal averaging level control.  AICheE  Journal 35(4), 579-591. Cheung, T. and W.L. Luyben (1979a). A further study of the pi level controller. ISA 18(2), 73-77.  135  Transactions  136  Bibliography  Cheung, T. and W.L. Luyben (19796). Liquid-level control in single tanks and cascades of tanks with proportional-only and proportional-integral feedback controllers. Ind. dam. 18(1), 15-21.  Eng. Chem. Fun-  p . . u  Costa, O. L. V., E.O. Assumpcao Filho, E.K. Boukas and R. R Marques (1999). 
Constrained quadratic state feedback control of discrete-time Markovian jump linear systems. Automatica 35, 617-626.

Crisafulli, S. and R.D. Peirce (1999). Surge tank control in a cane raw sugar factory. Journal of Process Control 9, 33-39.

Croteau, A.P. and A. A. Roche (1987). Study of broke handling and white water management using a dynamic simulation. Pulp & Paper Canada 88(11), 74-77.

Cutler, C.R. (1982). Dynamic matrix control of unbalanced systems. ISA Transactions 21(1), 1-6.

Dabros, Michal, Michel Perrier, Fraser Forbes, Martin Fairbanks and Paul Stuart (2003). Improving the broke recirculation strategy in a newsprint mill. In: Proc. PAPTAC 89th Annual Meeting. Montreal, PQ.

Doss, J. E., T. W. Doub, J. J. Downs and E. F. Vogel (1983). New directions for process control in the eighties. In: AIChE Meeting. Washington, DC.

Feller, William (1968). An Introduction to Probability Theory and Its Applications I, Third Edition. John Wiley & Sons. New York, NY.

Feller, William (1971). An Introduction to Probability Theory and Its Applications II, Second Edition. John Wiley & Sons. New York, NY.

Fleming, W. H. and R. W. Rishel (1975). Deterministic and Stochastic Optimal Control. Springer-Verlag. New York, NY.

Foley, M. W., K. E. Kwok and B. R. Copeland (2000). Polynomial LQ solution to the averaging level control problem. In: 16th International Conference on CAD/CAM, Robotics and Factories of the Future. Port of Spain, Trinidad, West Indies.

Gess, J.M. and R.A. Kanitz (1996). Monitoring the stability of a paper machine. Tappi Journal 79(4), 119-126.

Grimmett, G. R. and D. R. Stirzaker (1992). Probability and Random Processes, 2nd edition. Oxford University Press. Oxford, UK.

Harris, T. J. and J. F. MacGregor (1987). Design of multivariable linear-quadratic controllers using transfer functions. AIChE Journal 33(9), 1481-1495.

Ji, Yuandong and Howard Jay Chizeck (1990).
Controllability, stabilizability, and continuous-time Markovian jump linear quadratic control. IEEE Trans. Automat. Contr. 35(7), 777-788.

Kao, Edward P. C. (1997). An Introduction to Stochastic Processes. Duxbury Press. Belmont, CA.

Kelly, J. D. (1998). Tuning digital PI controllers for minimal variance in manipulated input moves applied to unbalanced systems with delay. The Canadian Journal of Chemical Engineering 76, 967-975.

Khanbaghi, M., R. Malhame and M. Perrier (2002). Optimal white water and broke recirculation policies in paper mills via jump linear quadratic control. IEEE Trans. Control Systems Technology 10(4), 578-588.

Khanbaghi, M., R. Malhame, M. Perrier and A. Roche (1997). A statistical model of paper breaks in an integrated TMP-newsprint mill. Journal of Pulp and Paper Science 23(6), 282-288.

Khanbaghi, Maryam (1998). Un formalisme de systèmes à sauts pour la recirculation optimale des casses dans une machine à papier. PhD thesis. École Polytechnique de Montréal.

Kropholler, H., Wiseman, N. and Harris, E. (1994). Theory and practice in modelling paper machine white water systems. Appita 47(4), 301-304.

Lindstrom, R., W.H. Manfield, A.F. Tkacz and J. Mardon (1994). Coping with an avalanche of breaks. Appita 47(2), 163-172.

Liu, Zikuan and Fengsheng Tu (1999). Single sample path-based sensitivity analysis of Markov processes using uniformization. IEEE Trans. Automat. Contr. 44(4), 872-875.
AICheE Journal  32(1), 75-86. Morari, M. and E. Zafiriou (1989). Robust Process  Control. Prentice-Hall, Inc. Englewood Cliffs,  N.J. Newton, G. C , L. A. Gould and J. F. Kaiser (1957). Analytical  Design of Linear Feedback Con-  trols. John Wiley & Sons, Inc. New York, NY. Ogawa, Shiro (1995). PI controller tuning for robust performance. In: Proc. 4th IEEE  on Control Applications.  Conference  Albaby, N.Y.  Ogawa, Shiro, Bruce Allison, Guy Dumont and Michael Davies (2002). A new approach to averaging level control. In: Proc. Control Systems 2002. Stockholm, Sweden. Orccotoma, J.A., D. Stiee, J. Paris and M. Perrier (1997a). Dynamics of fines distribution in a white-water nework. Pulp e> Paper  Canada 98(9), 77-80.  Orccotoma, J.A., J. Paris, M. Perrier and A. A. Roche (19976). Dynamics of white water networks during web breaks. Tappi Journal 80(12), 101-110. Pontryagin, L. S., V. G. Boltyanskii, R. V. Gamkrelidze and E. F. Mishchenko (1962).  matical Theory of Optimal Processes^.  Interscience Publishers. New York, NY.  The Mathe-  Bibliography  139  Rice, John A. (1988). Mathematical  Statistics and Data Analysis.  Wadsworth & Brooks/Cole Ad-  vanced Books and Software. Pacific Grove, California. Shinskey, F. G. (1967). Process Control Systems. McGraw-Hill, Inc. New York, NY. Shinskey, F. G. (198 8).  Process Control Systems, Third Edition. McGraw-Hill, Inc. New  Shinskey, F. G. (1994). Feedback  Controllers for the Process Industries.  York, NY.  McGraw-Hill, Inc. New  York, NY. Shunta, J. P. and W. Fehervari (1976). Nonlinear control of liquid level. Instrumentation  Technol-  ogy 23, 43^18. Sidhu, Manpreet S. (2003). Master thesis, multivariable averaging level control. University of British Columbia. Stewart, W. J. (1994). Introduction  to the Numerical  Solution of Markov Chains.  Princeton  Uni-  verstiy Press. Princeton, New Jersey. Sworder, David D. (1969). Feedback control of a class of linear systems with jump parameters.  IEEE Trans. Automat.  Contr. 
14(1), 9-14.

Sworder, David D. (1976). Control of systems subject to sudden change in character. Proc. IEEE 64(8), 1219-1225.

Sworder, David D. and Lin L. Choi (1976). Stationary cost densities for optimally controlled stochastic systems. IEEE Trans. Automat. Contr. 21(4), 492-499.

Sworder, David D. and Ronald O. Rogers (1983). An LQ-solution to a control problem associated with a solar thermal central receiver. IEEE Trans. Automat. Contr. 28(10), 971-978.

Taylor, A. R. and S. R. Duncan (2002). A comparison of techniques for monitoring process faults. In: Proc. Control Systems 2002. Stockholm, Sweden.

Tijms, H. C. (1994). Stochastic Models: An Algorithmic Approach. John Wiley & Sons. Chichester, West Sussex, England.

Wonham, W. M. (1970). Probabilistic Methods in Applied Mathematics, A. T. Bharucha-Reid, ed. Chap. Random Differential Equations in Control Theory, pp. 191-199. Vol. 2. Academic Press. New York, NY.
