UBC Theses and Dissertations

Application of the extended Kalman filter to enzyme reactions Lai, Henry K. H. 1993


Full Text
APPLICATION OF THE EXTENDED KALMAN FILTER TO ENZYME REACTIONS

By

Henry Kwok-Hei Lai
B.Sc. (Physics), Imperial College, University of London, London, England

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF MATHEMATICS, INSTITUTE OF APPLIED MATHEMATICS

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
August 1993
© Henry Kwok-Hei Lai, 1993

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Mathematics
The University of British Columbia
Vancouver, Canada

Abstract

In monitoring biological processes, measurement of key variables is often impeded by the lack of reliable on-line measurements of biomass, substrate, and product concentrations, and by the difficulty of properly modeling biological activity in processes that are nonlinear and time-varying.

One approach to solving this problem involves the development of state estimation techniques that use available measurements to reconstruct the evolution of state variables or to estimate the bioprocess parameters. The use of filtering theory for state estimation provides a means of incorporating a deterministic model into a method of forecasting future states which includes the probabilistic uncertainties of both the system and the measuring devices.

The state estimation technique we use is the discrete extended Kalman filter. This method allows the use of an approximate model and partial measurements of process variables.
To study the application of the discrete extended Kalman filter to a biological problem, we used two different enzyme reactions as model systems. The first model problem is a simulation of cellulose hydrolysis with enzyme inactivation. The second model problem is a simulation of the separation of a chiral substance into its respective enantiomers. In the first problem, the hydrolysis of cellulose, a slowly varying parameter is tracked after correctly choosing the model error covariance matrix. In the second problem, we reconstruct the complete composition of the system from a partial set of measurements. Although these model problems are simple cases involving only single enzymes, the extended Kalman filter can be applied to more complex systems of enzymes.

Table of Contents

Abstract
Table of Contents
List of Figures
List of Tables
Acknowledgement
1. Introduction
1.1. Enzymes
1.2. Biological Processes
2. Derivation of the Rate Equation
2.1. The Law of Mass Action
2.1.1. Single Enzyme-Substrate Reaction
2.1.2. Two-Enzyme System
2.2. Quasi-Steady-State Approximation
2.2.1. Single Enzyme-Substrate Reaction
2.2.2. Two-Enzyme System
2.3. Singular Perturbation Treatment
2.4. Validity of the Quasi-Steady-State Approximation
2.5. Enzyme-Substrate-Inhibitor System
3. Filtering Theory
3.1. Probability Preliminaries
3.2. Kalman Filter Theory
3.2.1. System Description
3.2.2. Noise Description
3.2.3. Derivation of Kalman Filter Algorithm
3.2.3.1. Lemma 1
3.2.3.2. Lemma 2
3.2.3.3. Lemma 3
3.2.3.4. Lemma 4
3.2.3.5. Lemma 5
3.2.3.6. Lemma 6
3.2.4. Kalman Filter Algorithm
3.2.5. Alternative Method of Derivation of Kalman Filter
3.2.5.1. Innovation Process
3.2.5.2. Lemma 7
3.2.5.3. Theorem 5
3.3. Extended Kalman Filter Theory
3.4. More About Error Covariance Matrices
3.5. Exact Filter
4. Enzyme Deactivation
4.1. Reactions and Kinetic Equations
4.2. On-Line Estimation of Enzyme Activity Using the Extended Kalman Filter
4.2.1. System Model
4.2.2. Noiseless Measurements
4.2.2.1. Generation of Measured Data
4.2.2.2. Numerical Results
4.2.3. Small Noise Measurements
4.2.3.1. Generation of Measured Data
4.2.3.2. Numerical Result
4.2.4. Large Noise Measurements
4.2.4.1. Generation of Measured Data
4.2.4.2. Numerical Result
4.2.5. The Choice of Model Error Covariance Matrix
4.2.5.1. Noiseless Measurement
4.2.5.2. Small Noise Measurement
4.2.5.3. Large Noise Measurement
4.2.5.4. Conclusion
4.3. Iteration of Extended Kalman Filter
4.3.1. Numerical Example
5. Separation of Enantiomers
5.1. Enzyme-Substrate Reaction Rate Equation
5.2. Enantiomeric Ratio
5.3. Conversion
5.4. Perturbation Analysis
5.5. Construction of Difference Equation from 1st-Order Perturbative Solution
5.5.1. Forward Propagating Equation
5.5.2. Backward Propagating Equation
5.6. On-Line Estimation of Substrate Concentration by Extended Kalman Filter
5.6.1. Defining the Problem
5.6.2. Generation of Measured Data
5.6.3. Numerical Result
5.6.4. Filter Divergence
5.6.5. Correction of Filter Divergence
5.7. An Example of Filter Divergence
5.7.1. Example
5.8. Estimation by Exact Filter Algorithms
5.8.1. Dynamical and Measurement Model in One Dimension
5.8.2. Numerical Results
6. Discussion and Future Research
6.1. Discussion
6.2. Future Research
Appendix 1
Appendix 2
Bibliography

List of Figures

Fig. 1 Noiseless measurements of concentration of substrate A
Fig. 2 Error in estimated values of A from exact measurement of A
Fig. 3 Error in estimated values of rm
Fig. 4 Small noise measurements of concentration of substrate A
Fig. 5 Error in estimated A using small noise measurements
Fig. 6 Estimated rm and exact rm in small noise measurement
Fig. 7 Large noise measurements of concentration of substrate A
Fig. 8 Estimated A and exact A in large noise measurement
Fig. 9 Estimated rm and exact rm in large noise measurement
Fig. 10 Absolute percentage error of estimated rm as function of covariance
Fig. 11 Relative percentage error of estimated rm as function of covariance
Fig. 12 Absolute percentage error of estimated rm as function of covariance
Fig. 13 Relative percentage error of estimated rm as function of covariance
Fig. 14 Absolute percentage error of estimated rm as function of covariance
Fig. 15 Relative percentage error of estimated rm as function of covariance
Fig. 16 The estimated values of rm for the original system model (4.5) and the modified system model (4.6)
Fig. 17 Numerical solution of rate equation of the enantiomer
Fig. 18 Exact and perturbation solutions of A
Fig. 19 Exact and perturbation solutions of B
Fig. 20 Numerical solutions of substrate A from the forward propagating equations and the 4th-order Runge-Kutta scheme
Fig. 21 Numerical solutions of substrate B from the forward propagating equations and the 4th-order Runge-Kutta scheme
Fig. 22 Numerical solutions of the forward and backward propagating equations for substrate A
Fig. 23 Numerical solutions of the forward and backward propagating equations for substrate B
Fig. 24 Enantiomeric ratio calculated from the estimated concentrations of substrates A and B
Fig. 25 Exact and estimated concentrations of substrate A
Fig. 26 Exact and estimated concentrations of substrate B
Fig. 27 Residue of A and predicted standard deviation from the EKF
Fig. 28 Residue of B and predicted standard deviation from the EKF
Fig. 29 Exact and estimated concentrations of substrate A
Fig. 30 Exact and estimated concentrations of substrate B
Fig. 31 Residue of substrate A and predicted standard deviation from the EKF
Fig. 32 Residue of substrate B and predicted standard deviation from the EKF
Fig. 33 Estimated enantiomeric ratio vs. conversion
Fig. 34 The estimated concentrations of substrate B obtained by the exact filter and the EKF
Fig. A.1 Numerical solutions of the forward and backward equations using (A.2.11) and (A.2.12)
Fig. A.2 Numerical solutions of the forward and backward equations using (A.2.11) and (A.2.13)

List of Tables

Table 1 Absolute and relative percentage errors of estimated A and rm

Acknowledgement

First, I would like to extend my thanks to my supervisor Prof. Robert M. Miura. Throughout my research, he guided me at every step and spent hours reading my thesis. My thanks also go to Dr. Tom Stokes; he not only introduced enzyme kinetics to me, but helped me with the technical aspects of enzyme kinetics throughout my research. I am grateful to Professor Brian Seymour for his valuable time in reading this thesis. I am also grateful for financial support from the B.C. Science Council and Pacific Al in the form of the G.R.E.A.T. award.

1. Introduction

1.1. Enzymes

Enzymes are highly specialized proteins that catalyze the chemical reactions in all living cells. They operate as organic catalysts by lowering the energy barrier for chemical reactions. Indeed, in the absence of enzyme catalysts, many biochemical processes would proceed far too slowly to maintain life. Not all enzymes are within cells, e.g., enzymes aid in the digestion of food in the stomach and intestines. For more details the reader is referred to [1].

Since enzymes are proteins, they are sensitive to temperature. Increasing the temperature generally increases the reaction rate. However, above a certain temperature, the enzyme becomes damaged or denatured, and loses its catalytic ability.

All chemical reactions are reversible. If an enzyme can break a substrate (a substrate consists of the molecules that the enzymes act upon) A into its products B and C, then the same enzyme is equally capable of catalyzing the reverse reaction. Enzymes are true catalysts and do not alter the chemical equilibrium.
The enzyme catalytic action merely reduces the time needed to reach this state.

There are several theories to explain why enzymes are specific and how they operate. All of these theories focus on the three-dimensional structure of the enzyme. The simplest theory assumes the so-called "lock-and-key" hypothesis. It postulates that a substrate molecule (the "key") attaches itself to an active site (the "lock") of an enzyme molecule, forming a temporary complex. The active site has a particular shape, so only a substrate with the complementary shape can attach itself to this site. Similarly, a lock only accepts a key with the correct shape. Hence, molecules with different shapes cannot attach to the active site. Excessive heat causes the structure of the enzyme to change. This distorts the active site, causing a loss of enzyme activity. (Note: a denatured enzyme may still bind to the substrate.) That is why the catalytic ability of an enzyme is decreased if the temperature is raised too much.

However, there may be other compounds which are close enough in shape to the substrate to fit into the active site of the enzyme. These alien substrates react slowly and compete with the true substrate for active sites. This inhibits the enzyme's activity, i.e., the ability of the enzyme to combine with the substrate to form an enzyme-substrate complex (enzyme and substrate are held together by physical forces).

There are two types of inhibition: reversible inhibition and irreversible inhibition. In reversible inhibition, an alien molecule only forms a temporary bond with an enzyme. In irreversible inhibition, an alien inhibitor molecule either permanently blocks an enzyme's active site or affects this site by binding to a site elsewhere on the enzyme. Many inhibitors, particularly of the noncompetitive type, act as poisons. Cyanide, which binds with an enzyme necessary for cellular respiration, is a good example.

1.2.
Biological Processes

Monitoring and controlling biological processes is often impeded by the lack of reliable on-line measurements of biomass, substrate, and product concentrations, and by the difficulties in properly modeling biological activity in processes that are nonlinear and time-varying. One approach to solving these problems is to develop state estimation techniques that use available measurements to reconstruct the evolution of state variables or to estimate the bioprocess parameters. The use of filtering theory for state estimation provides a means of embedding a deterministic model into a method of forecasting future states which incorporates the probabilistic uncertainties of both the system and the measuring devices.

The state estimation technique we use is the discrete extended Kalman filter. This method allows the use of an approximate model and partial measurements of process variables. To study the application of the discrete extended Kalman filter to a biological problem, we use two different enzyme reactions as model systems. The first model problem is a simulation of cellulose hydrolysis with enzyme inactivation. The second model problem is a simulation of the separation of a chiral substance into its respective enantiomers.

In the problem of hydrolysis of cellulose, the value of a slowly varying parameter is estimated from a set of measurements that assumes the model equations of the dynamical system contain errors. For this problem, we have studied the effects of different amounts of noise in the measurements and iteration of the extended Kalman filter. The accuracy of the estimated value of the slowly-varying parameter decreases as the noise in the measurements increases and cannot be improved by a better choice of model error covariance matrix (the variance of the error in the dynamical equations).

Secondly, we find iteration of the extended Kalman filter on its output does not improve the accuracy of the estimated value of the slowly-varying parameter.
However, we show in Chapter 4 that iteration of the model equations from the output of the extended Kalman filter could improve significantly the accuracy of the estimated value of the slowly-varying parameter.

In the second problem, we reconstruct the composition of a chemical system from a partial set of measurements using model equations of the dynamical system which contain no error. Both the extended Kalman filter and exact filter are applied to this problem. In applying the extended Kalman filter to monitor this process, we find that if the model error covariance is chosen to be too small, then it will cause the filter to diverge. This phenomenon is known as filter divergence, which is discussed in Chapter 5.

In applying the exact filter to this process, we find a faster rate of convergence to the exact value than the extended Kalman filter when there are errors in the initial concentration of the enantiomers. However, it is feasible to compute the exact mean and variance of these estimated concentrations only in scalar problems.

Although these model problems are simple cases involving only single enzymes, the extended Kalman filter can be applied to more complex systems with several enzymes.

2. Derivation of the Rate Equation

Enzyme kinetics is a tool for studying enzyme mechanisms, and the rate equations, which represent the behavior of the enzymatic reactions, are generally expressed as a set of nonlinear differential equations. To obtain the analytical solution of these rate equations is generally impossible. This makes it necessary to introduce approximations in order to analyze the experimental data in enzyme kinetics. One of the most common approximation schemes is the quasi-steady-state approximation. In this chapter we will look at the quasi-steady-state approximation from the singular perturbation point of view [2] and examine the validity of this approximation scheme [3].

2.1.
The Law of Mass Action

Chemical kinetics deals with the rate of chemical change of molecules in a reaction system. A chemical reaction in this system leads to an instantaneous difference in concentration of reactant molecules. A chemical reaction occurs when two or more molecules collide under appropriate conditions. According to [4], trimolecular reactions are unlikely. The law of mass action states that the rate of molecular collisions of two chemical species in a dilute gas is proportional to the product of the concentrations of the two chemical species. As a result, a rate equation of a chemical reaction can be derived based on the law of mass action.

Throughout this chapter, we adopt the following notation. Roman capital letters represent chemical species, and a square bracket around a Roman capital letter represents the concentration of that chemical species.

2.1.1. Single Enzyme-Substrate Reaction

The study of the single enzyme-substrate reaction dates back to 1902 when Brown (see Fersht [1]) published his work on the enzyme-catalyzed hydrolysis of sucrose by invertase. In 1988, Segel [3] derived a validity condition for the quasi-steady-state approximation of this reaction. According to Michaelis and Menten in 1913 (see [1]), the basic enzyme-substrate reaction is described by the reactions

E + S ⇌ C → E + P,

with rate constants k_1, k_{-1} for the reversible first step and k_2 for the second step, where E is the enzyme, S is the substrate, C is the intermediate enzyme-substrate complex, and P is the product.

The rate equations obtained by using the law of mass action are

d[E]/dt = -k_1[E][S] + k_{-1}[C] + k_2[C],    (2.1a)
d[S]/dt = -k_1[E][S] + k_{-1}[C],    (2.1b)
d[C]/dt = k_1[E][S] - k_{-1}[C] - k_2[C],    (2.1c)
d[P]/dt = k_2[C],    (2.1d)

with initial conditions

[E](0) = E_0, [S](0) = S_0, [C](0) = 0, [P](0) = 0.

We note that the system (2.1) is nonlinear and cannot be solved analytically. However, there are two conservation laws given by

[E](t) + [C](t) = E_0,    (2.2a)
[S](t) + [C](t) + [P](t) = S_0.
(2.2b)

Employment of (2.2) to eliminate [E] in (2.1) yields the following simplified system of differential equations,

d[S]/dt = -k_1(E_0 - [C])[S] + k_{-1}[C],
d[C]/dt = k_1(E_0 - [C])[S] - (k_{-1} + k_2)[C],    (2.3)
[S](0) = S_0, [C](0) = 0.

Note that (2.1d) can be solved for [P] once [C] is known, or [P] can be obtained algebraically from (2.2).

The system (2.3) is still nonlinear and cannot be solved analytically. However, in the next section a quasi-steady-state approximation can simplify this system to a manageable form.

2.1.2. Two-Enzyme System

An example of a two enzyme-substrate reaction is cellulose hydrolysis, which produces glucose from cellulose and in which there are at least two different enzymes involved in the reaction.

To simplify the discussion, we consider the following schematic reaction.

A + E ⇌ X_1 → B + E    (rate constants k_1, k_{-1}; k_2)
B + E ⇌ BE    (rate constants k_3, k_{-3})
B + E′ ⇌ X_2 → C + E′    (rate constants k′_1, k′_{-1}; k′_2)
C + E′ ⇌ CE′    (rate constants k′_3, k′_{-3})

where A, B, and C are the substrates; X_1, X_2, BE, CE′ are the enzyme-substrate complexes; and k_1, k_{-1}, k_2, k_3, k_{-3}, k′_1, k′_{-1}, k′_2, k′_3, k′_{-3} are the rate constants.

The rate equations obtained by using the law of mass action are

d[A]/dt = -k_1[A][E] + k_{-1}[X_1],    (2.4a)
Adding (2.4)(b) ,(c), (h) and integrating, weget the first conservation law[E] + [X,]+ [BE] = Eo.^ (2.5)Similarly, adding (2.4)(0, (g), (i) and integrating, we obtain the secondconservation law[El -F[X2]+[CET= Po.^ (2.6)Since (2.4) have many parameters and is very complicated, we will apply thequasi-steady-state approximation to simplify it.dt92.2. Quasi-Steady-State ApproximationThe quasi-steady-state approximation was proposed by Briggs and Haldane in1925 (see [1]). The steady-state treatment is based on the hypothesis that theconcentration of certain reaction intermediates, such as the enzyme-substratecomplex does not change rapidly during the course of the reaction.Mathematically speaking, if [ES] is the concentration of the intermediateenzyme-substrate complex then d[S]idt))dIESVdt 0. In the next section, asingular perturbation technique will be applied to justify this statement for asingle enzyme-substrate reaction. Below we will apply the quasi-steady-stateapproximation to the rate equations for single and double enzyme-substratereactions developed in the last section.2.2.1. Single Enzyme-Substrate ReactionSince the rate of the change of the enzyme-substrate complex is close to zero,as a first-order approximation, we can set d[C]/dt=0 in (2.1c) and use theconservation law (2.2a) to express [E] in terms of [C] and E0. Then [C] can beexpressed as[Cl ^k, E0 [S] (k + k2) + k, [S]Substituting (2.7) into the rate equation (2.1d), we get(2.7)dill _  V [S] ^dt^K. + [S]where K. = (k2 + kilki and V =k, Eo. Finally combine d[C]idt=0 and theconservation law (2.2b) to obtain^d[S] ^d[P]^V[S] ^dt^dt^K. +[S] •(2.8)(2.9)This equation can be solved exactly using the method of separation ofvariables and yields the implicit solutionEo[E] =1+ [A] + [B]K,(2.13)101t = --{K lnV(SV^m^, [S])+ (se —LS-1)}.(2.10)2.2.2. 
Two-Enzyme SystemTo apply the quasi-steady-state approximation to (2.4), we have to set therates of change of the concentrations of all intermediate enzyme-substratecomplexes to zero, i.e.,d[X1] = d[X2] d[BE]d[CE'l 0dt^dt^dt^dt(2.10The trick is to express the concentrations of all the intermediate enzymecomplexes in terms of [A], [B], [C], [E], and [El. From 2.4(c), (f), (h), (i),and (2.11), we get[X1] = 1^^[A][E],k_, +k2k; [X21=^ [B][E],' +k',[BE[ = k3 [B][E],k_3k'[CE']= 3 [C][E'[.(2.12a)(2.12b)(2.12c)(2.12d)If we combine the conservation law (2.5) with 2.12(a), (c), then [E] can beexpressed in terms of [A], [B], and the rate constants to givewhere K. (k_, +k2)/k, and Ki = k_3/1c3 .^Similarly,^combining^theconservation law (2.6) with 2.12(b), (d) gives11[E] = [B1 [C1^(2.14)1+ ^+ -[Cm K:where K'm (k'^k;)/k; and K: = k'3/k; . To complete OUT calculation,substitute (2.13), 2.12(a) and (2.14), 2.12(b), into 2.4(a) and 2.4(d),respectively. The result isd[A]^I-. [A]dt[A]+ K (1+ [13]^(2.15a)Ki)d[C]^[13] dt(2.15b)[13] + K' 11 + ^ 'K'where rm. = k2 Eo, and rr:, = k; E.Finally, applying the two conservation laws (2.5) and (2.6) as well as(2.12) to 2.4(d) gives another conservation law for the substrates, namely[A] + [B]+ [C]=constant.^ (2.16)This completes the derivation of the simplified rate equations. However, theconservation law (2.16) is distinct from the other two conservation laws inthat it is a consequence of the quasi-steady-state approximation.2.3. 
Singular Perturbation TreatmentThe implication of using the quasi-steady-state approximation to deriveexpressions for the enzyme mechanisms can be examined by considering thevalidity of the approximation when it is applied to the simplest enzyme-catalyzed reactionE + S^C k2 >E + P (2.17) 12Even if the quasi-steady-state approximation works for (2.17), there is noguarantee that the approximation will be valid for a more complex mechanism.However, if the quasi-steady-state assumption is a poor approximation for(2.17), then it wll probably be a poor approximation for more complex enzymemechanisms.Mathematically speaking, the quasi-steady-state approximation changesthe character of the mathematical problem from a system of first-orderordinary differential equations to a system of first-order ordinary differentialequations coupled to an algebraic equation. It seems prudent to apply a carefulanalysis to the rate equations in order to determine when this approximationcan be applied.In 1967, Heineken, Tsuchiya, and Aris [2] justified this assumption byusing singular perturbation theory. In the following, we borrow their idea butwe use a different non-dimensionalization scheme. Applying the conservationlaw (2.2a) to (2.1b) and (2.1c) one obtainsd[S] k (E0 — [C])[S]+ k [C],^ (2.18a)dt(2.18b)with the initial conditions [S](0) = So, and [C](0) = 0.We nondimensionize of (2.18) by introducing the followingdimensionless variables[S][C]^E0S=^, C=^6=^ = kiEot.K Eo^K(The dimensionless substrate concentration s, according to Heineken et al. [2],is s=[S]/So. Our transformation, results in only one parameter, a, in thedimensionless equation in contrast to two parameters obtained in theird[C]^(k^k2)1C1+ ki(E, - [C])[S],dt13dimensionless equation. However, their initial condition on s is normalised to1, but our initial condition depends on the parameter so. 
In conclusion, there isstill a total of two parameters in both dimensionless formulations, but for thepurposes of deriving the quasi-steady-state approximation, the dimensionlesstransformation employed in this chapter simpifies the algebra.)In terms of the dimensionless variables, (2.18) becomes—ds = a c – s (1– c),dr6 dc = s – c (1+ s),drwhere^a =^K. , s(0) so, and c(0) = 0. In the language of singularperturbation theory, we assume (2.19) governs the long time behaviour anddefine a short time scale, T, by T=6 T. In terms of T, (2.19) becomesds^ = c(a c – s (1– c)),^ (2.20a)dTdc = s – c (1+ s). (2.20b)A complete perturbation solution can be obtained by the method ofmatched asymptotic expansions. To derive the quasi-steady-stateapproximation, we need only the leading-order perturbation solution of (2.19)which is obtained by setting c=0 in (2.19). Solving the resulting algebraicequation, we obtainc= (2.21)1+sSubstituting c into (2.19a), we getds k,2 s (2.22)k, +k l+sdrdTIf the definitions of s, c ,and T are used in (2.22) and (2.21), then they become14[C] = E0 [S] + [S1,.d[S]^V [S] dt^K+ IS] •(2.23)This system is a good approximation of (2.18) provided e—E0/K,, is small.(The small parameter obtained by Heineken et al. [2] is 6=E0/S0.Consequently, the criterion for the quasi-steady-state appeoximation isE0/S0<<1. However, in the next section, we show that both criteria can beinterpreted as a first-order perturbation expansion of the criterion+ So) «1 if either So or IC, is small.)This perturbation argument is difficult to apply to more complicatedenzyme systems. In the next section, another approach based on the specialcharacter of singularly perturbed systems is employed to determine a conditionfor the validity of the quasi-steady-state approximation.2.4. 
Validity of the Quasi-Steady-State ApproximationAccording to the singular perturbation theory [5] used in the last section, thetwo different time scales in (2.20) and (2.22) correspond to the inner andouter regions, respectively. In this problem the inner region is associated witha fast initial transient period in which the substrate concentration does notundergo any significant change. On the other hand, the outer region isassociated with the steady-state period in which the enzyme-substrate complexconcentration does not undergo any significant change.The basic hypothesis is that for an enzyme-catalyzed system, thesubstrate concentration does not change during the fast initial transient periodand the enzyme complex does not change during the steady-state period [18].We can apply this hypothesis to a more complicated enzyme-catalyzedreaction, for which the perturbation argument is difficult to apply, to15determine the condition for the validity of the quasi-steady-stateapproximation. An example is the enantiomer reaction described in Chapter 5.For now, we apply this hypothesis to derive a condition for the validityof the quasi-steady-state approximation for the simple enzyme-catalyzedreaction (2.17). From the system (2.18), since the substrate concentration, [S},does not change during the fast initial transient period, 0<t<tc, we canestimate the order of of magnitude of tcs by putting [S]=So in (2.18b). Thismakes (2.18b) a linear equation with one unknown, [C]. The solution is givenby[C] = S^0 (1 exp(—X t)),K. +Sowhere X k1(S0 +K.). Consequently, the fast initial transient period, tc, isgiven by tc=X•1, i.e.,1tc k1(S0 +K.)^(2.24)Having obtained te, we can proceed to estimate the decrease in substrateconcentration, AS, during the fast transient period. The criterion from thehypothesis can be formulated mathematically as,«1^ (2.25)where So is the initial substrate concentration.However, if the inequality !IASI. /Sol «1 is satisied, then (2.25) is trueas well. 
But |ΔS|_max ≈ t_c · |dS/dt|_max (|ΔS| is of the order of t_c · |dS/dt|_max). For the class of solutions with d[S]/dt ≤ 0, (2.18) implies |dS/dt|_max ≤ k_1 E_0 S_0. (For the existence of this class of solutions, see Appendix 1.) Combining these equations with (2.25), we obtain the criterion for the quasi-steady-state approximation, namely

E_0/(K_m + S_0) ≪ 1.    (2.26)

The criterion (2.26) indicates that the quasi-steady-state approximation is valid even when E_0/K_m ≪ 1 is not true, provided that S_0 is large enough, or even when E_0/S_0 ≪ 1 is not true, provided that K_m is large enough. Furthermore,

E_0/K_m ≪ 1 implies E_0/(K_m + S_0) ≪ 1.

Thus the criterion E_0/(K_m + S_0) ≪ 1 takes into account both the criterion derived by Heineken et al. [2] and the one derived in the last section.

2.5. Enzyme-Substrate-Inhibitor System

In this final section we derive the approximate equations describing the decrease in enzyme activity due to the presence of an inhibitor and show that this mechanism of enzyme deactivation is not equivalent to the unknown mechanism involved in the simulated enzyme deactivation in Chapter 4.

Roughly speaking, inhibition is the inactivation of an enzyme in a major metabolic pathway by another substance. Inhibition of an enzyme can be either reversible or irreversible. When inhibition is reversible, the activity of the enzyme can be regained by removal of the inhibitor by physical or chemical means.
Irreversible inhibition, however, is characterized by complete loss ofenzyme activity over a period of time even in the presence of lowconcentrations of the inhibitor and the enzyme activity cannot be regained byany physical treatment.Consider a competitive inhibition reaction,E + S^C, k2 E+P k3 E+1^C„d[E] k, [E][S] — k_3[E][I] + k_,^+ k_3[C,] + k2[C1],dt17The rate equation obtained using the law of mass action ared[S] = —ki[E][S]+ k_1[C1],dtd[11 — k,[C,],dtd[C1] = [Nisi _ ki^— k2[C, ],dtd[C,i = k3[E][il— k_3[C2],dtd[I]^k3[E]ri±k3ic2i.dtwith the initial conditions[S[(0) = So, 111(0) = 0, [I](0) = Io,[C,[(0) = 0, [C2](0) = 0, [E](0) = E0.(2.27a)(2.27b)(2.27c)(2.27d)(2.27e)(2.270Combining 2.27(b), (d), (e) and (e), (0, respectively, give the twoconservation laws[E]+ [C1] + [C2] = constant,^ (2.28a)[II+ [C2] = constant. (2.28b)We apply the quasi-steady-state approximation, i.e., set d[C,]/dt = 0 andd[C21/dt = 0, to obtainE0[S] [C,J=K 11+ ^)+[S]K(2.29a)[E] = K m [C,] [S]where K. = (k + k2)/k1^K, = k3/k3. Combining (2.29) and 2.27(c) gives,(2.29b)18d[P]^V[S] dt K1+^ 1+ [S]K(2.30)where V = k, E0. Substituting 2.29(b) into 2.27(a) and using the definition ofKm, we have another conserved quantity[P] + IS]=constant.Using d[C21/dt=0 in the second conserved quantity in (2.28b), we have [I] = Ieso thatd[S]^V[S] dtK. 1+  ° j+[S]K,(2.31)Comparing (2.31) to (2.9), we find that the only difference between these twoequations is the additional multiplication factor in the denominator. SinceKin(1 +Io/K,) > Km, the rate of decrease in the substrate concentration isslower when inhibitor is present.The simulated enzyme deactivation phenomena that is discussed inChapter 4 can be understood in terms of an inhibition mechanism. However,the mathematical equations in these two cases is not consistent and deny theinterpretation that the mechanism of the enzyme deactivation discussed inChapter 4 is competative inhibition. 
Indeed, consider I_0/K_I as a small parameter ε; then (2.31) can be rewritten as

    d[S]/dt = -V[S] / ( K_m(1 + ε) + [S] ) ≈ -Ṽ[S] / ( K_m + [S] ),    (2.32)

where

    Ṽ = V ( 1 - ε K_m / (K_m + [S]) ).

The differential equation for Ṽ is given by

    dṼ/dt = -ε V² K_m [S] / (K_m + [S])³.    (2.33)

Compare this with the differential equation dr_m/dt = -ε r_m for the simulated enzyme deactivation in the cellulose system discussed in Chapter 4. It shows that the first-order system comprising (2.32) and (2.33) is not equivalent to (4.4), cf. Chapter 4, so we conclude that the underlying mechanisms of the enzyme deactivation in these two cases are different.

3. Filtering Theory

In general terms, "filtering" means removing something that we do not want to pass through a barrier, as in, e.g., water purification. When the entities involved are signals, such as an electrical voltage, the barrier becomes a filter in the sense of signal processing.

To determine whether a system is performing properly, and ultimately to control the system's performance, we must know what the system is "doing" at any instant of time. In other words, we must know the state of the system; for example, in navigation, the state consists of the position and velocity of the craft in question. In a bioreactor, the state may be substrate concentrations and enzyme activity. When we build measuring devices and take measurements of a system, the measurements are generally contaminated with noise caused by the electronic and mechanical components of the measuring device. The problem of determining the state of a system from noisy measurements is called estimation, or filtering.

The statistical approaches to filtering postulate that certain statistical properties are possessed by the useful part of the signal and by the unwanted noisy part of the signal. Measurements are the sum of the useful part and the noisy part of the signal, and the task is to eliminate as much of the noise as possible by processing the measurements with a filter.
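As a toy illustration of this statistical viewpoint (a hypothetical example, not from the thesis): when the useful signal is a constant scalar observed through additive gaussian noise, recursively averaging the measurements recovers it. This recursion is in fact the simplest special case of the Kalman filter derived below.

```python
import random

random.seed(1)

true_state = 2.5      # the constant "useful" signal
noise_sd = 0.5        # standard deviation of the measurement noise

estimate = 0.0
for k in range(1, 1001):
    z = true_state + random.gauss(0.0, noise_sd)   # noisy measurement
    estimate += (z - estimate) / k                 # recursive sample mean
```

After 1000 measurements the estimate is within a few hundredths of the true signal, even though each individual measurement has error of order 0.5.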
The earliest statistical ideas, due to Wiener [6] and Kolmogorov [7], concern processes with statistical properties which do not change with time, i.e., stationary processes. This assumption that the underlying useful signal and noise processes are stationary is crucial to the Wiener and Kolmogorov theory. However, a theory that does not require this assumption was developed by Kalman in his celebrated paper in 1960 [8]. The new theory is called the Kalman filter and is an optimal state estimation process applied to a dynamical system that involves random perturbations. More precisely, the Kalman filter consists of a linear, unbiased, and minimum error variance recursive algorithm to optimally estimate the state of a dynamical system from noisy data taken at discrete real-time intervals. It has been widely used in many areas of industrial and governmental application such as laser tracking systems, satellite navigation, ballistic missile trajectory estimation, radar, and fire control.

In this chapter, we are concerned only with Kalman filter theory. This theory will be applied to enzyme systems in the next two chapters. The unifying theme in this chapter is the probabilistic or Bayesian approach. To derive the Kalman filter of a linear dynamical system a Hilbert space approach is possible [9], but it is difficult to generalize this approach to a nonlinear dynamical system.

In the next section, we summarize some results from probability theory that are required to understand and interpret the Kalman filter. Further information on probability theory can be found in [10], [11].

3.1. Probability Preliminaries

One of the fundamental concepts in the Kalman filter is the gaussian process, so we start with it.

1. Gaussian Random Vector

Let X be a random n-vector with expectation denoted by X̄ = E[X] and covariance matrix Σ = E[(X - X̄)(X - X̄)^T].
If Σ is nonsingular, i.e., det(Σ) ≠ 0, then we say that X is gaussian or normal if and only if its probability density function is of the form

    p_X(x) = (2π)^{-n/2} (det Σ)^{-1/2} exp( -½ (x - X̄)^T Σ^{-1} (x - X̄) ).

For a gaussian random vector X with mean X̄ and covariance matrix Σ, we write X ~ N(X̄, Σ). If we denote by φ_X the characteristic function (the Fourier transform of the probability density function) of X, then φ_X is evaluated to be

    φ_X(s) = exp( j s^T X̄ - ½ s^T Σ s ),

where j = √-1.

2. Theorem 1

Let X and Y be random n-vectors with Y = f(X). Suppose f^{-1} exists and both f and f^{-1} are continuously differentiable. Then the probability density functions for X and Y are related by

    p_Y(y) = p_X( f^{-1}(y) ) | det( ∂f^{-1}(y)/∂y ) |,    (3.1)

where det is the determinant. (See [12] for the proof of Theorem 1.)

This result is important in Bayesian probability theory for computing the probability density function conditioned on a set of measurements. It is used in the proof of Theorem 4, which is used extensively in the derivation of the Kalman filter algorithm, and it is used in deriving the exact filter.

3. Jointly Gaussian Distribution

To say that a random n-vector X and a random m-vector Y are jointly gaussian random variables is the same as saying that the random vector Z^T = [X^T, Y^T] is a gaussian random vector. The probability density function and the characteristic function are given by

    p(z) = (2π)^{-(m+n)/2} (det R_Z)^{-1/2} exp( -½ (z - Z̄)^T R_Z^{-1} (z - Z̄) ),    (3.2)
    φ_Z(u) = exp( j u^T Z̄ - ½ u^T R_Z u ),    (3.3)

where z ∈ R^{m+n}, Z̄^T = E[Z^T] = [E[X^T], E[Y^T]] = [X̄^T, Ȳ^T], and

    R_Z = E[(Z - Z̄)(Z - Z̄)^T]
        = [ E[(X - X̄)(X - X̄)^T]   E[(X - X̄)(Y - Ȳ)^T] ]
          [ E[(Y - Ȳ)(X - X̄)^T]   E[(Y - Ȳ)(Y - Ȳ)^T] ]
        = [ R_X    R_XY ]
          [ R_YX   R_Y  ].

4. Theorem 2

If the jointly gaussian distributed random vectors X and Y are uncorrelated, then they are independent. (See [12] for a proof of Theorem 2.) Theorem 2 and Theorem 3 below are used in the proof of Theorem 4.

5. Theorem 3

Consider the p-vector Z ~ N(Z̄, R_Z), and let W = CZ + a, where C is a q×p constant matrix, a is a constant q-vector, and W is a random q-vector.
Then

    W ~ N( C Z̄ + a, C R_Z C^T ).    (3.4)

(See [12] for a proof of Theorem 3.) This theorem essentially means that linear operations on gaussian random vectors produce gaussian random vectors.

6. Theorem 4

Let X and Y be jointly gaussian distributed random vectors of dimension n and m, respectively. Then the conditional density of X given Y = y is gaussian with mean

    X̄ + R_XY R_Y^{-1} (y - Ȳ)    (3.5)

and covariance matrix

    R_X - R_XY R_Y^{-1} R_YX.    (3.6)

This theorem is essential in the derivation of the Kalman filter and so we present its proof below. In the next section, we show how to obtain the Kalman filter algorithm by repeated application of this theorem.

Proof

This proof combines elements found in different sources [12], [13]. Consider the transformation

    W_1 = X - R_XY R_Y^{-1} Y,
    W_2 = Y.    (3.7)

From Theorem 3, W^T = [W_1^T, W_2^T] is gaussian distributed. The covariance matrix of W_1 is

    E[(W_1 - E[W_1])(W_1 - E[W_1])^T] = R_X - R_XY R_Y^{-1} R_YX.

An easy computation shows that W_1 and W_2 are uncorrelated. Hence, they are independent by Theorem 2. As a result, their joint probability density function is the product of their individual densities,

    W_1 ~ N( X̄ - R_XY R_Y^{-1} Ȳ, R_X - R_XY R_Y^{-1} R_YX ),
    W_2 ~ N( Ȳ, R_Y ).

The joint probability density function is given by

    p_{W1,W2}(w_1, w_2)
      = [ (2π)^{n/2} det(R_X - R_XY R_Y^{-1} R_YX)^{1/2} ]^{-1}
        exp( -½ (w_1 - X̄ + R_XY R_Y^{-1} Ȳ)^T (R_X - R_XY R_Y^{-1} R_YX)^{-1} (w_1 - X̄ + R_XY R_Y^{-1} Ȳ) )
      × [ (2π)^{m/2} det(R_Y)^{1/2} ]^{-1} exp( -½ (w_2 - Ȳ)^T R_Y^{-1} (w_2 - Ȳ) ).    (3.8)

The jacobian of the transformation (3.7) is 1, and so according to Theorem 1 we obtain the joint probability density function p_{X,Y}(x, y) by eliminating W_1 and W_2 in (3.8) in favor of X and Y via (3.7). As a result we get

    p_{X,Y}(x, y)
      = [ (2π)^{n/2} det(R_X - R_XY R_Y^{-1} R_YX)^{1/2} ]^{-1}
        exp( -½ (x - X̄ - R_XY R_Y^{-1}(y - Ȳ))^T (R_X - R_XY R_Y^{-1} R_YX)^{-1} (x - X̄ - R_XY R_Y^{-1}(y - Ȳ)) )
      × [ (2π)^{m/2} det(R_Y)^{1/2} ]^{-1} exp( -½ (y - Ȳ)^T R_Y^{-1} (y - Ȳ) ).    (3.9)

The second factor in (3.9) is the probability density function of Y.
The conditional probability density function of x given y is defined by

    p_{X|Y}(x|y) = p_{X,Y}(x, y) / p_Y(y).    (3.10)

Comparing (3.9) and (3.10), the conditional probability density of x given y is

    p_{X|Y}(x|y) = [ (2π)^{n/2} det(R_X - R_XY R_Y^{-1} R_YX)^{1/2} ]^{-1}
        exp( -½ (x - X̄ - R_XY R_Y^{-1}(y - Ȳ))^T (R_X - R_XY R_Y^{-1} R_YX)^{-1} (x - X̄ - R_XY R_Y^{-1}(y - Ȳ)) ).    (3.11)

3.2. Kalman Filter Theory

In this section, the linear Kalman filter algorithm will be derived. Before the derivation, we define the notation used in the filter equations:

X̄ denotes the mean of a random vector X.
Z^{k-1} denotes the set of measurement vectors {Z_1, ..., Z_{k-1}}, where Z_i, for i = 1, 2, ..., k-1, is the q-vector representing the measurements at time t_i.
X_{k/l} denotes the mean of the conditional probability density of X_k conditioned on Z^l; e.g., X_{k/k-1} predicts the state at time t_k from measurements up to time t_{k-1}, whereas X_{k/k} updates the predicted state at time t_k using all measurements up to time t_k.
R_XY denotes the cross-covariance matrix of random vectors X, Y, with R_XY = E[(X - X̄)(Y - Ȳ)^T].
P_{k/l} denotes the covariance matrix of the random vector X_k conditioned on the measurement set Z^l.

3.2.1. System Description

We shall restrict our attention to discrete-time systems or, equivalently, systems where the underlying system equations are difference equations rather than differential equations. In some cases, the difference equations correspond to discretizations of differential equations. The reason for studying a discrete-time system is that, in a real system, observations are made and control strategies are implemented at discrete times.

The dynamics of the state vector X are described by the following system of difference equations:

    X_{k+1} = F_k X_k + ξ_k,  k = 0, 1, 2, 3, ...,    (3.12)

where X_k is a random p-vector, F_k is a p×p matrix independent of X_k, and ξ_k is a random p-vector representing the model system error. The statistical properties of ξ_k are described in the next section.

The measurements of the system at time t_k are described by the following system of algebraic equations:

    Z_k = H_k X_k + η_k,    (3.13)

where H_k is a q×p matrix independent of X_k, Z_k is the vector of quantities measured at time t_k, and η_k is a random q-vector representing the measurement error. The statistical properties of η_k are described in the next section.
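A model of the form (3.12)-(3.13) is straightforward to simulate. The sketch below is a hypothetical scalar instance (F, H, and the noise levels are arbitrary illustrative choices, not from the thesis) that generates a state trajectory together with its noisy measurements:

```python
import random

random.seed(0)

# Illustrative 1-D instance of the model (3.12)-(3.13).
F = 0.95          # state transition "matrix" F_k (scalar here)
H = 1.0           # measurement "matrix" H_k
q_sd = 0.1        # model noise standard deviation (so Q = q_sd**2)
s_sd = 0.5        # measurement noise standard deviation (so S = s_sd**2)

x = 2.0           # initial state X_0
states, measurements = [], []
for k in range(50):
    x = F * x + random.gauss(0.0, q_sd)     # dynamics: X_{k+1} = F X_k + xi_k
    z = H * x + random.gauss(0.0, s_sd)     # measurement: Z_k = H X_k + eta_k
    states.append(x)
    measurements.append(z)
```

The lists `states` and `measurements` play the roles of the true trajectory and the data Z^k available to the filter; only the latter would be known in practice.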
The statisticalproperties of k are described in the next section.The measurements of the system at time tk are described by thefollowing system of algebraic equationsZ, = H, X, +^ (3.13)where Hk is a qxp matrix independent of Xk, Zk is the vector of quantitiesmeasured at time tk, and flk is a random q-vector representing the modelsystem error. The statistical properties of 1k are described in the next section.3.2.2. Noise DescriptionTo derive the Kalman filter (KF), we have to make assumptions about thenoise processes k and Th.27Assumptions on the noise processesThe noise processes {k} and {Ilk} are white noise, i.e., E.k and ilk are gaussianvariables with zero mean andEN,A1= ö ^= ö ^E[rik0 =^0^(3.14)where Sk and Qk are qxq and pxp matrices, respectively, and Su is theKronecker delta.In most of the literature, the covariance matrix, Qk, is called the modelerror covariance matrix and the covariance matrix, Sk, is called themeasurement error covariance matrix, respectively.3.2.3. Derivation of Kalman Filter AlgorithmIn this section, we derive the KF algorithm using Theorem 4 which is based onBayesian probability theory. This derivation is quite long and we will break itup to into several lemmas. Because of the recursive nature of the resultingformulas, i.e., the index appears only in the subscripts, it suffices to prove thelemmas for k=0 and 1. Each lemma summarizes an important result.In breaking up the derivation of the KF, our objective is to use theknown measurements of a system to both update its present state and predictits future state. In Lemma 1, we develop formulas to update the mean andcovariance of the system state at time to based on measurements up to time to.The distinction between the formulas in Lemma 1 and the updating formulas inLemma 5 is that the formulas in Lemma 1 require us to supply the initial valuecovariance matrix. Lemmas 2, 3„4, and 6 give results predicting the state attime tk when the measurements up to time tk_1 is given.3.23.1. 
Lemma 1

The mean X_{0/0} and the covariance matrix P_{0/0} of the conditional probability density function of X_0 conditioned on Z_0 are, respectively,

    X_{0/0} = X̄_0 + R_{X0 X0} H_0^T ( H_0 R_{X0 X0} H_0^T + S_0 )^{-1} ( Z_0 - H_0 X̄_0 ),    (3.15)
    P_{0/0} = R_{X0 X0} - R_{X0 X0} H_0^T ( H_0 R_{X0 X0} H_0^T + S_0 )^{-1} H_0 R_{X0 X0},    (3.16)

where X̄_0 is the mean of the probability density of X_0.

Proof

Applying Theorem 4 from Section 3.1, the conditional mean X_{0/0} and the conditional covariance matrix are given symbolically as

    X_{0/0} = X̄_0 + R_{X0 Z0} R_{Z0 Z0}^{-1} ( Z_0 - Z̄_0 ),    (3.17)
    P_{0/0} = R_{X0 X0} - R_{X0 Z0} R_{Z0 Z0}^{-1} R_{Z0 X0}.    (3.18)

We need to evaluate R_{X0 Z0} and R_{Z0 Z0} from the dynamical and measurement model equations (3.12) and (3.13).

Firstly, let us evaluate R_{X0 Z0} from its definition,

    R_{X0 Z0} = E[ (X_0 - X̄_0)(Z_0 - Z̄_0)^T ].

Applying the measurement model equation we get

    R_{X0 Z0} = E[ (X_0 - X̄_0)( H_0 (X_0 - X̄_0) + η_0 )^T ].

Since X_0 and η_0 are independent, R_{X0 Z0} can be simplified to

    R_{X0 Z0} = R_{X0 X0} H_0^T.    (3.19)

Similarly, applying the definition of R_{Z0 Z0}, the measurement model equation, and E[η_0 η_0^T] = S_0, we obtain

    R_{Z0 Z0} = H_0 R_{X0 X0} H_0^T + S_0.    (3.20)

Finally, R_{Z0 X0} is the transpose of R_{X0 Z0},

    R_{Z0 X0} = H_0 R_{X0 X0}.    (3.21)

Substitution of (3.19), (3.20), and (3.21) into (3.17) and (3.18) completes the proof.

Lemma 2

The mean of the probability density function of X_1 conditioned on Z_0 is given by

    X_{1/0} = F_0 X_{0/0}.    (3.22)

Proof

Applying Theorem 4, the mean of the probability density function of X_1 conditioned on Z_0 is given by

    X_{1/0} = X̄_1 + R_{X1 Z0} R_{Z0 Z0}^{-1} ( Z_0 - Z̄_0 ).    (3.23)

Firstly, X̄_1 is the mean of X_1 and can be shown, from the dynamical model equation, to equal F_0 X̄_0. Secondly, we apply the definition of R_{X1 Z0} to the dynamical and measurement model equations. Calculations similar to those in the proof of Lemma 1 show that

    R_{X1 Z0} = F_0 R_{X0 X0} H_0^T.    (3.24)

Substituting (3.24) and X̄_1 = F_0 X̄_0 into (3.23) and using Lemma 1 gives the desired result.
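The update formula of Lemma 1 and the prediction formula of Lemma 2 form one cycle of the recursion assembled in Section 3.2.4. As a hypothetical numerical illustration (a scalar model with arbitrary parameter values, not from the thesis), the full predict/update cycle looks as follows:

```python
import random

random.seed(2)

# Scalar Kalman filter: one predict/update cycle per measurement.
F, H, Q, S = 1.0, 1.0, 0.0001, 0.25   # dynamics, measurement, noise variances

x_true = 1.0
x_est, P = 0.0, 1.0                   # initial state guess and its covariance

for k in range(200):
    x_true = F * x_true + random.gauss(0.0, Q ** 0.5)   # true system (3.12)
    z = H * x_true + random.gauss(0.0, S ** 0.5)        # measurement (3.13)

    # prediction, cf. Lemmas 2 and 3
    x_pred = F * x_est
    P_pred = F * P * F + Q

    # gain and update, cf. Lemmas 1 and 5
    K = P_pred * H / (H * P_pred * H + S)
    x_est = x_pred + K * (z - H * x_pred)
    P = P_pred - K * H * P_pred
```

After a transient, the estimate tracks the slowly wandering state far more accurately than any single measurement, and the covariance P settles to a small steady value.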
Lemma 3

The covariance matrix of the probability density function of X_1 conditioned on Z_0 is given by

    P_{1/0} = F_0 P_{0/0} F_0^T + Q_0.    (3.25)

Proof

Applying Theorem 4, the covariance matrix of the probability density function of X_1 conditioned on Z_0 is given by

    P_{1/0} = R_{X1 X1} - R_{X1 Z0} R_{Z0 Z0}^{-1} R_{Z0 X1}.    (3.26)

To get an expression for P_{1/0}, we have to calculate R_{X1 X1}, since the other terms in (3.26) have been calculated in Lemma 2. From the definition of R_{X1 X1} and the dynamical model equation,

    R_{X1 X1} = E[ (F_0 X_0 + ξ_0 - F_0 X̄_0)(F_0 X_0 + ξ_0 - F_0 X̄_0)^T ].

Rearrangement of terms and the independence of X_0 and ξ_0 imply

    R_{X1 X1} = F_0 R_{X0 X0} F_0^T + Q_0,

where Q_0 = E[ξ_0 ξ_0^T]. Substitution of R_{X1 X1}, R_{X1 Z0}, and R_{Z0 Z0}^{-1} into (3.26) and application of Lemma 1 give the desired result.

Lemma 4

Let X_1, Z_1, Z_0 be jointly gaussian. Then the probability density function of the random vector (X_1, Z_1)^T conditioned on Z_0 is gaussian with mean (X_{1/0}, Z_{1/0})^T and covariance matrix

    [ P_{1/0}        P_{1/0} H_1^T           ]
    [ H_1 P_{1/0}    H_1 P_{1/0} H_1^T + S_1 ],    (3.27)

where S_1 = E[η_1 η_1^T].

Proof

From Theorem 4, the mean of the probability density function of Z_1 conditioned on Z_0 is given by

    Z_{1/0} = Z̄_1 + R_{Z1 Z0} R_{Z0 Z0}^{-1} ( Z_0 - Z̄_0 ).

Also, considering (X_1, Z_1)^T as a random vector, application of Theorem 4 shows that the mean of the probability density function of (X_1, Z_1)^T conditioned on Z_0 is given by

    E[ (X_1, Z_1)^T | Z_0 ] = ( X̄_1, Z̄_1 )^T + R_{(X1,Z1) Z0} R_{Z0 Z0}^{-1} ( Z_0 - Z̄_0 ).    (3.28)
the covariancezi(Xmatrix of the probability density function of^1 conditioned on Zo by a,Z,32fRx zoSimilar to the argument for deriving Rri) =^,zoz,(R^Rx,z1)xix,R (XzitXzi) = Rzixi Rzlzi)Substitution of the above expression into a and after simplification, a isshown to be[R^ —R R-1 R )x,x,^Rx 120^Z020^ZoX 1R.-1^R^R xiz,^x,zo^zozo zoz,a —From^Lemma^3,^the^first^diagonal^element^of^a,^isR11 Rx1z0 Rzo Rzox,xxFrom Theorem 4 the covariance matrix of the probability densityfunction of Z1 conditioned on Zo, 0„ is given by0„0 — Rz1z1 Rz1z0 Rziozo Rz0z1.Consequently a can be rewritten asR —R R-1 Rx,z0^zozo^zoz,z1z0 zozo 7ox1 R71z1 —R z1z0 R-;R — R R go Rzoz,R^12-1 Rz1z0^zozo^zoxi•01/0Application of the measurement model equation and from the definition of thecovariance matrix, 01,0 can be shown to equal to111 13;f0 HIT ±SI•Calculation similar to the proof in Lemma 3, the off-diagonal elementRxizi — Rxizo R-ziozo Rzozi is equal to P110 H,T.Finally the other off-diagonal element Rzlxi — Rz170 Rzo Rz11x1 is just thetranspose of Rx1z1 — Rx1z0 R-zlozo Rzoz, and hence equal toRzix, — Rzizo 12;104 R zox = 111 11/0 .Combining all these results a can be rewritten to be33PI/0[^PI/0 HIT I .a^111 1) ^H, P110 HI T S 1 jAnd this is the desired result. Lemma 5Let X1, Z1, Zo be jointly gaussian, then the probability density function of Xiconditioned on (Z 1, Zo) is gaussian. The mean and the covariance matrix ofthis conditional probability density function are, respectively,= X„o +Puo H1 (H1 P 111 T^) 1( Z1 H, X10),^ (3.31)PI/1^PI/0^131/0 H1T (HI P110 111T + SI ) 1 H1 P IN. (3.32)ProofThe probability density function of X1 conditioned on (Z1, Zo) according toBayes' Law is given by,f(X,IZ„Zo)= ,f(X^\Z1Z°fkZilZ0)The probability density function f(X1,Z1IZ0) is gaussian and the mean andcovariance matrix are given by Lemma 4. Also from probability theory, we canassociate to this probability density function f(X„Zi Z0) a random vector.Notationally, we write the random vector as (X1lZ0,Z1lZ0). 
Similar to the argument for deriving R_{(X1,Z1) Z0},

    R_{(X1,Z1)(X1,Z1)} = [ R_{X1 X1}   R_{X1 Z1} ]
                         [ R_{Z1 X1}   R_{Z1 Z1} ].

Substituting the above expression into σ and simplifying, σ is shown to be

    σ = [ R_{X1 X1} - R_{X1 Z0} R_{Z0 Z0}^{-1} R_{Z0 X1}    R_{X1 Z1} - R_{X1 Z0} R_{Z0 Z0}^{-1} R_{Z0 Z1} ]
        [ R_{Z1 X1} - R_{Z1 Z0} R_{Z0 Z0}^{-1} R_{Z0 X1}    R_{Z1 Z1} - R_{Z1 Z0} R_{Z0 Z0}^{-1} R_{Z0 Z1} ].

From Lemma 3, the first diagonal element of σ, R_{X1 X1} - R_{X1 Z0} R_{Z0 Z0}^{-1} R_{Z0 X1}, is P_{1/0}. From Theorem 4, the covariance matrix of the probability density function of Z_1 conditioned on Z_0, θ_{1/0}, is given by

    θ_{1/0} = R_{Z1 Z1} - R_{Z1 Z0} R_{Z0 Z0}^{-1} R_{Z0 Z1}.

Application of the measurement model equation and the definition of the covariance matrix shows that θ_{1/0} equals

    H_1 P_{1/0} H_1^T + S_1.

By a calculation similar to the proof of Lemma 3, the off-diagonal element R_{X1 Z1} - R_{X1 Z0} R_{Z0 Z0}^{-1} R_{Z0 Z1} is equal to P_{1/0} H_1^T. Finally, the other off-diagonal element R_{Z1 X1} - R_{Z1 Z0} R_{Z0 Z0}^{-1} R_{Z0 X1} is just its transpose and hence equals H_1 P_{1/0}. Combining all these results, σ can be rewritten as

    σ = [ P_{1/0}        P_{1/0} H_1^T           ]
        [ H_1 P_{1/0}    H_1 P_{1/0} H_1^T + S_1 ],

and this is the desired result.

Lemma 5

Let X_1, Z_1, Z_0 be jointly gaussian. Then the probability density function of X_1 conditioned on (Z_1, Z_0) is gaussian. The mean and the covariance matrix of this conditional probability density function are, respectively,

    X_{1/1} = X_{1/0} + P_{1/0} H_1^T ( H_1 P_{1/0} H_1^T + S_1 )^{-1} ( Z_1 - H_1 X_{1/0} ),    (3.31)
    P_{1/1} = P_{1/0} - P_{1/0} H_1^T ( H_1 P_{1/0} H_1^T + S_1 )^{-1} H_1 P_{1/0}.    (3.32)

Proof

The probability density function of X_1 conditioned on (Z_1, Z_0) is, according to Bayes' law, given by

    f(X_1 | Z_1, Z_0) = f(X_1, Z_1 | Z_0) / f(Z_1 | Z_0).

The probability density function f(X_1, Z_1 | Z_0) is gaussian, and its mean and covariance matrix are given by Lemma 4. Also, from probability theory, we can associate to this probability density function a random vector; notationally, we write it as (X_1|Z_0, Z_1|Z_0). Then, in terms of (X_1|Z_0, Z_1|Z_0), the probability density function f(X_1, Z_1 | Z_0) can be rewritten as f(X_1|Z_0, Z_1|Z_0). Thus,

    f(X_1 | Z_1, Z_0) = f(X_1|Z_0, Z_1|Z_0) / f(Z_1|Z_0).

Hence, we can apply Theorem 4 to get the mean and covariance of f(X_1 | Z_1, Z_0) from the mean and covariance matrix of f(X_1|Z_0, Z_1|Z_0).

Firstly, from Theorem 4, the mean of the conditional probability density function f(X_1 | Z_1, Z_0) is given as

    X_{1/1} = X_{1/0} + P_{1/0} H_1^T ( H_1 P_{1/0} H_1^T + S_1 )^{-1} ( z_1|z_0 - Z_{1/0} ),

where z_1|z_0 is the measured value of the random vector Z_1, so we write z_1|z_0 as Z_1. On the other hand, Z_{1/0} is the mean of the probability density function of Z_1 conditioned on Z_0. By Theorem 4,

    Z_{1/0} = Z̄_1 + R_{Z1 Z0} R_{Z0 Z0}^{-1} ( Z_0 - Z̄_0 ).

If we apply the dynamical and measurement models to evaluate R_{Z1 Z0}, Z_{1/0} can be rewritten as

    Z_{1/0} = H_1 { X̄_1 + ( F_0 R_{X0 X0} H_0^T ) R_{Z0 Z0}^{-1} ( Z_0 - Z̄_0 ) }.

From Lemmas 2 and 3, the expression inside the curly braces is just X_{1/0}. As a result,

    Z_{1/0} = H_1 X_{1/0}.

Consequently, this result implies

    X_{1/1} = X_{1/0} + P_{1/0} H_1^T ( H_1 P_{1/0} H_1^T + S_1 )^{-1} ( Z_1 - H_1 X_{1/0} ).

Secondly, the conditional probability density function f(X_1|Z_0, Z_1|Z_0) has covariance matrix given by Lemma 4 as

    [ P_{1/0}        P_{1/0} H_1^T           ]   =   [ R_{X1|Z0, X1|Z0}   R_{X1|Z0, Z1|Z0} ]
    [ H_1 P_{1/0}    H_1 P_{1/0} H_1^T + S_1 ]       [ R_{Z1|Z0, X1|Z0}   R_{Z1|Z0, Z1|Z0} ].    (3.33)

On the other hand, the covariance matrix P_{1/1} of the conditional probability density function f(X_1 | Z_0, Z_1) = f( (X_1|Z_0) | (Z_1|Z_0) ) is given by Theorem 4 as

    P_{1/1} = R_{X1|Z0, X1|Z0} - R_{X1|Z0, Z1|Z0} R_{Z1|Z0, Z1|Z0}^{-1} R_{Z1|Z0, X1|Z0}.    (3.34)

Hence, substitution of (3.33) into (3.34) gives the desired result.

Lemma 6

The probability density function of X_2 conditioned on (Z_1, Z_0) has mean

    X_{2/1} = F_1 X_{1/1}    (3.35)

and covariance matrix

    P_{2/1} = F_1 P_{1/1} F_1^T + Q_1.    (3.36)

Proof

Firstly, let us consider (X_2, Z_1) conditioned on Z_0.
From Theorem 4, the probability density function of (X_2, Z_1) conditioned on Z_0 has mean (X_{2/0}, Z_{1/0}) and covariance matrix σ, where

    σ = R_{(X2,Z1)(X2,Z1)} - R_{(X2,Z1) Z0} R_{Z0 Z0}^{-1} R_{Z0 (X2,Z1)}.

From arguments similar to those in Lemma 4, σ can be rewritten as

    σ = [ R_{X2 X2} - R_{X2 Z0} R_{Z0 Z0}^{-1} R_{Z0 X2}    R_{X2 Z1} - R_{X2 Z0} R_{Z0 Z0}^{-1} R_{Z0 Z1} ]
        [ R_{Z1 X2} - R_{Z1 Z0} R_{Z0 Z0}^{-1} R_{Z0 X2}    R_{Z1 Z1} - R_{Z1 Z0} R_{Z0 Z0}^{-1} R_{Z0 Z1} ]
      = [ θ_11   θ_12 ]
        [ θ_21   θ_22 ].    (3.37)

From the argument in the proof of Lemma 5, we obtain immediately a simplified expression for θ_22,

    θ_22 = H_1 P_{1/0} H_1^T + S_1.

Applying Theorem 4 again with (3.37), we obtain the mean, X_{2/1}, and covariance matrix, P_{2/1}, of X_2 conditioned on (Z_1, Z_0). The mean and the covariance matrix can be written symbolically as

    X_{2/1} = X_{2/0} + θ_12 θ_22^{-1} ( Z_1 - E[Z_1 | Z_0] ),
    P_{2/1} = θ_11 - θ_12 θ_22^{-1} θ_21,

where E[Z_1 | Z_0] = H_1 X_{1/0}. So the proof is reduced to calculating θ_11, θ_21, θ_12 (= θ_21^T), which in turn is further reduced to calculating R_{X2 Z1}, R_{Z0 Z1}, and R_{X2 X2}.

Application of the dynamical and measurement model equations together with Lemmas 1 and 3 shows that

    θ_21 = H_1 P_{1/0} F_1^T,
    θ_11 = F_1 P_{1/0} F_1^T + Q_1,  where Q_1 = E[ξ_1 ξ_1^T].

Substitution of θ_11, θ_21, θ_12 (= θ_21^T) into P_{2/1} = θ_11 - θ_12 θ_22^{-1} θ_21 together with Lemma 5 results in

    P_{2/1} = F_1 P_{1/1} F_1^T + Q_1.

Finally, substitution of θ_11, θ_21, θ_12 (= θ_21^T) into

    X_{2/1} = X_{2/0} + θ_12 θ_22^{-1} ( Z_1 - H_1 X_{1/0} )

and application of Lemma 5 result in

    X_{2/1} = F_1 X_{1/1},

since X_{2/0} can be shown (from Lemma 2) to be X_{2/0} = F_1 X_{1/0}. This completes the proof.

We can see that (3.35) in Lemma 6 is the same as (3.22) in Lemma 2 except that each index is increased by 1. The same correspondence holds between Lemmas 1 and 5.

3.2.4. Kalman Filter Algorithm

Generalizing the indices in the formulas of Lemmas 1-6, the following KF is proposed for the dynamical and measurement model equations given by

    Dynamical model:    X_{k+1} = F_k X_k + ξ_k,  where E[ξ_k ξ_l^T] = δ_kl Q_k,    (3.38)
    Measurement model:  Z_k = H_k X_k + η_k,  where E[η_k η_l^T] = δ_kl S_k.    (3.39)

The KF algorithm consists of several different pieces. To predict the state and covariance matrix of the system at time t_k using measurements up to time t_{k-1}, we have the results

(1) state prediction,

    X_{k/k-1} = F_k X_{k-1/k-1},    (3.40)
(2) propagation of the covariance matrix,

    P_{k/k-1} = F_k P_{k-1/k-1} F_k^T + Q_k.    (3.41)

To update the state and covariance matrix of the system at time t_k using measurements up to time t_k, we compute the gain matrix,

(3) gain matrix,

    K_k = P_{k/k-1} H_k^T [ H_k P_{k/k-1} H_k^T + S_k ]^{-1},    (3.42)

and insert it into the state update and covariance matrix update given by

(4) state update,

    X_{k/k} = X_{k/k-1} + K_k ( Z_k - H_k X_{k/k-1} ),    (3.43)

(5) estimated error covariance matrix update,

    P_{k/k} = P_{k/k-1} - K_k H_k P_{k/k-1}.    (3.44)

The KF algorithm is applied in Chapters 4 and 5 to simulated enzyme deactivation and separation of enantiomers.

3.2.5. Alternative Method of Derivation of the Kalman Filter [14]

The Bayesian approach used to derive the KF algorithm in Section 3.2.4 has the advantage that it can be easily generalized to nonlinear filtering problems. However, the KF has the property that the covariance matrix of the error is minimized, and this property is not apparent using the Bayesian approach. Therefore, we give an alternative proof in which this property is more obvious.

The proof of the KF algorithm given by (3.38)-(3.44) requires the concept of innovation, so we start with the definition of an innovation process.

Innovation Process

The innovation υ_n is defined by

    υ_n = Z_n - H_n X_{n/n-1},

where Z_n, H_n, X_{n/n-1} are defined in Section 3.2.4.

Lemma 7

The process υ_n is a gaussian process such that

    E[υ_n] = 0,
    E[υ_n υ_m^T] = δ_nm ( H_n P_{n/n-1} H_n^T + S_n ),

where S_n, H_n, P_{n/n-1} are defined in Section 3.2.4.

Proof

From the measurement model equation,

    υ_n = H_n ( X_n - X_{n/n-1} ) + η_n.

Therefore,

    E[υ_n | Z^{n-1}] = E[ H_n (X_n - X_{n/n-1}) | Z^{n-1} ] + E[ η_n | Z^{n-1} ] = 0.    (3.45)

Also, from Theorem 3, υ_n is gaussian since it is a linear combination of gaussian variables. From the conditional expectation in probability theory, we have

    E[A] = E[ E[A|B] ].    (3.46)

Combining (3.45) and (3.46) results in

    E[υ_n] = 0.

Moreover,

    E[υ_n υ_n^T] = E[ ( H_n (X_n - X_{n/n-1}) + η_n )( H_n (X_n - X_{n/n-1}) + η_n )^T ].

Since η_n is independent of X_n, we deduce immediately
    E[υ_n υ_n^T] = H_n P_{n/n-1} H_n^T + S_n.

Furthermore, for k = 1, 2, ..., n-1,

    E[ υ_n Z_k^T | Z^{n-1} ] = E[ υ_n | Z^{n-1} ] Z_k^T = 0.

From (3.46), we have

    E[ υ_n Z_k^T ] = 0.

But then

    E[ υ_n υ_k^T ] = E[ υ_n ( Z_k - H_k X_{k/k-1} )^T ] = 0.

Therefore, we conclude

    E[ υ_n υ_k^T ] = 0, for k = 1, ..., n-1,

which completes the proof.

Theorem 5

For the dynamical and measurement model equations (3.38) and (3.39), the formulas (3.40)-(3.44) hold.

Proof

The results (3.40) and (3.41) follow from the application of the definition of X_{k/k} and the dynamical equation; i.e., for (3.40) we have

    X_{k+1/k} = E[ X_{k+1} | Z_1, ..., Z_k ] = E[ F_k X_k + ξ_k | Z_1, ..., Z_k ] = F_k X_{k/k}.

A similar argument yields (3.41). Moreover, using the innovation, we can assert that

    X_{k/k} = E[ X_k | Z_1, Z_2, ..., Z_{k-1}, υ_k ].

Using the fact that X_{k/k} is also the best linear estimate of X_k given Z_1, Z_2, ..., Z_{k-1}, υ_k, we have the formula

    X_{k/k} = X_{k/k-1} + K_k υ_k,

which corresponds to (3.43), where K_k is a gain matrix to be determined. It remains to fix K_k, knowing that it minimizes the covariance of the error.

The error of the estimated state can be written as

    ε_k = X_k - X_{k/k} = ( X_k - X_{k/k-1} ) - K_k υ_k.

Hence, the covariance matrix of ε_k, E[ε_k ε_k^T], is given by

    P_{k/k} = P_{k/k-1} + K_k ( H_k P_{k/k-1} H_k^T + S_k ) K_k^T
              - K_k E[ υ_k (X_k - X_{k/k-1})^T ] - E[ (X_k - X_{k/k-1}) υ_k^T ] K_k^T.

However,

    E[ υ_k (X_k - X_{k/k-1})^T ] = E[ ( Z_k - H_k X_{k/k-1} )( X_k - X_{k/k-1} )^T ]
                                 = E[ H_k ( X_k - X_{k/k-1} )( X_k - X_{k/k-1} )^T ]
                                 = H_k P_{k/k-1}.

Therefore, we obtain by completing the square

    P_{k/k} = P_{k/k-1} - P_{k/k-1} H_k^T ( H_k P_{k/k-1} H_k^T + S_k )^{-1} H_k P_{k/k-1}
              + [ K_k - P_{k/k-1} H_k^T ( H_k P_{k/k-1} H_k^T + S_k )^{-1} ] ( H_k P_{k/k-1} H_k^T + S_k )
                [ K_k - P_{k/k-1} H_k^T ( H_k P_{k/k-1} H_k^T + S_k )^{-1} ]^T.    (3.47)

It follows immediately that the best value of K_k is

    K_k = P_{k/k-1} H_k^T [ H_k P_{k/k-1} H_k^T + S_k ]^{-1},

which is (3.42), and then (3.47) reduces to (3.44). Thus the proof is complete.

3.3. Extended Kalman Filter Theory

The extended Kalman filter (EKF) is a generalized version of the Kalman filter for the case when the dynamical or measurement model equations are nonlinear.
The dynamical and measurement model equations are

    X_{k+1} = f_k(X_k) + ξ_k,    (3.48)
    Z_k = h_k(X_k) + η_k,    (3.49)

where X_k is a random n-vector, Z_k is a random m-vector, ξ_k and η_k are sequences of white noise processes of dimension n and m, respectively, f_k(X_k) is a nonlinear n-vector function, and h_k(X_k) is a nonlinear m-vector function.

It is possible to derive the EKF in a way similar to the derivation of Lemmas 1 to 6 by Taylor expansion of f_k(X_k) and h_k(X_k) about X_{k/k} and neglecting the moments of order higher than two. However, the simplest way to derive the EKF is to linearize the dynamical model equations about X_{k/k} and the measurement model equations about X_{k/k-1}, and apply the Kalman filter algorithm to the linearized equations.

The nonlinear functions f_k(X_k) and h_k(X_k) can be expanded in Taylor series about the conditional means X_{k/k} and X_{k/k-1}, respectively, as

    f_k(X_k) = f_k(X_{k/k}) + F_k ( X_k - X_{k/k} ) + ...,    (3.50)
    h_k(X_k) = h_k(X_{k/k-1}) + H_k ( X_k - X_{k/k-1} ) + ...,    (3.51)

where F_k = ∂f_k(X_{k/k})/∂x and H_k = ∂h_k(X_{k/k-1})/∂x are n×n and m×n matrices, respectively.

Neglecting higher-order terms and assuming we know X_{k/k} and X_{k/k-1} enables us to approximate the dynamical and measurement models as

    X_{k+1} ≈ F_k X_k + u_k + ξ_k,    (3.52)
    Z_k ≈ H_k X_k + v_k + η_k,    (3.53)

where

    u_k = f_k(X_{k/k}) - F_k X_{k/k},    (3.54)
    v_k = h_k(X_{k/k-1}) - H_k X_{k/k-1}.    (3.55)

Now if we apply the KF to the linearized dynamical and measurement model equations (3.52) and (3.53), the EKF is obtained. The extended Kalman filter algorithm applied to (3.48) and (3.49) is identical to (3.40)-(3.44) except that F_k = ∂f_k(X_{k/k})/∂x and H_k = ∂h_k(X_{k/k-1})/∂x.

3.4. More About Error Covariance Matrices

To apply the EKF or the KF algorithms derived in the previous sections, we have to specify the model error covariance matrix, Q_k, the initial state error covariance matrix, P_0, and the measurement error covariance matrix, S_k. There are no systematic methods to determine these covariance matrices. In most of the literature, the values of these covariance matrices are determined by trial-and-error based on the experience of the user.
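The linearization just described can be written out for a hypothetical scalar model. In the sketch below, the functions f and h, their derivatives, and all numerical values are illustrative choices, not from the thesis; the recursion itself is (3.40)-(3.44) with the Jacobians substituted for F_k and H_k.

```python
import math
import random

random.seed(3)

# Illustrative scalar instance of the nonlinear model (3.48)-(3.49).
def f(x):  return x + 0.1 * math.sin(x)     # nonlinear dynamics f_k
def h(x):  return x ** 2                    # nonlinear measurement h_k
def df(x): return 1.0 + 0.1 * math.cos(x)   # Jacobian F_k evaluated at X_{k/k}
def dh(x): return 2.0 * x                   # Jacobian H_k evaluated at X_{k/k-1}

Q, S = 1e-4, 0.04                 # model and measurement noise variances
x_true, x_est, P = 2.0, 1.5, 1.0  # true state, estimate, error covariance

for k in range(100):
    x_true = f(x_true) + random.gauss(0.0, Q ** 0.5)
    z = h(x_true) + random.gauss(0.0, S ** 0.5)

    # EKF prediction: nonlinear mean propagation, linearized covariance
    F = df(x_est)
    x_pred = f(x_est)
    P_pred = F * P * F + Q

    # EKF update: gain built from the measurement Jacobian at x_pred
    H = dh(x_pred)
    K = P_pred * H / (H * P_pred * H + S)
    x_est = x_pred + K * (z - h(x_pred))
    P = P_pred - K * H * P_pred
```

Note that the state itself is propagated through the full nonlinear f and compared against the full nonlinear h; only the covariance and gain computations use the local linearization.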
However, there are rules of thumb that help us decide the ranges of these covariance matrices.

Let us start with the model error covariance matrix, Q_k. Since the diagonal elements of the model error covariance matrix are the squares of the standard deviations of the noises, the square roots of the diagonal elements of the model error covariance matrix should be large enough to account for the error between the exact model equations and the system model equations. Similarly, the square roots of the diagonal elements of the initial state error covariance matrix should be large enough to cover the difference between the exact initial values of the state and the initial values of the state specified for the EKF or the KF. Finally, the values of the measurement error covariance matrix depend on the instruments that we use to perform the measurements, and good values of the measurement error covariance matrix can be found in the manuals for the instruments.

3.5. Exact Filter

Since a nonlinear transformation can transform a gaussian variable into a nongaussian variable, the fundamental assumption of the KF theory can be violated. Some other techniques have been proposed to improve the performance of the EKF, for example, the gaussian sum technique [15], [16]. However, if a dynamical system is simple enough, then the conditional probability density function can be computed directly. In this section, a recursion procedure is developed to compute the conditional probability density function. The algorithm is composed of two parts:

1. Evolution of the conditional probability density function.
2. Update of the conditional probability density function from the measured data.

Consider the dynamical and measurement model equations of a system given by (3.48) and (3.49).

Lemma 8 (Evolution of the conditional probability density function)

Let p_k(x_k | Z^k) denote the probability density function of x_k conditioned on the measurements Z^k.
If f_k^{-1} exists, then the probability density function of x_{k+1} conditioned on the measurements Z^k is given by

    p_{k+1}( x_{k+1} | Z^k ) = p_k( f_k^{-1}(x_{k+1}) | Z^k ) | det( ∂f_k^{-1}(x_{k+1}) / ∂x_{k+1} ) |.

Proof

This result is a consequence of Theorem 1.

Lemma 9 (Update of the conditional probability density function from measured data)

Let p_k(x_k | Z^{k-1}) denote the probability density function of x_k conditioned on the measurements Z^{k-1}. Then

    p_k( x_k | Z^k ) = p_{Z_k}( z_k | x_k ) p_k( x_k | Z^{k-1} ) / ∫ p_{Z_k}( z_k | φ ) p_k( φ | Z^{k-1} ) dφ.

Proof

From the definition of Z^k = {Z_1, Z_2, ..., Z_k},

    p_k( x_k | Z^k ) = p_k( x_k | Z_k, Z^{k-1} ).

From Bayes' rule,

    p_k( x_k | Z^k ) = p_{Z_k}( z_k | x_k, Z^{k-1} ) p_k( x_k | Z^{k-1} ) / p_{Z_k}( z_k | Z^{k-1} ).

Since the measurement noise is white noise, i.e., E[η_i η_j^T] = 0 for i ≠ j,

    p_{Z_k}( z_k | x_k, Z^{k-1} ) = p_{Z_k}( z_k | x_k ).

Furthermore,

    p_{Z_k}( z_k | Z^{k-1} ) = ∫ p_{Z_k}( z_k | φ ) p_k( φ | Z^{k-1} ) dφ.

Combining the above results, we get

    p_k( x_k | Z^k ) = p_{Z_k}( z_k | x_k ) p_k( x_k | Z^{k-1} ) / ∫ p_{Z_k}( z_k | φ ) p_k( φ | Z^{k-1} ) dφ.  (QED)

In the following, we show how to calculate p_{Z_k}(z_k | x_k) from Theorem 1 and the probability density function of η_k. Since Z_k = h_k(x_k) + η_k, then by Theorem 1

    p_{Z_k}( z_k | x_k ) = p_{η_k}( z_k - h_k(x_k) ).

Since η_k is gaussian, η_k ~ N(0, S_k),

    p_{Z_k}( z_k | x_k ) = ( (2π)^{m/2} det(S_k)^{1/2} )^{-1} exp( -½ [z_k - h_k(x_k)]^T S_k^{-1} [z_k - h_k(x_k)] ).

In particular, in a one-dimensional problem with η_k ~ N(0, σ²), where σ is the standard deviation,

    p_{Z_k}( z_k | x_k ) = (2πσ²)^{-1/2} exp( -(z_k - h_k(x_k))² / (2σ²) ).

The exact filter algorithm will be applied to the enantiomer separation problem in Chapter 5. We conclude this section with the remark that it is not feasible to compute the exact mean and variance except for one- and two-dimensional problems.

4. Enzyme Deactivation

In a realistic experiment or bioreactor, the activity of the enzyme may decrease as time progresses, so the concentration of substrate decreases at a slower rate than that predicted by the enzyme-substrate reaction rate equation. It is often difficult or impossible to determine exactly the model equations for the enzyme deactivation process because there are many factors involved. An example of such a process is the enzymatic hydrolysis of cellulose to cellobiose and glucose [17], [18]:

    cellulose → cellobiose → glucose.
An example of such a process is the enzymatic hydrolysis ofcellulose, cellobiose, and glucose [17], [18]cellulose —> cellobiose —> glucoseAcquisition of data for the key variables and their control for the abovereactions are often impeded by the lack of reliable on-line measurements of thesubstrate and product concentrations and the difficulties in properly modellingbiological activity in processes that are time-varying. One approach to solvingthis problem involves the development of state estimation techniques that useavailable measurements to reconstruct the evolution of the state variables (i.e.,the variables of the rate equation) or to estimate the parameters of the rateequation.Enzymatic deactivation in the enzymatic hydrolysis of pretreatedcellulose was studied by G. Caminal et al. [15] using the extended Kalmanfilter (EKF) on a model by G. Caminal et al. [16]. Traditionally, the methodsused most often to study thermal deactivation of enzymes require anevaulation of activity losses by incubating the enzyme without a substrate fordifferent periods of time and then testing its activity [19]. This methodologyrequires that a specific set of experiments be carried out; it does not accountfor the possible effects of the substrate on the enzyme's stability. The power of46the EKF is that even if we do not know the exact equation modelling theenzyme deactivation process, we can still apply the rate equation for thesubstrate obtained from a quasi-steady-state approximation. All otherparameters are treated as constants. If we can measure the substrateconcentration, then the values of all the other parameters can be predicted bythe EKF during the same time period that the measurements were taken.Application of the EKF to identify and track the slowly varyingparameters of a multiple enzyme system for pretreated cellulose has thedifficulty of being able to choose the system model error covariance matrixappropriately. 
To study the application of the EKF to an enzyme deactivationproblem, a model problem is set up which simulates a single enzymedeactivation in which the parameter that represents the activity of the enzymeis slowly varying.The EKF algorithms is developed in Section 3.3. In the following, themodels consist of coupled nonlinear differential equations. The EKF algorithmis applied to Euler discretizations of these models.4.1. Reactions and Kinetic EquationsTo simplify the mathematics, we employ a single enzyme reaction in which asubstrate A is turned into product P.A+E^C  k2 >P+Ewhere A is the substrate, E is the enzyme, P is the product, C is theintermediate enzyme-substrate complex, k_i, k 1, k2 are the rate constants. Weuse the same notation, i.e., A, E, P. C, for the concentrations of thesequantities. The differential equations for the rate of change of concentration of47each element in the above chemical reaction can be derived by applying themass-action law to obtiandAdt^lc, A E + k_, C,dEdt^kiAE+k ,C+Ic,C,dCdt^k2C+k1AE—k1C,dPdt = k„ C .(4.1)If we apply the quasi-steady-state approximation to the exact equation(equivalent to setting dC/dt to zero ), the exact equations can be simplified todA r A dt^K. + A '(4.2)dP dAdt^dtwhere I-. k,, E0, Km (k2 + k_1)/k„ and Eo = total initial enzyme concentration.We simulate the enzyme deactivation by allowing rm to decrease slowlythrough an exponential law, i.e., rm is modelled bydr.^= —6^ (4.3)dtwhere 6 is an artificial small parameter that controls the rate of deactivation.In conclusion, we simulate the enzyme deactivation process by thefollowing system of differential equationsdA _^I-. Adt^K+ A'dr^ = —6 rmdtWe will refer to (4.4) as the exact model.(4.4)484.2. 
4.2. On-Line Estimation of Enzyme Activity Using the Extended Kalman Filter

To apply the extended Kalman filter to monitor the activity of the enzyme, which is governed by (4.4), we have to specify a system model which is a good approximation of (4.4), and then generate measurements which represent the observations of our artificial enzyme system. The measured quantity in this simulation is the substrate concentration, A.

4.2.1. System Model

In a realistic problem, it is often not possible to obtain an exact mathematical model that describes the physical phenomenon. Therefore, we have to employ a system model which is an approximation to the (real) system that generates the observations. Sometimes, such an approximation is intentional. For example, it may be desirable to employ a system model of lower dimension than the real system in order to gain computational speed and simplicity. In this study, the system model is given by

dA/dt = -rm A / (Km + A),
drm/dt = 0,                                                       (4.5)
dKm/dt = 0.

4.2.2. Noiseless Measurements

4.2.2.1. Generation of Measured Data

The measured values of substrate concentration, A, are obtained by solving (4.4) using a 4th-order Runge-Kutta method with initial conditions A(0) = 3, rm(0) = 10, Km(0) = 100, and parameter ε = 0.02. The result is shown in Figure 1.

Fig. 1 Noiseless measurements of concentration of substrate A.

4.2.2.2. Numerical Results

To apply the EKF we also have to specify the error covariance matrix of the system model, Q, the error covariance matrix of the measurement model, R, and the initial state error covariance matrix, P0. Using the guidelines discussed in Chapter 3, one choice of Q, R, P0 obtained by trial-and-error is

Q = diag(10⁻⁷, 8 × 10⁻², 10⁻⁹),  R = [10⁻¹⁰],  P0 = diag(10⁻⁸, 10⁻⁸, 10⁻⁸).

The results of running the EKF on the above choices of error covariance matrices are shown in Figures 2 and 3.
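The estimation scheme just described can be sketched as a discrete EKF on an Euler discretization of (4.5), with the "truth" generated from the exact model (4.4). This is a minimal sketch, not the thesis's implementation, and the covariance values below are illustrative rather than the settings quoted above:

```python
import numpy as np

# Discrete EKF for the system model (4.5): state (A, rm, Km), with rm and
# Km modelled as constants; only A is measured.  The truth follows the
# exact model (4.4), in which rm actually decays.
dt, nsteps = 0.01, 1000

def f(x):
    """One Euler step of the system model (4.5)."""
    A, rm, Km = x
    return np.array([A - dt * rm * A / (Km + A), rm, Km])

def F(x):
    """Jacobian of f with respect to the state."""
    A, rm, Km = x
    d = Km + A
    return np.array([
        [1.0 - dt * rm * Km / d**2, -dt * A / d, dt * rm * A / d**2],
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 1.0],
    ])

H = np.array([[1.0, 0.0, 0.0]])          # we measure A only
Q = np.diag([1e-7, 8e-3, 1e-9])          # inflated (rm, rm) entry
R = np.array([[1e-6]])
x = np.array([3.0, 10.0, 100.0])         # initial estimate
P = np.diag([1e-8, 1e-8, 1e-8])

A_true, rm_true = 3.0, 10.0              # exact model (4.4), Euler steps
for _ in range(nsteps):
    A_true += dt * (-rm_true * A_true / (100.0 + A_true))
    rm_true += dt * (-0.02 * rm_true)
    # predict
    Phi = F(x)
    x = f(x)
    P = Phi @ P @ Phi.T + Q
    # update with a noiseless measurement of A
    innov = A_true - x[0]
    S = (H @ P @ H.T + R)[0, 0]
    K = (P @ H.T).ravel() / S
    x = x + K * innov
    P = (np.eye(3) - np.outer(K, H)) @ P
```

Although the system model holds rm constant, the inflated (rm, rm) entry of Q lets the filter follow the decaying rm through the cross-covariance built up by the Jacobian.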
Since the estimated values of A and rm are so close to the exact values, it is difficult to observe the difference in a graph. Therefore, the errors in the estimated values of A and rm from the exact values are plotted.

Fig. 2 Error in estimated values of A from exact measurement of A. A_err = estimated value of A − exact value of A.

Fig. 3 Error in estimated values of rm. rm_err = estimated value of rm − exact value of rm.

In conclusion, the behavior of rm is being tracked correctly using the EKF with suitable choices of Q, R, and P0. This tracking occurs in spite of the fact that the rm predicted by the system model is not varying with time and that the real rm is exponentially decreasing.

4.2.3. Small Noise Measurements

4.2.3.1. Generation of Measured Data

As in the previous case of noiseless measurements, the state that can be measured is the concentration of substrate A. However, exact measurements cannot be obtained in a realistic bioreactor. Thus, the measurements are simulated by adding white noise of zero mean and standard deviation 0.005 to the measurements in Section 4.2.2.1 (see Figure 4).

Fig. 4 Small noise measurements of concentration of substrate A.

4.2.3.2. Numerical Results

Using the guidelines discussed in Chapter 3, choices of the model error covariance matrix Q, initial state error covariance matrix P0, and measurement error covariance matrix R are

Q = diag(10⁻⁷, 4 × 10⁻², 10⁻⁹),  R = [0.005²],  P0 = diag(10⁻⁸, 10⁻⁸, 10⁻⁸).

The results of running the EKF on the small noise measurements using the above choices of error covariance matrices are shown in Figures 5 and 6. Again the estimated A is so close to the exact values that it is difficult to observe these differences. Hence, the errors in the estimated values of A from the exact values are plotted.

Fig. 5 Error in estimated A using small noise measurements. A_err = estimated value of A − exact value of A.
Fig. 6 Estimated rm and exact rm in small noise measurement.

As expected, the error in the estimated concentration of substrate A is quite small. The average absolute percentage error of the estimated concentration of substrate A is 0.083%. On the other hand, the average absolute percentage error of the estimated rm is 1.92%.

4.2.4. Large Noise Measurements

4.2.4.1. Generation of Measured Data

As in Section 4.2.3.1, the measurements are simulated by adding white noise of zero mean and standard deviation 0.05 to the measurements obtained by exact solution of the exact model (see Figure 7).

Fig. 7 Large noise measurements of concentration of substrate A.

4.2.4.2. Numerical Results

By trial-and-error, choices of the model error covariance matrix Q, initial state error covariance matrix P0, and measurement error covariance matrix R are found to be

Q = diag(10⁻⁷, 8 × 10⁻², 10⁻⁹),  R = [0.05²],  P0 = diag(10⁻⁸, 10⁻⁸, 10⁻⁸).

The results of running the EKF for the above choices of error covariance matrices are shown in Figures 8 and 9. For this case, the errors in the estimated concentration of substrate A are large enough to make the difference between the estimated and the exact concentration of substrate A observable.

Fig. 8 Estimated A and exact A in large noise measurement.

Fig. 9 Estimated rm and exact rm in large noise measurement.

The average absolute percentage error of the estimated concentration of substrate A is 0.56%. On the other hand, the average absolute percentage error of the estimated rm is 4.69%.

4.2.5. The Choice of Model Error Covariance Matrix

So far there is no systematic method to determine the model error covariance matrix. However, experience shows that the components of the model error covariance matrix corresponding to a state variable which is governed by a model equation with larger error have to be relatively larger.
In the example of enzyme deactivation, the uncertainty in the model equation arises in the state variable rm. In this section, we examine the change in the estimated rm when a model error covariance matrix with a different (rm, rm) component, i.e., second diagonal element, is employed.

Before doing the numerical computation, we need to introduce two quantities which indicate the accuracy of the estimated rm.

Definition: The absolute percentage error, Abs%, is defined by

Abs% = (100/N) × Σ_{n=1}^{N} | (nth estimated value − nth exact value) / (nth exact value) |,

where N is the total number of measurements. The relative percentage error, Rel%, is defined by

Rel% = (100/N) × Σ_{n=1}^{N} (nth estimated value − nth exact value) / (nth exact value).

Roughly speaking, the absolute percentage error measures the amplitude of the fluctuation in the estimated values of a state variable. For example, a large absolute percentage error means that the amplitude of the fluctuation in the estimation is large. The relative percentage error can be positive or negative, and, roughly speaking, it determines the best fitted curve of the estimated values.

The absolute percentage error and relative percentage error of the estimated rm are calculated with the following sequence of model error covariance matrices, Qn,

Qn = diag(4 × 10⁻⁵, covariance, 10⁻⁹),

where covariance = 20 × n × 10⁻². With this choice of the model error covariance matrix, we again examine the cases of noiseless measurement, small noise measurement, and large noise measurement.

4.2.5.1. Noiseless Measurement

The sequence of model error covariance matrices for the EKF is specified above, and the measurement error covariance matrix, R, and initial state error covariance matrix, P0, are given by

R = [10⁻¹⁰],  P0 = diag(10⁻⁸, 10⁻⁸, 10⁻⁸).

Figures 10 and 11 show the dependence of the absolute percentage error and relative percentage error on covariance defined in Section 4.2.5.
Fig. 10 Absolute percentage error of estimated rm as a function of covariance.

Fig. 11 Relative percentage error of estimated rm as a function of covariance.

4.2.5.2. Small Noise Measurement

For the small noise measurement case, the measurement error covariance matrix, R, and initial state error covariance matrix, P0, are given by

R = [0.005²],  P0 = diag(10⁻⁸, 10⁻⁸, 10⁻⁸).

Figures 12 and 13 show the dependence of the relative percentage error and absolute percentage error on covariance.

Fig. 12 Absolute percentage error of estimated rm as a function of covariance.

Fig. 13 Relative percentage error of estimated rm as a function of covariance.

4.2.5.3. Large Noise Measurement

For the large noise measurement case, the measurement error covariance matrix, R, and initial state error covariance matrix, P0, are chosen to be

R = [0.05²],  P0 = diag(10⁻⁸, 10⁻⁸, 10⁻⁸).

Figures 14 and 15 show the dependence of the relative percentage error and absolute percentage error on covariance.

Fig. 14 Absolute percentage error of estimated rm as a function of covariance.

Fig. 15 Relative percentage error of estimated rm as a function of covariance.

4.2.5.4. Conclusion

From the above figures, we discover that as covariance increases, the absolute percentage error always increases. This is indicative of the fact that a larger uncertainty in the system model leads to a larger uncertainty in our estimation of rm. However, the relative percentage error decreases from positive values to negative values as covariance increases. This result suggests that the value of the covariance at which the relative percentage error is zero would be a good choice.
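The two error measures defined in this section translate directly into code. A small sketch, where `est` and `exact` are arrays of estimated and exact values of a state variable:

```python
import numpy as np

# Absolute (Abs%) and relative (Rel%) percentage errors of Section 4.2.5.
def abs_pct_error(est, exact):
    return 100.0 * np.mean(np.abs((est - exact) / exact))

def rel_pct_error(est, exact):
    return 100.0 * np.mean((est - exact) / exact)

exact = np.array([10.0, 10.0, 10.0, 10.0])
est = np.array([10.1, 9.9, 10.1, 9.9])
# a symmetric +/-1% fluctuation: Abs% = 1, Rel% = 0
```

The example illustrates the distinction drawn in the text: a pure fluctuation shows up in Abs% but cancels out of Rel%.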
For example, for the large noise measurement case, the relative percentage error of the estimated rm is zero when the covariance is about 2.25, as shown in Figure 15, and Figure 14 indicates that the absolute percentage error at this value is 17%. To reduce the absolute percentage error of the estimated rm, we can adopt a more sophisticated filter algorithm such as the J-estimator invented by Jazwinski in his 1974 paper (see [20]). However, a simpler way is to sacrifice the relative percentage error. From Figure 14, when the covariance decreases to about 1, the relative percentage error of the estimated rm increases to 1%, and the absolute percentage error of the estimated rm drops to about 11%.

Unfortunately, this method cannot be applied to a realistic enzyme system because the exact values of certain state variables are not available. However, this calculation shows that a good choice of the model error covariance matrix exists.

4.3. Iteration of the Extended Kalman Filter

In an off-line analysis, we can apply the EKF again to the estimated state obtained by the application of the EKF to a set of measurements. Unfortunately, if the model error covariance matrix for the second application of the EKF is chosen to be the same as for the first EKF, then the relative percentage error and the absolute percentage error of rm become larger than those obtained from the first EKF. On the other hand, if the model error covariance matrix of the second EKF is chosen to be smaller than that of the first EKF, then the relative percentage error of rm becomes larger. As a result, it is not practical to iterate applications of the EKF on the output obtained from the EKF.

However, a regression analysis can be done on the set of estimated values of rm which is obtained from the first EKF.
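This regression step can be sketched as follows. Synthetic rm "estimates" (a noisy exponential decay) stand in for actual first-pass EKF output, and the decay rate and noise level are illustrative:

```python
import numpy as np

# Fit a regression curve to the rm estimates from a first EKF pass and use
# its derivative as the model equation for rm in a second pass.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 101)
rm_est = 10.0 * np.exp(-0.02 * t) + rng.normal(0.0, 0.05, t.size)

beta1, beta0 = np.polyfit(t, rm_est, 1)   # straight line rm ~ beta0 + beta1 * t

def drm_dt(_t):
    """Modified model equation for rm used in the second EKF pass."""
    return beta1
```

The fitted slope `beta1` is negative, reflecting the underlying decay, and replaces the drm/dt = 0 equation of the original system model.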
For example, if rm is given in terms of a quadratic function of time, e.g., rm = α + β t + γ t², where α, β, γ are constants determined by the estimated values of rm, then the modified model equation for rm is

drm/dt = β + 2γ t.

Hence, the modified system model is

dA/dt = -rm A / (Km + A),
drm/dt = β + 2γ t,
dKm/dt = 0.

Now we can apply the EKF to this modified system model. The iteration procedure can be summarized schematically as follows:

measurements + system model → extended Kalman filter → estimated state → regression → modified system model;
measurements + modified system model → extended Kalman filter → estimated state.

4.3.1. Numerical Example

We apply the above iteration procedure to the case of the large noise measurements, since in that case the estimated rm has both the larger absolute percentage error and the larger relative percentage error. First, an estimation by the EKF based on the system model (4.5) is performed. Having obtained a set of estimated values of rm, a linear regression analysis is performed, and the best straight line fit is calculated to be

rm = 10.1388 − 0.1799 t.

Hence, the model equation of rm is changed to

drm/dt = −0.1799.

Consequently, the system model (4.5) is changed to

dA/dt = -rm A / (Km + A),
drm/dt = −0.1799,                                                 (4.6)
dKm/dt = 0.

Since the system model has been changed, we have to modify the model error covariance matrix. The modified model error covariance matrix, Q, is

Q = diag(4 × 10⁻⁵, 80 × 10⁻⁴, 10⁻⁹).

The absolute percentage error and the relative percentage error of the estimated values of rm based on system model (4.6) are improved tremendously. Figure 16 is a plot of the estimated values of rm for the original system model (4.5) and the modified system model (4.6). The following table summarizes the performance of the first and second iterations of the EKF in terms of the absolute percentage error and the relative percentage error.
Fig. 16 The estimated values of rm for the original system model (4.5) and the modified system model (4.6).

Table 1 Absolute and relative percentage errors of estimated A and rm.

        1st Iteration                    2nd Iteration
        Abs% Error    Rel% Error        Abs% Error    Rel% Error
A       0.6954%       0.3162%           0.5034%       0.2711%
rm      5.8456%       1.9369%           0.9008%       0.3168%

5. Separation of Enantiomers

If a molecule is not superimposable on its mirror image, then it is chiral, and the molecule and its mirror image are called enantiomers. Living organisms often show different biological responses to one of a pair of enantiomers in drugs, pesticides, or waste compounds. For example, sodium L-(+)-glutamate tastes delicious in food, whereas its mirror image D-(−)-glutamate tastes bitter or flat. Furthermore, there are a number of cases where one isomer is largely responsible for the desirable biological action while the other predominates in the toxic or undesirable action. Therefore, industry develops methods of separating enantiomers. In this section, we are mainly concerned with the enzymatic technique of separation and the application of the EKF to monitor the process of separating enantiomers.

5.1. Enzyme-Substrate Reaction Rate Equation

Consider the following coupled enzyme-substrate chemical reactions:

A + E ⇌ C1 → E + P1,
B + E ⇌ C2 → E + P2,

where A and B are the substrates, E is the enzyme, P1 and P2 are the products, C1 and C2 are the intermediate enzyme-substrate complexes, and k1, k-1, k2 (for the A reaction) and k'1, k'-1, k'2 (for the B reaction) are the rate constants.

If we apply the quasi-steady-state assumption, the following rate equations are obtained [3] (cf. Chapter 2):
dA/dt = − VmA KmB A / (KmB A + KmA B + KmA KmB),
                                                                  (5.1)
dB/dt = − VmB KmA B / (KmB A + KmA B + KmA KmB),

where A and B are the concentrations of the substrates A and B, respectively, and the parameters of this first-order system of differential equations are defined by

KmA = (k-1 + k2)/k1,
KmB = (k'-1 + k'2)/k'1,
VmA = E0 k2,
VmB = E0 k'2,

where E0 is the initial concentration of the enzyme.

A numerical solution of the coupled first-order differential equations, obtained using the 4th-order Runge-Kutta scheme, is plotted in Figure 17 for the following initial conditions and parameter values:

A0 = 0.05, B0 = 0.05, VmA = 100, VmB = 5, KmA = 0.001, and KmB = 0.0025.

Fig. 17 Numerical solution of the rate equations for the enantiomers.

5.2. Enantiomeric Ratio

If the rate equation of substrate A is divided by the rate equation of substrate B, then the denominator cancels. A first-order ordinary differential equation for A as a function of B is obtained:

dA/dB = (VmA KmB A) / (VmB KmA B).                                (5.2)

An exact solution is given by

ln A − ln A0 = (VmA KmB / (VmB KmA)) (ln B − ln B0),              (5.3)

where A0 and B0 are the values of A and B at time equal to zero. The solution can be rearranged as

E = ln(A/A0) / ln(B/B0) = (VmA KmB) / (VmB KmA),                  (5.4)

where E is called the enantiomeric ratio and is a constant made up of the parameters in the rate equations. The enantiomeric ratio, E, is a measure of the relative rates of the two competing reactions. The fact that E = constant makes the enantiomeric ratio useful for comparing the effectiveness of different enzyme systems for separating substrates A and B.

Moreover, A can be solved in terms of B, E, A0, and B0 as

A = A0 (B/B0)^E,                                                  (5.5)

and B can be solved in terms of A, E, A0, and B0 as

B = B0 (A/A0)^(1/E).                                              (5.6)
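Relations (5.4) through (5.6) transcribe directly into code. A short sketch with the parameter values of Section 5.1, for which E = 50:

```python
import numpy as np

# Enantiomeric ratio (5.4) and the closed forms (5.5)-(5.6).
VmA, VmB, KmA, KmB = 100.0, 5.0, 0.001, 0.0025
E = (VmA * KmB) / (VmB * KmA)           # = 50 for these parameters

A0, B0 = 0.05, 0.05

def A_of_B(B):
    """Equation (5.5): A as a function of B along the reaction path."""
    return A0 * (B / B0) ** E

def B_of_A(A):
    """Equation (5.6): the inverse relation."""
    return B0 * (A / A0) ** (1.0 / E)
```

The two functions are inverses of each other, and the ratio ln(A/A0)/ln(B/B0) stays fixed at E along the path, which is the invariance (5.4) exploits.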
5.3. Conversion

The conversion, C, is a measure of the fraction of substrate consumed in a chemical reaction and is defined by

C = 1 − (A + B)/(A0 + B0).                                        (5.7)

If we know the enantiomeric ratio and the conversion, we can predict the concentrations of A and B by solving the following simultaneous algebraic equations:

A + B = (1 − C)(A0 + B0),
                                                                  (5.8)
A = A0 (B/B0)^E.

However, in general, since the second equation has the exponent E, an analytical solution cannot be obtained.

5.4. Perturbation Analysis

The graph of the solution of the enzyme reaction equations indicates that the decrease in concentration of substrate B is relatively small compared to the decrease in concentration of substrate A. As a result, the concentration of substrate B can be considered constant to a leading-order approximation. We show this by application of a perturbation method. First, we rescale the enzyme-substrate reaction rate equations and express them in dimensionless form. Define the following dimensionless variables:

Ā = A/KmA,  B̄ = B/KmB,  τ = (VmA/KmA) t.                          (5.9)

If the rate equations are expressed in terms of these dimensionless variables, they become

dĀ/dτ = − Ā/(Ā + B̄ + 1),
                                                                  (5.10)
dB̄/dτ = − ε B̄/(Ā + B̄ + 1),

where ε = 1/E.

The lower limit of the enantiomeric ratio is usually between 25 and 50, so ε is between 0.02 and 0.04. Therefore, we can apply a perturbation method to (5.10). The leading-order approximation is obtained by setting ε equal to 0 and solving the resulting system of first-order differential equations with suitable initial conditions. The leading-order equations become

dĀ/dτ = − Ā/(Ā + B̄ + 1),  dB̄/dτ = 0.                              (5.11)

The initial condition at τ = τk is Ā = Āk, B̄ = B̄k. The leading-order approximation is

B̄ = B̄k,
(Ā − Āk) + (1 + B̄k) ln(Ā/Āk) = −(τ − τk).                          (5.12)

Instead of setting B̄ = B̄k, we can use (5.6), i.e., B̄ = B̄k (Ā/Āk)^ε; then the following solution is obtained:

B̄ = B̄k (Ā/Āk)^ε,
(Ā − Āk) + (B̄k/ε) ((Ā/Āk)^ε − 1) + ln(Ā/Āk) = −(τ − τk).           (5.13)

In Figures 18 and 19 are plotted A and B as functions of time for both the perturbation solution (5.13), in terms of the original variables A, B, t, and the numerical solution obtained from the 4th-order Runge-Kutta scheme with the initial conditions t = 0, A0 = 0.05, B0 = 0.05.
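The near-constancy of B̄ that underlies the leading-order approximation can be checked numerically. The sketch below integrates the full dimensionless system (5.10) with ε = 0.02 and compares it with the ε = 0 system in which B̄ is frozen; the initial values are those of Section 5.1 after nondimensionalization (A0/KmA = 50, B0/KmB = 20), while the integration length and step count are arbitrary:

```python
import numpy as np

# Full dimensionless system (5.10) vs. its leading-order (frozen B-bar) form.
eps, Ab0, Bb0 = 0.02, 50.0, 20.0

def rhs_full(y):
    Ab, Bb = y
    d = Ab + Bb + 1.0
    return np.array([-Ab / d, -eps * Bb / d])

def rhs_lead(y):
    Ab, Bb = y
    d = Ab + Bb0 + 1.0            # leading order: B-bar frozen at Bb0
    return np.array([-Ab / d, 0.0])

def rk4(rhs, y0, T=20.0, n=2000):
    """4th-order Runge-Kutta integration to time T."""
    h, y = T / n, np.array(y0)
    for _ in range(n):
        k1 = rhs(y)
        k2 = rhs(y + h / 2 * k1)
        k3 = rhs(y + h / 2 * k2)
        k4 = rhs(y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

full = rk4(rhs_full, [Ab0, Bb0])
lead = rk4(rhs_lead, [Ab0, Bb0])
```

For ε this small, the two Ā trajectories agree closely and B̄ barely moves, consistent with the figures.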
Note that there is excellent agreement between the perturbation and numerical solutions.

Fig. 18 Exact and perturbation solutions of A.

Fig. 19 Exact and perturbation solutions of B.

5.5. Construction of Difference Equations From the Leading-Order Perturbation Solution

Having obtained the leading-order perturbation solution of the dimensionless enzyme-substrate rate equations, we can employ it to construct the corresponding difference equations, namely the forward and backward propagating equations. The importance of the forward propagating equations is that they are discrete equations and we can use them to describe the dynamics of the state variables in the EKF algorithms. The importance of the backward propagating equations is that they can be used to derive an exact filter, as shown in [12] (see Chapter 2 as well).

5.5.1. Forward Propagating Equations

We apply (5.13) to find the values of Ā and B̄ at τ = τk+1. The result is a set of coupled difference equations,

B̄k+1 = B̄k (Āk+1/Āk)^ε,
                                                                  (5.14)
(Āk+1 − Āk) + (1 + B̄k) ln(Āk+1/Āk) = −(τk+1 − τk).

If τk+1 − τk is small, then Āk+1/Āk and B̄k+1/B̄k will be close to, but less than, 1. Hence, both (1 − Āk+1/Āk) and (1 − B̄k+1/B̄k) are small and positive. Since ln(Āk+1/Āk) and ln(B̄k+1/B̄k) can be rewritten as ln(1 − (1 − Āk+1/Āk)) and ln(1 − (1 − B̄k+1/B̄k)), respectively, their first-order Taylor expansions are given by

ln(Āk+1/Āk) ≈ −1 + Āk+1/Āk,
ln(B̄k+1/B̄k) ≈ −1 + B̄k+1/B̄k.

From the above two equations, (5.14) reduces to

Āk+1 = Āk − (τk+1 − τk) Āk/(Āk + B̄k + 1),
                                                                  (5.15)
B̄k+1 = B̄k − ε (τk+1 − τk) B̄k/(Āk + B̄k + 1).

Note that (5.15) is equivalent to an Euler discretization of (5.10), and is called the forward propagating equations.

When (τk+1 − τk) is small, the numerical solutions of the above difference equations are very close to the numerical solutions obtained using the 4th-order Runge-Kutta scheme (see Figures 20 and 21).

Fig. 20 Numerical solutions of substrate A from the forward propagating equations and the 4th-order Runge-Kutta scheme.
Fig. 21 Numerical solutions of substrate B from the forward propagating equations and the 4th-order Runge-Kutta scheme.

5.5.2. Backward Propagating Equations

Āk+1 and B̄k+1 are expressed in terms of Āk and B̄k in the forward propagating equations. Equations expressing Āk and B̄k in terms of Āk+1 and B̄k+1, on the other hand, are the backward propagating equations. To find the backward propagating equations, we can treat the forward propagating equations as simultaneous algebraic equations and consider Āk and B̄k as the unknowns. However, the algebraic equations may not have an analytic solution if they are nonlinear in Āk and B̄k, so we have to find an approximate solution of the simultaneous algebraic equations. A general method is discussed in Appendix 2, but in this section we will employ the perturbation solution of the dimensionless enzyme-substrate rate equations to derive the backward propagating equations. It turns out that the equations obtained using these two approaches are the same.

We apply (5.13) to determine the values of Ā and B̄ at τ = τk−1, and rename the subscripts k and (k−1) as (k+1) and k, respectively. Then the following system of coupled difference equations is obtained:

B̄k+1 = B̄k (Āk+1/Āk)^ε,
                                                                  (5.16)
(Āk − Āk+1) + (1 + B̄k+1) ln(Āk/Āk+1) = −(τk − τk+1).

If τk+1 − τk is small, then Āk/Āk+1 and B̄k/B̄k+1 will be close to, but larger than, 1. Hence, (Āk/Āk+1 − 1) and (B̄k/B̄k+1 − 1) are small and positive. Since both ln(Āk/Āk+1) and ln(B̄k/B̄k+1) can be written as ln(1 + (Āk/Āk+1 − 1)) and ln(1 + (B̄k/B̄k+1 − 1)), respectively, their first-order Taylor expansions are given by

ln(Āk/Āk+1) ≈ −1 + Āk/Āk+1,
ln(B̄k/B̄k+1) ≈ −1 + B̄k/B̄k+1.

From the above two equations, (5.16) reduces to

Āk = Āk+1 + (τk+1 − τk) Āk+1/(Āk+1 + B̄k+1 + 1),
                                                                  (5.17)
B̄k = B̄k+1 + ε (τk+1 − τk) B̄k+1/(Āk+1 + B̄k+1 + 1).

Equation (5.17) is called the backward propagating equations.

To determine the initial conditions for the backward propagating equations, we solve the forward propagating equations (the results were shown in Figures 20 and 21).
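The forward sweep (5.15) and backward sweep (5.17) can be sketched as follows, together with the small mismatch between them that arises because the sweeps are not exact inverses of each other. The step size, step count, and initial values (the dimensionless A0/KmA = 50 and B0/KmB = 20 of Section 5.1) are illustrative:

```python
# Forward propagating equations (5.15) followed by the backward
# propagating equations (5.17), applied to the same trajectory.
eps, dt, n = 0.02, 0.01, 1000
Ab, Bb = 50.0, 20.0
start = (Ab, Bb)

for _ in range(n):                      # forward sweep, eq. (5.15)
    d = Ab + Bb + 1.0
    Ab, Bb = Ab - dt * Ab / d, Bb - dt * eps * Bb / d
end = (Ab, Bb)

for _ in range(n):                      # backward sweep, eq. (5.17)
    d = Ab + Bb + 1.0
    Ab, Bb = Ab + dt * Ab / d, Bb + dt * eps * Bb / d

gap_A = abs(Ab - start[0])              # small but nonzero: the sweeps
gap_B = abs(Bb - start[1])              # are not exact inverses
```

Running forward and then backward does not return exactly to the starting point; the residual gap is the per-step O(Δt²) inversion error accumulated over the sweep.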
This determines the values of Ā and B̄ at the initial point for the backward propagating equations. The results are shown in Figures 22 and 23.

Fig. 22 Numerical solutions of the forward and backward propagating equations for substrate A.

Fig. 23 Numerical solutions of the forward and backward propagating equations for substrate B.

We observe a small gap at the beginning of Figures 22 and 23, which occurs because the backward propagating equations are not the exact inverses of the forward propagating equations. The error obtained in the general theory in Appendix 2 is O(Δt²).

5.6. On-Line Estimation of Substrate Concentration by the Extended Kalman Filter

5.6.1. Defining the Problem

In a realistic chemical reaction separating enantiomers, we have to determine the individual concentrations of the enantiomers so that we can monitor the relative purity of the product and decide when to stop the reaction. However, it is very difficult to measure the concentration of an individual substrate, and it is relatively easier and much cheaper to measure the total concentration of the enantiomers. Thus, the problem is to estimate the individual concentration of each enantiomer from the sum of the concentrations of the enantiomers. To study the problem in more detail, we simulate measured data and apply the EKF technique to filter the measurement noise and estimate the concentration of each substrate.

5.6.2. Generation of Measured Data

As discussed in the last section, the quantity that can be measured easily is the sum of the concentrations of substrates A and B. For the purpose of this study, we generate the measured data by solving the enzyme-substrate rate equations (5.1) using the 4th-order Runge-Kutta scheme with initial conditions A0 = 0.05, B0 = 0.05 and parameter values VmA = 100, KmA = 0.001, VmB = 5, KmB = 0.0025.
(With this set of parameter values, the enantiomeric ratio, E, is equal to 50.)

Since a set of exact measurements cannot be obtained in a realistic reactor, we simulate the measurement noise with white noise. The white noise assumption is only an approximation; however, the Central Limit Theorem states that the limiting distribution of a sequence of random processes tends to a Gaussian distribution, which motivates the Gaussian distribution in the white noise assumption. The Gaussian distribution employed in this simulation has zero mean and standard deviation 0.001.

5.6.3. Numerical Results

To apply the EKF, we have to specify the dynamical model of the state variables, the measurement model, and the error covariance matrices of the system model and the measurement model. The system model is given by (5.1). The measurement model is determined by the algebraic relationship between the state variables and the quantities which we measure. Mathematically, the measurement model for this simulation is

S = A + B,

where S is the sum of the concentrations of the substrate enantiomers A and B.

So far there is no systematic method to determine the error covariance matrices of the dynamical model and the measurement model; all we can say is that the larger the error in the model, the larger the entries in the covariance matrix should be. The determination of good choices depends on experience and on using a trial-and-error method.

Choices of the model error covariance matrix, Q, the measurement error covariance matrix, R, and the initial state error covariance matrix, P0, are

Q = diag(1.6 × 10⁻⁹, 8.5 × 10⁻¹¹),  R = [0.001²],  P0 = diag(10⁻¹⁰, 10⁻¹²).

The results of running the EKF, for the above choices of covariance matrices, are shown in Figures 24, 25, and 26.

Fig. 24 Enantiomeric ratio calculated from the estimated concentrations of substrates A and B.
Fig. 25 Exact and estimated concentrations of substrate A.

Fig. 26 Exact and estimated concentrations of substrate B.

5.6.4. Filter Divergence

The estimated values of the substrate concentrations are extremely good compared to the exact values of the substrate concentrations obtained by solving the rate equations with the 4th-order Runge-Kutta scheme. However, the statistics obtained from the EKF are inconsistent with the actual statistics when the residues of A and B are plotted (the residue of a state variable is the difference between the exact value and the estimated value of the state variable).

Fig. 27 Residue of A and predicted standard deviation from the EKF.

Fig. 28 Residue of B and predicted standard deviation from the EKF.

In fact, as shown in Figure 27, a significant portion of the residues of substrate A lie outside the two curves representing the standard deviation. This causes the gain matrix of the filter to become too small to sufficiently amplify the subsequent data to correct the values of the state predicted by the system model. If the system model is not a good approximation, this can lead to a divergence between the subsequently estimated state and the measurements. In the next section, we will show how to correct this problem.

5.6.5. Correction of Filter Divergence

A number of approaches to deal with the divergence problem have been suggested. These approaches are useful when there is an inconsistency between the actual statistics and the predicted statistics from the EKF.
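Such a consistency check can be sketched as follows; synthetic residues stand in for an actual filter run, and the flagging threshold is illustrative:

```python
import numpy as np

# Compare filter residues (exact minus estimated) with the standard
# deviation the filter reports, and flag the run as inconsistent when far
# too many residues fall outside one reported sigma.
rng = np.random.default_rng(1)
residues = rng.normal(0.0, 3.0, 500)    # actual errors: std 3
reported_std = np.ones(500)             # filter claims std 1: overconfident

frac_outside = np.mean(np.abs(residues) > reported_std)
# a consistent filter leaves about 32% of residues outside one sigma
inconsistent = frac_outside > 0.32
```

Here roughly three quarters of the residues fall outside the reported band, signalling exactly the kind of overconfident, diverging filter described above.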
One of the simplest approaches is to increase the model error covariance (for a motivation of this method, see the next section). Roughly speaking, the increased model error covariance keeps the gain matrix away from zero, so that the subsequent measured data will not be ignored.

Therefore, the model error covariance matrix, Q, is increased to

Q = diag(4 × 10⁻², 9 × 10⁻⁴),

and we recalculate the estimated concentrations of the substrates A and B with this new model error covariance matrix. Then we calculate the enantiomeric ratio and the conversion. The results are shown in Figures 29 to 33.

The statistics of substrate A have been improved, as shown in Figure 31, but, unfortunately, the accuracy of the enantiomeric ratio has to be sacrificed. However, the changes in the estimated values of the concentrations of substrates A and B are so small that the changes are not observed. The reason the enantiomeric ratio has changed is that E depends on ln(A/A0)/ln(B/B0), and a small change in B will produce a large change in E.

Fig. 29 Exact and estimated concentrations of substrate A.

Fig. 30 Exact and estimated concentrations of substrate B.

Fig. 31 Residue of substrate A and predicted standard deviation from the EKF.

Fig. 32 Residue of substrate B and predicted standard deviation from the EKF.

Fig. 33 Estimated enantiomeric ratio vs. conversion.
5.7. An Example of Filter Divergence

The fundamental problem in filter divergence is that the dynamics of the state are subject to error; if the EKF is constructed on the basis of an erroneous model, with an assumed system noise which is much smaller than the actual model error, then the gain matrix of the filter becomes very small after it has operated on many data, and the subsequent measurements are ignored. As a result, the filter follows the wrong state predicted by the dynamics of the system. We present an example from Schlee et al. [21] to illustrate this. A more fundamental illustration of the divergence problem is given by Fitzgerald [22].

5.7.1. Example

Consider the problem of estimating the altitude h of a space vehicle from altimeter data. Suppose that the filter is based on the assumption that the altitude of the space vehicle is constant, whereas, in fact, the space vehicle is climbing at a constant speed.

Define the following notation:

yk: measurement at instant tk.
hk: actual altitude of the space vehicle at instant tk.
ĥk: the altitude variable of the space vehicle in the dynamical model of the EKF at instant tk.
Pk/k: the updated estimated error covariance matrix at instant tk.

The measurement model is

yk = hk + vk,

where vk is the white noise at instant tk. The filter dynamical model is

ĥk+1 = ĥk.

The actual dynamical model is

hk+1 = hk + s.

Substituting the above equations into the EKF, a system of difference equations for ĥk/k and Pk/k is obtained:

ĥk/k = ĥk−1/k−1 + [Pk−1/k−1 / (Pk−1/k−1 + R)] (yk − ĥk−1/k−1),     (5.18)

Pk/k = Pk−1/k−1 − (Pk−1/k−1)² / (Pk−1/k−1 + R).                    (5.19)

Equation (5.19) has the solution

Pk/k = P0 R / (k P0 + R).                                          (5.20)

Substituting (5.20) into (5.18) yields

ĥk+1/k+1 = ĥk/k + [P0 / ((k+1) P0 + R)] (yk+1 − ĥk/k).             (5.21)

The actual estimation error, h̃k, is defined by

h̃k = hk − ĥk/k.

Thus, the difference equation for the actual estimation error is

h̃k+1 = [(k P0 + R) / ((k+1) P0 + R)] (h̃k + s) − [P0 / ((k+1) P0 + R)] vk+1.     (5.22)

The solution of (5.22) is

h̃k = [R / (k P0 + R)] h̃0 + {[k(k−1)/2] P0 + k R} s / (k P0 + R) − [P0 / (k P0 + R)] Σ_{j=1}^{k} vj.     (5.23)
Furthermore, the last term of(5.23)^goes to^zero^in the^mean square sense, i.e.,1.i.m.xn = x^if limE[Ix„ —n__00 = 0. However,^the second term of (5.23)involving the rate s is unbounded in k and goes to infinity as (k-1)/2. As aresult, the estimation error is unbounded and the filter is diverging. Finally, ifthe speed s is zero then the middle term vanishes and the filter becomes stable.5.8. Estimation by Exact Filter AlgorithmsWe now apply the exact filter algorithms developed in Chapter 3 to estimatethe substrate concentration from the rate equation for the enantiomer in onedimension. This will be compared to the results obtained by the extendedKalman filter when there is error in the initial value of the concentrations ofsubstrates A and B.5.8.1. Dynamical and Measurement Models in One DimensionSubstituting (5.5) into the rate equation (5.1) reduces the system of twocoupled first-order ordinary differential equations into a first-order ordinarydifferential equationdB ^Vm„ Km,„ Bdt^K ^+Km, B +Km, Km, •(5.24)To apply the discrete EKF, we have to discretize the rate equation (5.24). IfEuler's discretization scheme is employed, the following difference equation isobtainedVrni, K BkEtic+i = Bic At^Km, Ao(Bk/Bo)E +Km, B, + KmA Km, •(5.25)87However, according to Lemma 8 in Section 3.4, we need to solve Bk in termsof Bk+i in order to calculate the evolving conditional probability density. 
A leading-order approximation of the solution is given by the backward propagating equation

B_k = B_{k+1} + Δt V_mB K_mA B_{k+1} / [K_mB A_0 (B_{k+1}/B_0)^E + K_mA B_{k+1} + K_mA K_mB].   (5.26)

The measurement model is

S_k = A_k + B_k + v_k,

where v_k is white noise with standard deviation σ. Combining the above equation with (5.5) results in a measurement equation with one variable, B_k:

S_k = A_0 (B_k/B_0)^E + B_k + v_k.

From Theorem 1 in Section 3.1 and the above equation, the probability density of S_k conditioned on B_k is given by

p(S_k | B_k) = (1/√(2πσ²)) exp{ −[S_k − A_0 (B_k/B_0)^E − B_k]² / (2σ²) },

where A_0, B_0 are the initial concentrations of the substrates A and B, and σ is the standard deviation of the noise of the measurements.

5.8.2. Numerical Results

We employ the dynamical and measurement models derived in the last section and apply the exact filter algorithms (Lemmas 8 and 9 in Section 3.4) recursively to calculate the conditional probability density at the times when the measurements are taken. The estimated state of the dynamical model is the mean of the conditional probability density, which is calculated by numerical integration.

In conclusion, if there is an error in the initial value of the state, the estimated state obtained using the exact filter converges to the exact curve of the state at a faster rate than the estimated state obtained using the extended Kalman filter.

The following parameters were chosen for the exact filter:

Initial state (A_0, B_0) = (0.0502, 0.0502).
Initial state error covariance matrix = 0.01.
Covariance matrix of the noise = 0.001².

The estimated concentrations of substrate B obtained by the exact filter and the EKF are shown in Figure 34.

Fig. 34 The estimated concentrations of substrate B obtained by the exact filter and the EKF.

6. Discussion and Future Research

6.1. Discussion

The enzyme kinetic rate equations are developed in Chapter 2 with special emphasis on the quasi-steady-state assumption.
We demonstrate that the kinetic rate equations obtained using the quasi-steady-state assumption are the leading-order equations of a system of coupled singularly perturbed first-order differential equations. Thus, the kinetic equations contain error terms that may give some guidelines for choosing the order of magnitude of the model error covariance matrix in a real enzyme system. However, the method using the differential equations is not an efficient way to obtain the quasi-steady-state rate equation for a complicated multi-enzyme system. Another method is the King-Altman algorithm (see [1]).

One common feature of the two simulated enzyme systems in Chapters 4 and 5 is that the evolving states of a partially observable system are reconstructed using the EKF algorithm. Estimation of the states of the enzyme system without and with proper model equations is studied in Chapters 4 and 5, respectively. In both cases, different amounts of noise were added to the measurements.

The accuracy of the estimation using the EKF algorithm decreases as the noise in the measurements increases. There is a lower bound on the error of the estimated states that cannot be further reduced by changing the parameters of the EKF, e.g., the model error covariance matrix, the measurement error covariance matrix, and the initial value error covariance matrix. So far there is no systematic method to determine the values of these covariance matrices. In most of the literature, the values of these covariance matrices are determined by trial and error based on the experience of the user. However, there is a rule to help us decide the ranges of these covariance matrices. The square roots of the diagonal elements of the model error covariance matrix should be large enough to cover the error between the exact model equations and the system model equations.
Also, the square roots of the diagonal elements of the initial state error covariance matrix should be large enough to cover the difference between the exact initial values of the state and the initial values of the state specified in the EKF or the KF.

The largest error in the estimated state of the simulated enzyme deactivation system in Chapter 4 is about 6% (cf. Table 1). In the off-line estimation, this error can be reduced to 0.9% by iteration on the dynamical model equations (cf. Chapter 4). To drop this error from 6% to 0.9%, only one iteration is needed. However, it is difficult to generalize this method to on-line estimation. To reduce the error in an on-line estimation, we need to employ more sophisticated techniques such as the adaptive estimator developed by Jazwinski in 1974 [20].

In the simulated enzyme system in Chapter 5, we found that the statistics obtained from the EKF are inconsistent with the actual statistics obtained when the residuals of the states are plotted. This phenomenon is known as "filter divergence". The explanation most often offered for this phenomenon is that the estimated error covariance matrix becomes unrealistically small, so that too much emphasis is put on the value of the current estimated state and subsequent measurements are effectively ignored. Fortunately, this divergence in our simulated system can be corrected by increasing the values of the entries of the model covariance matrix. However, in general, it is not true that the divergence can always be removed by increasing the intensity of the process noise assumed in the filtering model. Indeed, a more fundamental illustration of the divergence problem is given by Fitzgerald [22]. He suggests that there are two types of divergence, true divergence and apparent divergence.
True divergence occurs when the mean-square error can actually be shown to approach infinity with increasing time. Apparent divergence occurs when a steady state is reached but the associated errors are too large for the estimated state to be useful. The divergence in our problem is apparent divergence. Indeed, apparent divergence can be removed by more sophisticated algorithms, for example, by the limited memory optimal filter [23], which computes the best estimate based on a "moving window" over the most recent data.

Finally, an exact filter algorithm is used to compute the conditional probability density function of the substrate concentration in Chapter 5. The mean of this probability density function, computed by numerical integration, is compared to the values of the estimated concentration obtained from the EKF when there is error in the initial values of the concentrations of the substrates. We find that the exact filter has a faster rate of convergence than the EKF. However, the computational time taken by the exact filter is much longer. Thus, it is not feasible to apply the exact filter algorithms in on-line estimation.

6.2. Future Research

Central to the estimation of the state variables of a dynamical system is the determination of the probability density function of the state variables conditioned on the set of measurements. If the dynamical system is linear, state variables that are gaussian initially will evolve to a set of gaussian random variables. To determine the probability density function of a gaussian random vector, it is sufficient to determine its mean and covariance matrix. The Kalman filter gives an exact solution for the linear system. As shown in the derivation of the EKF algorithms in Chapter 3, the dynamical equations are linearized at the estimated state variables. This will introduce a significant error when the intervals between measurements are large.
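The effect of the interval length can be seen in a small numerical experiment (illustrative only, not from the thesis): propagating the model equation dx/dt = −x², whose analytic solution is known, over one long step versus many fine substeps.

```python
# Illustrative only: a single coarse Euler step over a long measurement
# interval (roughly what a one-step linearized propagation uses) versus a
# finely resolved integration of dx/dt = -x**2 over the same interval.
def euler(x, dt, n):
    """Propagate dx/dt = -x**2 with n Euler steps of size dt/n."""
    h = dt / n
    for _ in range(n):
        x = x - h * x * x
    return x

x0, T = 1.0, 1.0
coarse = euler(x0, T, 1)      # one step over the whole interval
fine = euler(x0, T, 10000)    # finely resolved reference
exact = x0 / (1.0 + x0 * T)   # analytic solution of dx/dt = -x**2

print(coarse, fine, exact)
```

The one-step propagation lands far from the analytic value (0.0 versus 0.5), while the finely resolved integration is close to it, which is why short inter-measurement intervals matter.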
In the simulated enzyme systems in Chapters 4 and 5, the intervals between the measurements are chosen to be sufficiently small to minimize this error. This requirement can be realized for measurements taken by a radar, but may not be realized for some enzyme systems.

Although the probability density function conditioned on the set of measurements evolving in time can be described by a system of differential or difference equations, an exact solution is difficult to obtain. Moreover, the state variables in a nonlinear estimation problem are non-gaussian. The non-gaussian conditional probability density function cannot be characterized by a finite set of parameters, e.g., mean and moments. Thus, the unknowns of the exact nonlinear filter are infinite in number (in contrast, the unknowns in the linear Kalman filter are just the means and covariances). Therefore, it is natural to investigate approximation schemes for the conditional probability density function. Motivated by Fourier series, an approximation scheme based on expanding the conditional probability density function in an orthogonal series (e.g., the Edgeworth series [24]) has been proposed [25]. However, this approach has a distinct disadvantage in that, when truncated, the resulting series is not a valid density function. Therefore, it is necessary to keep a large number of terms in the series to minimize the error. This makes the scheme computationally unattractive. However, there are other expansion schemes that overcome this difficulty. One of them is the gaussian sum approximation scheme proposed by Sorenson and Alspach [15], [16]. Roughly speaking, this approximation scheme expands the conditional probability density function in a sequence of weighted gaussian distributions of different means and covariances. The resulting filter equations are a set of extended Kalman filters, with each filter tracking the evolving mean and covariance matrix of its assigned gaussian density.
When the measurement noise is low, the gaussian sum filter can be very nearly optimal. When the measurement noise is high, we have to frequently reinitialize the covariance matrices of the individual EKFs with smaller covariances [13].

Apart from the probabilistic approach discussed in this thesis, there is another technique, using neural networks, to estimate and predict the future state of a dynamical system. In the probabilistic approach, a dynamical model for the system process is essential. On the other hand, in certain circumstances even a first-order approximation to the exact equations of a process is not known, e.g., the value of a stock in the stock market. In this situation neural networks are an excellent tool to learn relationships without requiring knowledge of the dynamical equations and then to predict future states.

The theory of neural networks was developed back in the 1950s and has become one of the hottest topics in artificial intelligence over the past ten years. A network operates in analogy to a human brain. The storage and processing of information are facilitated by modifying the synaptic strengths of the junctions between interacting neurons. Neural network technology has the greatest potential in areas such as speech and image processing, but applications to areas such as process control, dynamic modeling, and robotics are currently being developed. Indeed, Lapedes and Farber [26] studied the ability of layered feed-forward networks to forecast time series, which is an important problem in economics, meteorology, and many other areas. The advantage of neural network technology is that no information about the dynamical equations is required. All that is required is to design the architecture of the neural network and to feed the neural network model the necessary information to let it learn by representative examples.
In off-line estimation of a dynamical system that is partially observable and has no proper dynamical equations (e.g., the simulated enzyme deactivation system in Chapter 4), we can use a neural network to process the estimated state obtained using the EKF, and then develop the dynamical equations for the system. However, more research still has to be done on developing a neural-network-based on-line estimation scheme analogous to the EKF.

Appendix 1

Matching Slope Technique for a Singularly Perturbed First-Order System

In this appendix, we prove that the leading-order term in the rate of change of the substrate concentration for the single enzyme reaction described in Chapter 2 is always negative. We determine the initial values for the outer equations of a singularly perturbed first-order system by matching the slopes of the inner and outer solutions when the inner solutions are expressed in terms of the outer variable, which in turn is set equal to zero. The matching method will be applied to find the initial condition for the outer equation of the singularly perturbed first-order dimensionless enzyme system derived in the Heineken et al. 1967 paper [2]. The advantage of this method is that the explicit solutions of the outer equations are not required. This method is especially convenient when the outer solution is an implicit function of the outer independent variable, where it is difficult to apply the method of matched asymptotic expansions.

Inner and Outer Equations

In the following we summarize the inner and outer equations in the Heineken et al.
paper.

The inner equations are given by

dy/dσ = −ε [y − (y + μ − λ) z],   (A.1.1a)
dz/dσ = y − (y + μ) z.   (A.1.1b)

The perturbation expansions are assumed to take the form

y(σ) = y_0(σ) + ε y_1(σ) + ε² y_2(σ) + ···,   (A.1.2a)
z(σ) = z_0(σ) + ε z_1(σ) + ε² z_2(σ) + ···.   (A.1.2b)

The corresponding initial conditions are given by

y_0(0) = 1, z_0(0) = 0,
y_n(0) = 0, z_n(0) = 0, for n = 1, 2, 3, ….

The leading-order equations are obtained by substituting (A.1.2) into (A.1.1) and setting ε = 0; thus

dy_0/dσ = 0,   (A.1.4a)
dz_0/dσ = y_0 − (y_0 + μ) z_0.   (A.1.4b)

The equations for the first-order and second-order terms are given by

dy_1/dσ = −y_0 + (μ − λ) z_0 + y_0 z_0,   (A.1.5a)
dz_1/dσ = y_1 − (y_0 + μ) z_1 − y_1 z_0,   (A.1.5b)

dy_2/dσ = −y_1 + (μ − λ) z_1 + y_1 z_0 + y_0 z_1,   (A.1.6a)
dz_2/dσ = y_2 − y_2 z_0 − y_1 z_1 − y_0 z_2 − μ z_2.   (A.1.6b)

Outer equations

For the outer region, we rescale time by τ = εσ and let ȳ(τ) = y(σ) and z̄(τ) = z(σ); thus

dȳ/dτ = −ȳ + (ȳ + μ − λ) z̄,   (A.1.7a)
ε dz̄/dτ = ȳ − (ȳ + μ) z̄.   (A.1.7b)

The perturbation expansion for the outer solution is given by

ȳ(τ) = ȳ_0(τ) + ε ȳ_1(τ) + ε² ȳ_2(τ) + ···,   (A.1.8a)
z̄(τ) = z̄_0(τ) + ε z̄_1(τ) + ε² z̄_2(τ) + ···.   (A.1.8b)

The equations for the leading-order and first-order terms in the solution are given by

dȳ_0/dτ = −ȳ_0 + (ȳ_0 + μ − λ) z̄_0,   (A.1.9a)
0 = ȳ_0 − (ȳ_0 + μ) z̄_0,   (A.1.9b)

dȳ_1/dτ = −ȳ_1 + (ȳ_0 + μ − λ) z̄_1 + ȳ_1 z̄_0,   (A.1.10a)
dz̄_0/dτ = ȳ_1 − ȳ_1 z̄_0 − ȳ_0 z̄_1 − μ z̄_1.   (A.1.10b)

Computation of the Initial Conditions

The leading-order and first-order inner solutions obtained by Heineken et al. [2] are

y_0(σ) = 1,   (A.1.11a)
z_0(σ) = [1 − exp(−(1+μ)σ)]/(1+μ),   (A.1.11b)

y_1(σ) = −λσ/(1+μ) − [(1+μ−λ)/(1+μ)²] [1 − exp(−(1+μ)σ)],   (A.1.12a)

z_1(σ) = −λμσ/(1+μ)³ − μ(1+μ−2λ)/(1+μ)⁴ − [(1+μ−λ)/(1+μ)⁴] exp(−2(1+μ)σ)
  + exp(−(1+μ)σ) { −λσ²/[2(1+μ)²] + (1+μ−λ)(μ−1)σ/(1+μ)³ + (1+2μ−λ−2λμ+μ²)/(1+μ)⁴ }.   (A.1.12b)

Therefore, the inner expansion written in terms of the outer independent variable τ is

y(τ/ε) = 1 − λτ/(1+μ) − ε (1+μ−λ)/(1+μ)² + o(ε).   (A.1.13)

From (A.1.9), we get

dȳ_0/dτ = −λ ȳ_0/(ȳ_0 + μ),   (A.1.14)

so that as τ→0,

dȳ_0/dτ (0) = −λ ȳ_0(0)/(ȳ_0(0) + μ).

This relationship between the slope and the value of the function at τ = 0 is valid since this corresponds to an interior point in the domain.
We require the above slope to be equal to the leading-order slope of the inner solution expressed in terms of the outer variable τ, as τ→0:

−λ ȳ_0(0)/(ȳ_0(0) + μ) = −λ/(1+μ).   (A.1.16)

The unique solution is given by ȳ_0(0) = 1. From (A.1.9b) the initial value of z̄_0 is

z̄_0(0) = 1/(1+μ).   (A.1.17)

Therefore, the initial condition for the outer leading-order expansion is ȳ_0(0) = 1. For this initial condition, we see that dȳ_0/dτ < 0. Combining this result with the inner solution, we conclude that the leading-order expansion for the rate of change of the substrate of the single enzyme reaction is negative. Furthermore, if the perturbation series expansion is uniformly convergent, then the rate of change of the substrate is always negative.

Next, we calculate the initial conditions for the first-order outer solutions. If we differentiate (A.1.9b) with respect to τ, we get

0 = ȳ_0′(τ) − ȳ_0′(τ) z̄_0(τ) − ȳ_0(τ) z̄_0′(τ) − μ z̄_0′(τ).   (A.1.18)

Evaluating (A.1.18) at τ = 0 and substituting ȳ_0′(0), ȳ_0(0), z̄_0(0) into it, we get

z̄_0′(0) = −λμ/(1+μ)³.   (A.1.19)

Evaluating (A.1.10b) at τ = 0 and substituting z̄_0′(0) into it, we get

−λμ/(1+μ)³ = μ ȳ_1(0)/(1+μ) − (1+μ) z̄_1(0).   (A.1.20)

Evaluating (A.1.10a) at τ = 0, we get

ȳ_1′(0) = −ȳ_1(0) + (1+μ−λ) z̄_1(0) + ȳ_1(0)/(1+μ) = −μ ȳ_1(0)/(1+μ) + (1+μ−λ) z̄_1(0).   (A.1.21)

If ȳ_1′(0) is known, then (A.1.20) and (A.1.21) form a set of simultaneous algebraic equations for the unknowns ȳ_1(0) and z̄_1(0).

But ȳ_1′(0) can be calculated from the inner solution using the matching slope technique. Since this slope belongs to a first-order term, we need to determine the first-order inner expansion.
When the first-order perturbation expansion of the inner solution is expressed in terms of the outer independent variable τ, there is a contribution to it from the second-order solution y_2(σ). An analytic solution for y_2(σ) can be obtained from

y_2(σ) = −∫₀^σ y_1(t) dt + (μ−λ) ∫₀^σ z_1(t) dt + ∫₀^σ y_1(t) z_0(t) dt + ∫₀^σ y_0(t) z_1(t) dt.   (A.1.22)

The complete solution is tedious; however, we just need the first-order contribution when ε² y_2(σ) is written in terms of τ, namely

ε² y_2(σ) = ε⁰ term + ε [2λμ(1+μ−λ)/(1+μ)⁴] τ + O(ε²).

The first-order expansion of y(τ/ε) is

y(τ/ε) = y_0(τ/ε) + ε y_1(τ/ε) + ε² y_2(τ/ε) + ···
  = ε⁰ term + ε [ −(1+μ−λ)/(1+μ)² + 2λμ(1+μ−λ)τ/(1+μ)⁴ + o(τ) ] + O(ε²).   (A.1.23)

Hence, the first-order slope is the derivative of the coefficient of ε with respect to τ, i.e., 2λμ(1+μ−λ)/(1+μ)⁴. The matching condition implies

ȳ_1′(0) = 2λμ(1+μ−λ)/(1+μ)⁴.   (A.1.24)

Substituting (A.1.24) into (A.1.21), and solving (A.1.20) and (A.1.21) simultaneously, we obtain

ȳ_1(0) = −(1+μ−λ)/(1+μ)²,  z̄_1(0) = μ(−1−μ+2λ)/(1+μ)⁴.

The ȳ_1(0) condition we obtain here using the matching slope technique agrees with the one obtained by Heineken et al. in 1967 [2].

Appendix 2

General Theory of Propagating Forward and Backward Equations

Consider the following equation

dx/dt = f(x).   (A.2.1)

If we apply the mean-value theorem to x(t+h), we get

x(t+h) = x(t) + h ẋ((1−p)t + p(t+h))   (A.2.2)

for some p ∈ [0, 1]. In order to apply (A.2.2) to (A.2.1), we prove a lemma.

Lemma 1

For a monotone increasing or decreasing function x(t) which can be expanded in a power series,

x(t + p h) = (1−p) x(t) + p x(t+h) + O(h²).   (A.2.3)

Proof

Since x(t) can be expanded in a power series, it suffices to prove the lemma for x(t) = tⁿ, namely

(t + p h)ⁿ = (1−p) tⁿ + p (t+h)ⁿ + O(h²),   (A.2.4)

where p is independent of n. This can be achieved by expanding both sides of (A.2.4) using the binomial theorem and equating first-order terms in h.
(QED)

Using (A.2.1) in (A.2.2) gives

x(t+h) = x(t) + h f(x((1−p)t + p(t+h))) = x(t) + h f(x(t + p h)).   (A.2.5)

Applying Lemma 1 to (A.2.5), this simplifies to

x(t+h) = x(t) + h f((1−p) x(t) + p x(t+h)).   (A.2.6)

If we rename x(t+h) as x_{k+1}, x(t) as x_k, and h as Δt, then (A.2.6) can be rewritten as a difference equation,

x_{k+1} = x_k + h f((1−p) x_k + p x_{k+1}).   (A.2.7)

Now we consider two cases.

I. p = 0. The difference equation is

x_{k+1} = x_k + h f(x_k),   (A.2.8)

which we call the forward propagating equation.

II. p = 1. The difference equation is

x_{k+1} = x_k + h f(x_{k+1}), or
x_k = x_{k+1} − h f(x_{k+1}),   (A.2.9)

which we call the backward propagating equation.

Next we examine whether (A.2.9) is an inverse of (A.2.8). To do this, substitute (A.2.8) into (A.2.9) and get

0 = Δt f(x_k) − Δt f(x_k + Δt f(x_k)).   (A.2.10)

If a Taylor series expansion is applied to (A.2.10), the right-hand side of (A.2.10) becomes O(Δt²). Thus, (A.2.9) is an inverse of (A.2.8) with an error of O(Δt²).

We illustrate this with the example f(x) = −x.

Example

When f(x) = −x, the forward propagating equation becomes

x_{k+1} = x_k − Δt x_k,   (A.2.11)

and the backward propagating equation becomes

x_k = x_{k+1} + Δt x_{k+1}.   (A.2.12)

Solve the forward equation numerically for Δt = 0.01, x = 1 at t = 0, and then solve the backward equation numerically for Δt = 0.01 with the initial condition obtained from the solution of the forward propagating equation. The result is plotted in Figure A.1.

Fig. A1 Numerical solutions of the forward and backward equations using (A.2.11) and (A.2.12).

We can see a deviation at the beginning of the graph which represents an error of O(0.01²).

Since, in our example, f(x) is a linear function, the forward propagation can be inverted exactly to obtain the backward propagating equation

x_k = x_{k+1}/(1 − Δt).   (A.2.13)

Similarly, if we solve (A.2.11) and (A.2.13) numerically with the same initial condition, Figure A.2 is obtained.

Fig.
A2 Numerical solutions of the forward and backward equations using (A.2.11) and (A.2.13).

This time the deviation at the beginning of the graph disappears.

Bibliography

[1] A. Fersht, "Enzyme Structure and Mechanism", W.H. Freeman and Company, New York, 1985.
[2] F. Heineken, H. Tsuchiya, and R. Aris, "On the mathematical status of the pseudo-steady state hypothesis of biochemical kinetics", Math. Biosci. 1, 95-113, 1967.
[3] L.A. Segel, "On the validity of the steady state assumption of enzyme kinetics", Bulletin of Mathematical Biology 50, 579-593, 1988.
[4] L. Edelstein-Keshet, "Mathematical Models in Biology", Random House, New York, 1987.
[5] A.H. Nayfeh, "Introduction to Perturbation Techniques", Wiley, New York, 1981.
[6] N. Wiener, "Extrapolation, Interpolation and Smoothing of Stationary Time Series", The M.I.T. Press, Cambridge, Mass., 1949.
[7] A.N. Kolmogorov, "Interpolation and Extrapolation", Bull. de l'académie des sciences de U.S.S.R., Ser. Math. 5, 3-14, 1941.
[8] R.E. Kalman, "A new approach to linear filtering and prediction problems", J. Basic Eng., Trans. ASME, Series D 82, 35-45, 1960.
[9] T.P. McGarty, "Stochastic Systems and State Estimation", J. Wiley & Sons, New York, 1974.
[10] A. Papoulis, "Probability, Random Variables, and Stochastic Processes", Third Edition, McGraw-Hill, New York, 1991.
[11] G.R. Grimmett and D.R. Stirzaker, "Probability and Random Processes", Oxford University Press, Oxford, England, 1990.
[12] A.H. Jazwinski, "Stochastic Processes and Filtering Theory", Academic Press, San Diego, 1970.
[13] B. Anderson and J. Moore, "Optimal Filtering", Prentice-Hall, Englewood Cliffs, NJ, 1979.
[14] A. Bensoussan, "Stochastic Control of Partially Observable Systems", Cambridge University Press, Cambridge, 1992.
[15] H.W. Sorenson and D.L. Alspach, "Recursive bayesian estimation using gaussian sums", Automatica 7, 465-479, 1971.
[16] D.L. Alspach and H.W. Sorenson, "Nonlinear bayesian estimation using gaussian sum approximations", IEEE Trans.
Automatic Control, AC-17, 439-448, 1972.
[17] G. Caminal, J. Lafuente, J. López-Santín, M. Poch, and C. Solà, "Application of extended Kalman filter to identification of enzymatic deactivation", Biotechnol. Bioeng. 29, 366-369, 1987.
[18] G. Caminal, J. López-Santín, and C. Solà, "Kinetic modeling of the enzymatic hydrolysis of pretreated cellulose", Biotechnol. Bioeng. 27, 1282-1290, 1985.
[19] G. Greco, G. Marrucci, N. Grizzuti, and L. Gianfreda, Ann. NY Acad. Sci. 434, 7, 1984.
[20] A. Jazwinski, "Brief paper", Automatica 10, 203-207, 1974.
[21] F.H. Schlee, C.J. Standish, and N.F. Toda, "Divergence in the Kalman filter", AIAA J. 5, 1114-1120, 1967.
[22] R.J. Fitzgerald, "Divergence of the Kalman filter", IEEE Trans. Automatic Control, AC-16, 736-747, 1971.
[23] A. Jazwinski, "Limited memory optimal filtering", IEEE Trans. Automatic Control 13, 588, 1968.
[24] H. Cramér, "Mathematical Methods of Statistics", Princeton University Press, Princeton, New Jersey, 1961.
[25] H.W. Sorenson and A.R. Stubberud, "Non-linear filtering by approximation of the a posteriori density", Intern. J. Control 8, 33-51, 1968.
[26] A.S. Lapedes and R.M. Farber, "Non-linear signal processing using neural networks: prediction and system modelling", Los Alamos preprint LA-UR-87-2662, 1987.

