Chapter 3. System Identification

[Figure 3.21: Parameter identification using RLS]
[Figure 3.22: Parameter locations]
[Figure 3.23: GA, pole-zero identification]
[Figure 3.25: GA, parameter locations calculated from the pole-zero identification]
[Figure 3.27: GA, parameter locations]

steps (0.93, 0.04) and a1, a2, b1 and b2 are then identified.² Figure 3.21 shows the result using the input shown in Figure 3.20. It can be seen that the estimates have rather large variance, especially b1 and b2 (the long-dashed and dotted lines respectively). Their values after 200 samples are 1.3957 and 0.6088 respectively, but 10 samples earlier they were 0.4214 and 1.2141, so it is difficult to say what values they are converging to. The estimates for a1 and a2 also have some variance, but a much smaller one, and they converge to biased estimates, the final values being -1.1543 and 0.4974 respectively. Figure 3.22 plots a2 as a function of a1 and b2 as a function of b1, with the time axis running out of the page and the triangle for stable estimates plotted every 50 samples. To compare these results with the GA, the GA is run identifying the same parameters as those identified by the RLS. That means the gain b0 and the delay d are assumed to be known, so there are only four parameters to be identified. Using the same parameter settings as in Table 3.4 gives a total string length of 28 bits, so the population size has been set to 50. The GA is run twice, first identifying the poles and zeros and secondly identifying the parameters. The result of the pole-zero identification is shown in Figure 3.23; it is then converted into the parameters (Figure 3.24) and a 3-D figure is plotted (Figure 3.25).
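The RLS estimation described above can be sketched as follows. This is a minimal illustration, not the thesis code: the second-order ARX structure and the true values a1 = -1.5, a2 = 0.7, b1 = 0.5, b2 = 0.0 come from the footnote, while b0 = 1, d = 1, the square-wave-like input and the noise-free simulation are illustrative assumptions.

```python
import numpy as np

# True parameters (footnote values); b0 = 1 and d = 1 are assumed here
a1, a2, b1, b2, b0, d = -1.5, 0.7, 0.5, 0.0, 1.0, 1

N = 200
u = np.sign(np.sin(0.1 * np.arange(N)))  # step-like (square-wave) input
y = np.zeros(N)
for k in range(3, N):
    # Second-order ARX model: y(k) + a1 y(k-1) + a2 y(k-2)
    #                         = b0 u(k-d) + b1 u(k-d-1) + b2 u(k-d-2)
    y[k] = (-a1 * y[k-1] - a2 * y[k-2]
            + b0 * u[k-d] + b1 * u[k-d-1] + b2 * u[k-d-2])

# Standard RLS (forgetting factor 1) identifying a1, a2, b1, b2 only,
# with the known b0 u(k-d) term subtracted from the output
theta = np.zeros(4)
P = 1e4 * np.eye(4)
for k in range(3, N):
    phi = np.array([-y[k-1], -y[k-2], u[k-d-1], u[k-d-2]])
    e = y[k] - b0 * u[k-d] - phi @ theta
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * e
    P = P - np.outer(K, phi @ P)

print(theta)  # approaches [-1.5, 0.7, 0.5, 0.0] in this noise-free sketch
```

In the noise-free case the estimates converge to the true values; the large variance and bias reported in the text arise once measurement noise enters, since plain least squares is biased for noisy ARX data.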
The parameter estimation is shown in Figure 3.26 and the corresponding 3-D figure in Figure 3.27. It can be seen that in both cases the poles have almost zero bias; they are limited only by the resolution of the search space. They converge in about 50 generations for the pole-zero identification but in about 100 generations for the parameter identification, i.e. about twice as fast for the pole-zero identification as for the parameters. The zeros converge slowly in both cases, but the final estimates are close to the true values (0.5 and 0.0). If the GA is then compared to the RLS, it can be seen that the RLS needs more than 50 samples for the poles to converge while the zeros do not converge, whereas the GA needs between 50 and 100 generations for the poles to converge, which with 3 trials per sample means between 17 and 33 samples, with the zeros slowly converging. So in terms of the number of samples the GA converges faster. But, as mentioned earlier, the fitness function for the GA cannot be calculated recursively, so the algorithm has to calculate the outputs over the whole window, and calculating each output involves (n_a + n_b + 1) multiplications and (n_a + n_b) additions. The difference in the bias of the estimates is mostly caused by the different objective (cost) functions: the RLS uses a simple least-squares criterion whereas the GA uses an IV-like objective function.

²True values from Equation 3.18 are -1.5, 0.7, 0.5 and 0.0 respectively.

3.4 Continuous time identification

Consider an n-th order system with the differential operator s = d/dt and unknown coefficients a_i and b_i:

y(t) =
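Returning to the GA/RLS comparison above, the non-recursive window evaluation can be sketched as follows. This is an illustrative least-squares window cost (the text notes the GA actually uses an IV-like objective); the model orders, b0 = 1, d = 1 and the test data are all assumptions for the sketch.

```python
import numpy as np

def window_fitness(theta, u, y, na=2, nb=2, d=1):
    # theta = [a1, ..., a_na, b1, ..., b_nb]; b0 = 1 and d assumed known.
    a, b = theta[:na], theta[na:]
    err = 0.0
    for k in range(max(na, d + nb), len(y)):
        # Each predicted output costs (na + nb + 1) multiplications and
        # (na + nb) additions, and the whole window must be recomputed
        # for every candidate: there is no recursive update as in RLS.
        yhat = (-sum(a[i] * y[k - 1 - i] for i in range(na))
                + u[k - d]
                + sum(b[j] * u[k - d - 1 - j] for j in range(nb)))
        err += (y[k] - yhat) ** 2
    return err

# Illustrative noise-free data from the second-order model of the footnote
# (a1 = -1.5, a2 = 0.7, b1 = 0.5, b2 = 0.0, with b0 = 1 and d = 1 assumed)
N = 200
u = np.sign(np.sin(0.1 * np.arange(N)))
y = np.zeros(N)
for k in range(3, N):
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + u[k-1] + 0.5 * u[k-2]

print(window_fitness(np.array([-1.5, 0.7, 0.5, 0.0]), u, y))  # 0 at the true parameters
```

A GA would call such a window cost once per candidate string in every generation, which is why the per-sample arithmetic count matters in the comparison, even though the GA needs fewer samples overall.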