Cylindrical RF Tomography

by Kim Lam
B.A.Sc., University of British Columbia, 2001

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Applied Science in The Faculty of Graduate Studies, Department of Electrical and Computer Engineering

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
November 2003
© Kim Lam, 2003

Library Authorization

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Abstract

Tomography allows the examination of an object's interior without having to destroy the object. There have been many forms of tomography, including MRIs and CT scans. All forms of tomography infer the interior of an object from a set of measurements. An algorithm to infer the features of infinite dielectric cylinders using electromagnetic waves is developed in this thesis. The intended application for this algorithm is the imaging of lumber. The algorithm recovers the dielectric permittivity distribution from an infinite cylinder. It uses Richmond's Method to model the physical behavior of the infinite cylinder. The algorithm uses an iterative non-linear inversion scheme to recover the dielectric permittivity distribution.
The non-linear inversion scheme uses a regularization term that minimizes the structure of the permittivity distribution, subject to a constraint involving the measured scattered field from the cylinder. It was found that by decreasing the weighting of the regularization, and eventually turning the regularization off, the exact permittivity distribution can be recovered for a noiseless and over-determined system. In the presence of noise, an approximate permittivity distribution can be recovered; however, regularization cannot be turned off. Similarly, for an under-determined system, an approximate permittivity distribution can be found, but regularization cannot be turned off.

Table of contents

Abstract
Table of contents
List of figures
List of tables
Chapter 1: Introduction
  1.0 Introduction
  1.1 Thesis overview
  1.2 Novel content
Chapter 2: Examination of a 2-D homogeneous cylinder
  2.0 Introduction
  2.1 2-D scattering solution
  2.2 How to compare observations of different permittivity values
  2.3 Numerical results
  2.4 Conclusion
Chapter 3: Mathematical background for the tomography algorithm
  3.0 Introduction
  3.1 Richmond's Method
  3.2 The development of the integral equation for Richmond's Method
  3.3 The scattered field
  3.4 Observations for inversion
  3.5 Gridding for the Jacobian calculation
  3.6 Derivation of the Jacobian for locations internal to the dielectric body
  3.7 Derivation of the Jacobian matrix for the scattered field
  3.8 The Jacobian for multiple frequencies
  3.9 The flatness term
  3.10 The system of linear equations
  3.11 Conclusion
Chapter 4: Implementation of the tomography algorithm
  4.0 Introduction
  4.1 Tessellating the cylinder
  4.2 Calculating the flatness matrix
  4.3 Calculating the internal field using Richmond's method
  4.4 Generation of the observation matrix
  4.5 The Jacobian calculation
  4.6 Inversion function with a single beta value
  4.7 Inversion function with multiple beta values
  4.8 Summary
Chapter 5: Numerical experiments using the tomography algorithm
  5.0 Introduction
  5.1 Setup for the experimental procedure
  5.2 Test cases 1, 2 and 3
  5.3 Norms for assessing the quality of the solution
  5.4 Experimental set 1
  5.5 Experimental set 2
  5.6 Experimental set 3
  5.7 Experimental set 4
  5.8 Experimental set 5
  5.9 Conclusion
Chapter 6: Summary and future work
  6.0 Summary
  6.2 Future work
Bibliography

List of figures

Figure 1.1 Abstraction of the scattering from a dielectric body
Figure 1.2 Abstraction of forward modeling process
Figure 1.3 Schematic for linear inversion
Figure 1.4 Schematic for non-linear inversion
Figure 2.1 Geometry of the scattering problem
Figure 2.2 Definition of electromagnetic fields inside and outside of the cylinder
Figure 2.3 Geometric configuration of observation points
Figure 2.4 Variation surface for permittivities from 1 to 10
Figure 2.5 Variation slices for a homogenous cylinder
Figure 2.6 Zeros of a_n(ε1) - a_n(ε2) for ε1 equals 5
Figure 2.7 Multifrequency variation slices for a homogenous cylinder
Figure 2.8 Multifrequency variation surface for permittivities from 1 to 10
Figure 3.1 Arrangement of the multi-frequency observation vector
Figure 3.2 Illustration of the mapping concept
Figure 3.3 Structure of the multi-frequency Jacobian
Figure 4.1 Flow chart for the complete inversion system
Figure 4.2 Illustrative diagram of the discretization concept
Figure 4.3 Flow chart for the BasicInversion function
Figure 5.1 Inversion cells for experimental sets 1, 2 and 3
Figure 5.2 Modeling cells for experimental sets 1, 2 and 3
Figure 5.3 Superposition of modeling regions and inversion regions
Figure 5.4 Test case 1
Figure 5.5 Test case 2
Figure 5.6 Test case 3
Figure 5.7 Variation plot for experimental set 1
Figure 5.8 Modeling error plot for experimental set 1
Figure 5.9 Recovered permittivity distribution for experimental set 1, test case 2 and β = 0
Figure 5.10 Recovered permittivity distribution for experimental set 1, test case 2 and β = 0.001
Figure 5.11 Variation as a function of iteration for experimental set 1, test case 2, β = 0 and β = 0.001
Figure 5.12 Variation plot for experimental set 2, method 1
Figure 5.13 Variation plot for experimental set 2, method 2
Figure 5.14 Modeling error plot for experimental set 2, method 1
Figure 5.15 Modeling error plot for experimental set 2, method 2
Figure 5.16 Image of permittivity distribution, experimental set 2, test case 3, method 1, and iteration 7
Figure 5.17 Image of permittivity distribution, experimental set 2, test case 3, method 2, and iteration 21
Figure 5.18 Variation plot for experimental set 2 (20% noise), test case 3
Figure 5.19 Modeling error plot for experimental set 2 (20% noise), test case 3
Figure 5.20 Permittivity distribution for experimental set 3, test case 3, and iteration 30
Figure 5.21 Permittivity distribution for experimental set 3, test case 3, and iteration 31
Figure 5.22 Permittivity distribution for experimental set 4
Figure 5.23 Variation plot for experimental set 4, experiment 1
Figure 5.24 Modeling error plot for experimental set 4, experiment 1
Figure 5.25 Permittivity distribution for experimental set 5
Figure 5.26 Variation plot for experimental set 5, experiment 1
Figure 5.27 Modeling error plot for experimental set 5, experiment 1
Figure 5.28 Permittivity distribution for experimental set 5, experiment 1, iteration 41

List of tables

Table 2.1 Local minimum values at 4 discrete points
Table 5.1 Summary of experiments for chapter 5
Table 5.2 Parameters for experimental sets 1, 2 and 3
Table 5.3 Parameters for experimental set 4

Chapter 1: Introduction

1.0 Introduction

There has been much interest in microwave imaging (MI), otherwise known as microwave tomography.
The Greek root 'tomo' means to slice or section, and tomography is a general term for obtaining an image of a slice of a target. Tomography techniques such as CT scans and MRIs are routinely used in medicine [Hendee and Wells, 1997]. MI is based on the measurement of a scattered field from an object illuminated by electromagnetic (EM) waves. An object is first illuminated with a microwave source. This results in secondary waves, known as the scattered field. MI is the recovery of the permittivity distribution of an object, based on measurements of the scattered field. To recover the distribution, an inversion algorithm is needed. The process of inversion recovers the parameters of an operator when the operator and the scattered field are known. Figure 1.1 shows a schematic of the physical system. Figure 1.2 illustrates the forward modeling problem. The differential operator simulates the physical system mathematically and can generate the scattered field when operating on the incident field. The first step in Figure 1.2 is to generate a mathematical description of the entire system; this description is then inverted to produce an operator that relates the incident field to the scattered field. The second step is to obtain the scattered field.

Figure 1.1: The diagram represents the abstraction of the physical system. The incident wave interacts with the physical system, denoted by the "Material Parameters" box, and the resulting electromagnetic fields are measured.

Figure 1.2: Schematic of the forward modeling process. A differential operator models the characteristics of the physical system. This operator is then inverted and applied to the incident wave to generate a scattered field.

Inversion can be separated into linear or non-linear inversion.
The linear inverse for MI is obtained by discretization of the integral operator to obtain a linear vector-matrix relationship between the data and the parameters. A generalized matrix inversion is then performed to obtain the parameters. The generalized matrix inversion minimizes an objective function, typically interpreted as a best linear fit that minimizes a squared error. The minimization occurs in a single step. In the case of a square matrix the solution is exact and the objective function is zero.

Figure 1.3: Schematic for the linear recovery of the material parameters. A linear inverse is first taken. This is applied to the measured scattered field to recover the permittivity values.

For non-linear inversion, a direct matrix mapping from data to the parameters cannot be obtained. For this type of inversion an objective function is also defined and is either minimized or maximized. This technique is iterative. The objective function approaches its minimum after each iteration, and the minimum is reached after several iterations. During each iteration, the parameters are estimated by examining some property of the objective function, either its value or its derivative. Based on the results of the objective function, the inversion algorithm determines the next set of estimated parameters. This set of parameters leads to a better solution. Figure 1.4 illustrates schematically the results of an iteration.

Figure 1.4: Schematic for non-linear inversion. The non-linear operator acts upon the data to give the parameters. The non-linear operator may be iterative in nature or may have multiple parameters as possible solutions.

MI can be applied to many problems. Much recent literature [Hagness et al., 1998][Semenov et al., 2002] points to medical imaging as a promising area of study.
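The single-step generalized inversion described above can be illustrated with a minimal sketch. The operator G below is a random stand-in for a real discretized integral operator, so only the least-squares mechanics are being demonstrated, not the physics:

```python
import numpy as np

# Hypothetical linear scattering model: data = G @ parameters, with G a
# stand-in for the discretized integral operator relating permittivity
# parameters to the measured scattered field.
rng = np.random.default_rng(0)
G = rng.standard_normal((12, 4))        # over-determined: 12 data, 4 parameters
true_params = np.array([1.0, 2.5, 0.5, 3.0])
data = G @ true_params                  # noiseless synthetic measurements

# Generalized matrix inversion: minimize ||G p - data||^2 in a single step.
recovered, residual, rank, _ = np.linalg.lstsq(G, data, rcond=None)
```

For noiseless data and a full-rank over-determined system, the fit is exact and the objective function reaches zero, mirroring the square-matrix case discussed above.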
To image biological objects, microwaves travel through material with high dielectric permittivity and high dielectric contrast. A particular biological application of importance is the imaging of trees. Regions of high dielectric permittivity indicate areas of rot, due to the accumulation of water. Therefore, the location of these regions can be found using microwave tomography, providing in-vivo and in-situ assessment of the wood quality of the target tree. The author of this thesis is unaware of any previous successful attempts to apply MI to trees.

When a microwave travels into an object, part of the wave is trapped inside the object, while part of the wave is scattered from the object. The trapped waves constantly reflect off the boundary of the object. If the object is inhomogeneous, the reflections within the object become more complicated. The trapped waves continuously leak out of the object. These leaked waves form part of the scattered field. The scattered field is, therefore, composed of the leaked and scattered waves.

Many techniques have been used to solve the inverse scattering problem, with varying degrees of success. The Born approximation is the simplest imaging method. The Born approximation approximates the internal field of the scatterer with that of the incident field. The total field, exterior to the scatterer, is then assumed to be a linear combination of the incident field and a scattered field. The inverse operator can be calculated using the assumed incident field, yielding a linear inversion algorithm which is solved by a matrix inverse, as described previously. The Born approximation fails when a moderate to high dielectric contrast exists [Slaney et al., 1984]. This failure occurs because the strong scattering behavior leads to an internal field that is quite different from the incident field, with the scatterer behaving like a leaky waveguide structure.
More recently, imaging using high-bandwidth chirp pulses, and confocal imaging, has been studied [Hagness et al., 1998]. The technique of confocal imaging is based on the idea that the microwave chirp behavior approximates that of rays. Simulations of confocal imaging indicate that structures as small as 2 mm [Hagness et al., 1998] can be detected inside breast tissue. The work on confocal imaging has largely been related to breast cancer detection.

One of the first researchers in MI [Joanchimowicz et al., 1991] applied non-linear inversion techniques to an EM inverse scattering problem used to detect bone and muscle tissue. Joanchimowicz et al. used a Newton-Kantorovich procedure to find the minimum of an objective function. Their process was iterative and used multiple illumination angles of the object.

Caorsi develops a number of inversion algorithms, and Caorsi et al. also build a physical system for imaging. He uses simulated annealing (SA) [Caorsi et al., 1991] for one of his inversion procedures. The SA method recovers an approximate permittivity distribution, while using multiple illumination angles. Caorsi also experiments with a pseudo-inverse technique as an inversion scheme [Caorsi et al., 2003]. The pseudo-inverse is the analog of an inverse for a non-square matrix [Golub and Van Loan, 1996][Wilkinson and Peter, 1970].

Chew and Lin [Chew and Lin, 1995] explore the use of multiple frequencies for dielectric imaging. Chew and Lin hypothesize that the Born approximation is good for low frequencies. Chew uses the Born approximation to find an approximate distribution at low frequencies and then moves to an optimization scheme at higher frequencies; the results of the low-frequency imaging technique serve as a starting point for the higher-frequency technique. Since only one frequency is used at a time, their technique is known as "frequency hopping".
For low frequencies, Chew and Lin use the distorted Born method [Wang and Chew, 1989], and for high frequencies the conjugate gradient method is used [Borup and Gandi, 1985]. The distorted Born method assumes that the internal field is dependent on the permittivity distribution, compared to the Born method where the internal field is assumed to be that of the incident field. In the Born method, the Green's function for a homogenous background is used, while the distorted Born method employs an approximate Green's function for an inhomogeneous background. Chew and Lin quantitatively recover the permittivity distribution when an object is submerged in water. Submerging the object in water reduces the contrast between the background and the object, leading to less reflection from the surface of the object and very limited trapped wave energy.

In 1998, Pastorino et al. [Pastorino et al., 1998] experiment with using genetic algorithms for solving the inverse scattering problem. They also present an overview of imaging technology for MI. A genetic algorithm is an optimization technique based on the concept of mutation and a fitness function. The fitness function is similar to the objective function previously described and is used to determine the quality of the solution. To obtain a better solution, many sets of random changes are introduced. The set of random changes with the highest fitness value is the solution selected as the starting point for the next iteration.

More recently, Semenov et al. [Semenov et al., 2002] build a 3-D tomography system based on earlier work on 2-D imaging [Souvorov et al., 1998]. It was noted that a 2-D slice method is inappropriate for modeling 3-D structures and an intrinsically 3-D algorithm must be used. Semenov et al. are capable of recovering an approximate distribution of a slice of a dog using their 2-D and 3-D algorithms.
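The mutate-and-select loop of a genetic-style search, as described above, can be sketched in a few lines. Everything here (the three-parameter model, the squared-misfit fitness, the mutation scale, the population size) is an illustrative assumption, not the algorithm of Pastorino et al.:

```python
import random

random.seed(3)

def fitness(candidate, target):
    """Higher is better: negative squared misfit to the measured data."""
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def mutate(candidate, scale=0.3):
    """Introduce one set of random changes to a candidate solution."""
    return [c + random.uniform(-scale, scale) for c in candidate]

target = [2.0, 5.0, 3.0]              # hypothetical true parameters
best = [1.0, 1.0, 1.0]                # starting guess

for _ in range(200):
    # Many sets of random changes; the fittest becomes the next starting point.
    population = [best] + [mutate(best) for _ in range(20)]
    best = max(population, key=lambda c: fitness(c, target))
```

Keeping the current best in the population guarantees the fitness never decreases between iterations, which is the same monotone-improvement property the text ascribes to the selection step.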
A modified Newton iterative scheme is used for their 2-D imaging method [Souvorov et al., 1998], and a gradient method is used for the 3-D imaging method [Semenov et al., 1999].

Lin et al. [Lin and Chui, 2001] produced a two-step algorithm, similar to that of Chew and Lin [Chew and Lin, 1995]. Lin's method differs from Chew's method because frequency hopping is not used. Lin separates the problem into a linear step and a non-linear step. The Lin algorithm uses the distorted Born approximation to recover an approximate distribution of the dielectric permittivity for the linear step. The approximate distribution is used as a starting point for the conjugate gradient algorithm to find a detailed solution. The conjugate gradient algorithm uses the full non-linear formulation of the EM problem to solve the inverse problem, resulting in a better solution. Their experimental setup requires that the body be placed in a matching fluid. Their solution allows the recovery of inhomogeneities in the body.

Caorsi [Caorsi et al., 2003] recently introduced a multi-scaling scheme that can be applied to any iterative inversion technique. The technique starts with a coarse grid; when an anomaly is detected in a square quadrant of the grid, that quadrant is discretized more finely. Success is achieved in recovering a number of permittivity distributions by coupling this scheme with the conjugate gradient method.

The work presented here is different from previous work in several regards. First, this work is inspired by work done in the field of geophysical inversion, specifically the work done by the University of British Columbia Geophysical Inversion Facility (UBC GIF). Second, this work is not focused on medical imaging; instead, imaging of trees is the intended application. The imaging of trees poses a difficult problem because of the existence of a high dielectric contrast boundary at the surface of the tree. This creates two problems.
The first problem is the large amount of reflection at the boundary of the tree. The second problem is the tendency for waves to be trapped inside the tree. The inclusion of the trapped waves in the inversion algorithm represents a significant departure from previous work [Joanchimowicz et al., 1990].

The UBC GIF is an academic research unit focusing on the development of geophysical forward modeling and inversion methodologies. Their work is based on physical and intuitive ideas for the use of inversion. The UBC GIF view the minimization problem as one of minimizing the structure of the solution using the data as constraints. This work applies the regularization term of "Minimizing Structure" [Farquharson and Oldenburg, 1998] introduced in geophysical optimization. Farquharson goes into extensive detail on L2 norms and non-L2 norms; this work is limited to L2 norms. Minimizing structure in the solution is an idea that has an associated physical interpretation: large structural variations, measured by the norm of the first-order derivatives, are penalized, resulting in more plausible models. For this work, an objective function is defined as a linear combination of the norm of the structure of the system and the norm of the difference between predicted data and measured data. This effectively incorporates the idea of minimizing structure, subject to data constraints, into the inversion algorithm. A linear approximation, based on the Taylor expansion of the objective function, will be generated and minimized. By repeating this process many times, an iterative process is created to find the minimum of the objective function. Previous regularization techniques, in fields outside geophysics, used purely mathematical regularization to eliminate the ill-posedness of the problem.
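The "minimizing structure" idea can be made concrete with a small sketch. Assuming a simple 1-D discretization (the matrices and models below are illustrative, not the thesis's 2-D flatness operator), a first-difference matrix F assigns zero penalty to a flat model and a large penalty to a model with sharp jumps:

```python
import numpy as np

def flatness_matrix(n):
    """First-order difference operator F for a 1-D model of n cells.
    ||F m||^2 sums the squared jumps between adjacent cells, so flat
    models score low and highly structured models score high."""
    F = np.zeros((n - 1, n))
    for i in range(n - 1):
        F[i, i] = -1.0
        F[i, i + 1] = 1.0
    return F

F = flatness_matrix(5)
flat = np.full(5, 4.0)                          # constant permittivity model
stepped = np.array([1.0, 1.0, 9.0, 9.0, 9.0])   # model with one sharp jump

flat_penalty = np.sum((F @ flat) ** 2)          # no structure at all
stepped_penalty = np.sum((F @ stepped) ** 2)    # one jump of 8, squared
```

Weighting this penalty against the data misfit is what steers the inversion toward the most plausible (least structured) model consistent with the measurements.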
Previous work in MI [Ciric and Qin, 1997][Tanaka et al., 2001] viewed the problem as one of minimizing the difference between the measured scattered field and a guessed scattered field. These techniques have little relationship to the actual physical system.

1.1 Thesis overview

Chapter 2 will present the solution to scattering from a homogenous circular cylinder illuminated by a plane wave. The solution is based on the Bessel mode expansion of the problem. An odd scattering behavior is presented. Chapter 2 provides a great deal of insight into the problem of scattering.

Chapter 3 presents all the mathematics necessary to perform the inversion. This includes Richmond's Method [Richmond, 1965] for forward modeling. Richmond's Method is used to obtain the scattered field. Next, the Jacobian calculation is derived from Richmond's Method. The concept of "Minimizing Structure" is formally introduced in matrix form as the flatness matrix. The objective function is introduced, which includes the flatness matrix and the scattered field. The Jacobian matrix is used to linearize the scattered field and, finally, a method to linearize the objective function is shown.

Chapter 4 demonstrates the implementation of the algorithm using MatLab. The mapping between Chapter 3 and the source code is presented.

Chapter 5 presents results using the algorithm. This chapter explores the behavior of the algorithm and presents some very promising results. First, the algorithm applies a constant scaling to the regularization. This demonstrates that regularization helps to find the correct solution. Next, the scaling of the regularization is decreased as the iterative process proceeds. By decreasing the scaling of the regularization, the system is allowed to assume more structure; this is analogous to slowly cooling the system in simulated annealing [Caorsi et al., 1991]. The most novel result is that the solution slowly emerges as the amount of regularization is decreased.
The algorithm recovers the permittivity distribution from noisy data and in an under-constrained system.

1.2 Novel content

This thesis demonstrates the successful development of a tomography algorithm using electromagnetic waves. The regularization technique is taken from geophysical inversion and has not previously been applied to the electromagnetic scattering problem. The regularization enforces a flat starting model, reducing the choices of starting model. This represents a significant improvement over previous algorithms, where the problem of convergence has been solved, but the equally complicated starting model problem has not. A novel unified modeling-inversion notation has been created through the use of indicator matrices.

This thesis also introduces a previously unknown scattering behavior: scattering from homogeneous cylinders with different permittivities can produce very similar scattered fields. The difference in permittivities that yields very similar scattered fields follows a regular pattern. The approximate non-uniqueness of the scattered field is resolved by using multiple frequencies.

Chapter 2: Examination of a 2-D homogeneous cylinder

2.0 Introduction

This chapter examines scattering from a homogenous dielectric circular cylinder, illuminated by a TM wave. This problem is examined to give insight into scattering from an inhomogeneous cylinder. Initially, the solution to scattering from a cylinder is presented. Next, the similarity between the scattering patterns of different permittivity values is examined. This provides insight into the inverse problem.

2.1 2-D Scattering Solution

Figure 2.1: Geometry of the scattering problem.

The geometry of the system is shown in Figure 2.1. The origin, for both Cartesian coordinates and polar coordinates, is at the center of the cylinder. For the derivation, the time factor $e^{j\omega t}$ is assumed and phasor quantities are used.
A TM wave, where the E field is parallel to the cylinder axis, is assumed and can be written as Equation 2.1, with $k$ the wave number and $E_z^i$ the amplitude of the z component of the incident field.

$$\mathbf{E}^i(x,y) = E_z^i e^{-jkx}\,\hat{z} \qquad (2.1)$$

Because the problem involves cylindrical geometry, rewriting Equation 2.1 as a Bessel function expansion makes the problem easier to solve. Rewriting Equation 2.1 as a Bessel function expansion in polar coordinates, Equation 2.2 is obtained [Harrington, 1961].

$$\mathbf{E}^i(\rho,\phi) = E_z^i \sum_{n=-\infty}^{\infty} j^{-n} J_n(k\rho)\, e^{jn\phi}\,\hat{z} \qquad (2.2)$$

The scattering problem can be solved as a boundary value problem. The boundary is defined as the surface of the cylinder. At infinity, the scattered field is expected to be zero.

Figure 2.2: Definition of electromagnetic fields inside and outside of the cylinder.

The boundary conditions are presented in Equation 2.3a and Equation 2.3b and are applied at the boundary of the cylinder. The tangential components of the total electric ($\mathbf{E}$) and magnetic ($\mathbf{H}$) field must be continuous across the boundary of the cylinder.

$$\hat{n}\times\left(\mathbf{H}_1(a,\phi)-\mathbf{H}_2(a,\phi)\right) = 0 \qquad (2.3a)$$
$$\hat{n}\times\left(\mathbf{E}_1(a,\phi)-\mathbf{E}_2(a,\phi)\right) = 0 \qquad (2.3b)$$

Equation 2.4 is a convenient representation of the resulting field, with a known and an unknown portion. The incident field is the known portion and the scattered field is the unknown portion.

$$\mathbf{E}_1(\rho,\phi) = \mathbf{E}^{inc}(\rho,\phi) + \mathbf{E}^{scat}(\rho,\phi) \qquad (2.4)$$

Upon examining Equation 2.3a and Equation 2.3b, it appears that there are 4 unknowns and 2 boundary conditions. To reduce the number of unknowns to 2, Equation 2.5a is used to link E and H. The explicit relationship in polar coordinates is given in Equation 2.5b.

$$\nabla\times\mathbf{E} = -j\omega\mu(\omega)\,\mathbf{H} \qquad (2.5a)$$
$$\mathbf{H} = \frac{1}{-j\omega\mu(\omega)}\left(\frac{1}{\rho}\frac{\partial E_z}{\partial \phi}\,\hat{\rho} - \frac{\partial E_z}{\partial \rho}\,\hat{\phi}\right) \qquad (2.5b)$$

The scattered field is assumed to be written as Equation 2.6 [Harrington, 1961], with unknown coefficients $a_n$, and the total field exterior to the cylinder satisfies Equation 2.4.
The total field inside the cylinder can be represented as Equation 2.7a, with the unknown coefficients $c_n$ [Harrington, 1961]. In Equation 2.7b, $\varepsilon_r$ is the permittivity inside the cylinder.

$$\mathbf{E}^{scat} = E_z^i \sum_{n=-\infty}^{\infty} j^{-n} a_n H_n^{(2)}(k\rho)\, e^{jn\phi}\,\hat{z} \qquad (2.6)$$
$$\mathbf{E}_2 = E_z^i \sum_{n=-\infty}^{\infty} j^{-n} c_n J_n(k_d\rho)\, e^{jn\phi}\,\hat{z} \qquad (2.7a)$$
$$k_d = k\sqrt{\varepsilon_r} \qquad (2.7b)$$

By applying the TM assumption, Equation 2.3b can be rewritten as Equation 2.9a, which immediately implies Equation 2.9b because the unit vector $\hat\phi$ cannot be zero.

$$\hat{n}\times(\mathbf{E}_1-\mathbf{E}_2) = \hat\rho\times(E_1-E_2)\,\hat{z} = -(E_1-E_2)\,\hat\phi = 0 \qquad (2.9a)$$
$$E_1 - E_2 = 0 \qquad (2.9b)$$

Applying Equation 2.5b to Equation 2.3a gives Equation 2.10. The normal vector to a circle is $\hat\rho$; the term involving $\hat\rho$ in Equation 2.3a disappears because of the $\hat{n}\times$ operation.

$$\frac{\partial E_1}{\partial \rho} - \frac{\partial E_2}{\partial \rho} = 0 \qquad (2.10)$$

By substituting Equations 2.6 and 2.7a into Equation 2.9 and Equation 2.10, a system of 2 equations with 2 unknowns is produced. The 2 equations are Equation 2.11a and Equation 2.11b.

$$a_n H_n^{(2)}(ka) + J_n(ka) - c_n J_n(k_d a) = 0 \qquad (2.11a)$$
$$a_n k H_n^{(2)\prime}(ka) + k J_n'(ka) - c_n k_d J_n'(k_d a) = 0 \qquad (2.11b)$$

The explicit solution for $a_n$ is given by Equation 2.12. The only coefficient needed is $a_n$, because the solution to the field inside the cylinder is not needed.

$$a_n = -\frac{J_n(ka)}{H_n^{(2)}(ka)}\cdot
\frac{\dfrac{\varepsilon_r\, J_n'(k_d a)}{k_d a\, J_n(k_d a)} - \dfrac{J_n'(ka)}{ka\, J_n(ka)}}
     {\dfrac{\varepsilon_r\, J_n'(k_d a)}{k_d a\, J_n(k_d a)} - \dfrac{H_n^{(2)\prime}(ka)}{ka\, H_n^{(2)}(ka)}} \qquad (2.12)$$

2.2 How to compare observations of different permittivity values

This thesis is concerned with the inverse scattering problem. The similarities between the scattered fields are now examined as a function of permittivity, yielding insight into the inverse problem. The scattered field is expected to be very similar for permittivity values that are close together and significantly different for permittivity values that are far apart. The variation of the scattered field at different permittivity values is quantified in Equation 2.13, where $M$ is the number of observation points.

$$V(\varepsilon_1,\varepsilon_2) = \frac{1}{M}\sum_{m=1}^{M}\left|E_z^s(\varepsilon_1,\mathbf{r}_m) - E_z^s(\varepsilon_2,\mathbf{r}_m)\right|^2 \qquad (2.13)$$

Equation 2.13 is clearly small when two scattered fields are similar.
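As a concrete check on the formulas above, the sketch below evaluates the scattering coefficients of Equation 2.12 and the variation of Equation 2.13 using SciPy's Bessel and Hankel functions, and sanity-checks the expansion of Equation 2.2. This is an illustrative reconstruction, not the thesis code (which was written in MatLab); the truncation order N and the observation radius are assumptions:

```python
import numpy as np
from scipy.special import jv, jvp, hankel2, h2vp

N = 30                                   # ad hoc truncation of the mode sum
n = np.arange(-N, N + 1)

def scatter_coeffs(eps_r, ka):
    """Coefficients a_n of Equation 2.12 (common factors cleared) for a
    cylinder of electrical size ka and relative permittivity eps_r."""
    kda = ka * np.sqrt(eps_r)
    num = jv(n, kda) * jvp(n, ka) - np.sqrt(eps_r) * jvp(n, kda) * jv(n, ka)
    den = np.sqrt(eps_r) * jvp(n, kda) * hankel2(n, ka) - jv(n, kda) * h2vp(n, ka)
    return num / den

def scattered_field(eps_r, ka, rho, phis):
    """Scattered E_z of Equation 2.6 (unit incident amplitude) at radius
    rho (in units of the cylinder radius a) and angles phis."""
    a_n = scatter_coeffs(eps_r, ka)
    radial = (1j ** (-n)) * a_n * hankel2(n, ka * rho)
    return np.array([np.sum(radial * np.exp(1j * n * p)) for p in phis])

def variation(eps1, eps2, ka, phis, rho=1.5):
    """Equation 2.13: mean squared difference of two scattered fields."""
    d = scattered_field(eps1, ka, rho, phis) - scattered_field(eps2, ka, rho, phis)
    return np.mean(np.abs(d) ** 2)

# Sanity check of Equation 2.2: the truncated Bessel sum reproduces the
# plane wave e^{-jkx} at a sample point.
k, rho0, phi0 = 2 * np.pi, 1.3, 0.7
expansion = np.sum((1j ** (-n)) * jv(n, k * rho0) * np.exp(1j * n * phi0))
plane_wave = np.exp(-1j * k * rho0 * np.cos(phi0))

phis = np.deg2rad(np.arange(360))        # 360 observation points, 1 degree apart
```

A useful consistency check is that a cylinder with the background permittivity scatters nothing: for eps_r = 1 every coefficient a_n vanishes, so the variation between identical permittivities is exactly zero while distinct permittivities give a positive variation.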
2.3 Numerical results

Equation 2.13 is examined in this section. The geometric configuration of the observation points is a circle, shown in Figure 2.3.

Figure 2.3: Geometric configuration of observation points. There are 360 observation points spaced 1 degree apart.

Figure 2.4 is a 2-D variation surface computed using Equation 2.13. The x and y axes represent $\varepsilon_1$ and $\varepsilon_2$ in Equation 2.13, respectively. The line where $\varepsilon_1$ equals $\varepsilon_2$ represents the global minimum. There are also several bands of local minima approximately parallel to the $\varepsilon_1 = \varepsilon_2$ line.

Figure 2.4: Variation surface for permittivities from 1 to 10. The shading represents the variation value. The line where $\varepsilon_1$ equals $\varepsilon_2$ is the global minimum.

Figure 2.5 shows slices of Figure 2.4 obtained by fixing $\varepsilon_1$ and varying $\varepsilon_2$ from 1 to 10. Examination of a slice of the variation surface at $\varepsilon_1$ equals 9.9 shows the existence of local minima; there are two very deep local minima. The variation equals 0.0359 at the closest local minimum. Because 0.0359 is small compared to the maximum variation of approximately 0.8, the two scattered fields are very similar. The foregoing case of $\varepsilon_1$ equals 9.9 is shown in the bottom right corner of Figure 2.5. The other variation plots exhibit a similar behavior, which is tabulated in Table 2.1.

Figure 2.5: Variation slices for a homogeneous cylinder of radius 1λ and observation distance of 1.5λ. Top left, cylinder with permittivity of 2.5; top right, cylinder with permittivity of 5.9; bottom left, cylinder with permittivity 6.9; bottom right, cylinder with permittivity 9.9.

The local minima are problematic for inversion using gradient descent search techniques. If a starting permittivity value far from the true permittivity value is chosen, the search algorithm may get trapped inside a local minimum.

Table 2.1: Local minimum values at 4 discrete points.
Permittivity | Deepest local minimum
2.5          | 0.0997
5.9          | 0.0218
6.9          | 0.0225
9.9          | 0.0359

Equation 2.7 is identical to a Discrete Time Fourier Transform (DTFT) if $\rho$ is fixed, $\phi$ replaces the frequency variable, and $a_n$ replaces the coefficient in the DTFT. The DTFT has a one-to-one mapping between the infinite set of coefficients and the continuous signal. For Equation 2.13, if the two minima are exactly zero then Equation 2.14a must be true for all $n$ values. By combining Equation 2.12 and Equation 2.14a, and simplifying, Equation 2.14b is produced.

$$a_n(\varepsilon_1) - a_n(\varepsilon_2) = 0 \qquad (2.14a)$$
$$\frac{J_n(\alpha\delta_1)}{\delta_1 J_n'(\alpha\delta_1)} = \frac{J_n(\alpha\delta_2)}{\delta_2 J_n'(\alpha\delta_2)} \qquad (2.14b)$$
$$\alpha = ka \qquad (2.14c)$$
$$\delta_1 = \sqrt{\varepsilon_1} \qquad (2.14d)$$
$$\delta_2 = \sqrt{\varepsilon_2} \qquad (2.14e)$$

Figure 2.6: Zeros of $a_n(\varepsilon_1) - a_n(\varepsilon_2)$ for $\varepsilon_1$ equals 5. The x's mark the locations where the equation is satisfied.

Figure 2.6 is an examination of Equation 2.14a with fixed $\varepsilon_1$. Note that the zeros in $\varepsilon_2$ do not line up for different $n$ indexes; however, they are close. This explains the local minima behavior seen in Figure 2.5. The deep local minima are problematic for inversion. The multiple minima imply that, in the presence of noise, uniquely determining the permittivity value may not be possible. By adding the variation from multiple frequencies, the contrast between the local minima and the global minimum can be increased.

$$V_{total}(\varepsilon_1,\varepsilon_2) = \sum_{k} V(f_k;\varepsilon_1,\varepsilon_2) \qquad (2.15)$$

Figure 2.7 illustrates the use of multiple frequencies. Three values of $k$ are used; the lowest value is 20% below the highest value. The local minima are significantly higher than those shown in Figure 2.5. For the total variation, the global minimum stays fixed.

Figure 2.7: Multifrequency (k = 1.6π, 1.8π, 2π) variation slices for a homogenous cylinder of radius 1λ and observation distance of 1.5λ.
Top left: permittivity 2.5; top right: permittivity 5.9; bottom left: permittivity 6.9; bottom right: permittivity 9.9.

The advantage of using multiple frequencies is illustrated more dramatically in Figure 2.8, a variation surface similar to that of Figure 2.4 but computed with multiple frequencies. Figure 2.8 uses six frequencies, and all the deep local minima have been eliminated.

Figure 2.8: Multifrequency (k = π, 1.2π, 1.4π, 1.6π, 1.8π, 2π) variation surface for permittivities from 1 to 10.

2.4 Conclusion

This chapter shows that the use of multiple frequencies is necessary for the recovery of the dielectric constant of a homogeneous cylinder. Without multiple frequencies, scattering patterns from different permittivities may look very similar; in the presence of noise, these scattering patterns will be indistinguishable. Using more than one frequency makes the scattering patterns from different permittivities more distinguishable, and changes the variation in such a way that a gradient search technique is more likely to work.

Chapter 3: Mathematical background for the tomography algorithm

3.0 Introduction

This chapter is separated into 2 main parts. The first part develops the mathematical background needed for substitution into Equation 3.1. The second part develops a constrained set of linear equations from Equation 3.1. Once the linear system is developed, the solution for Δε', which forms the basis for the inversion algorithm, can be found:

min_{Δε'} Φ = ||d_ref − d(ε' + Δε')||² / (N_f N°) + β ||F(ε' + Δε')||² / N^I    (3.1)

In Equation 3.1, J, F and d need to be mathematically derived. d_ref refers to a measured scattered field. J and d are derived from Richmond's Method for calculating scattering from an infinite dielectric cylinder: d represents the scattered field, and J represents the Jacobian matrix of the scattered field.
F represents a measure of flatness, and is known as the flatness matrix. Equation 3.1 represents the minimization of the difference between a measured scattered field and a predicted scattered field (the first term), while penalizing large variations of the dielectric permittivity (the second term). In Equation 3.1, N^I represents the number of inversion cells, N° represents the number of observation locations, and N_f represents the number of frequencies measured at each observation location. Each observation is defined by its frequency, resulting in multiple frequency observations at a particular location.

All the values in Equation 3.1 are real. Because time-harmonic electric fields are complex, the electric fields are separated into real and imaginary parts for the inversion, giving rise to a completely real Equation 3.1. This avoids the need for the complex conjugate operation in Equation 3.1.

3.1 Richmond's Method

Richmond's Method [Richmond, 1965] is used to find the scattered field for a single frequency. It is based on discretizing an integral equation and enforcing the integral equation at a fixed set of points, producing a set of linear equations.

3.2 The development of the integral equation for Richmond's Method

Richmond's Method constructs the integral equation by first stating the total electric field as the sum of the field due to the dielectric body (the scattered field, E^s) and the incident field, E^i. The dielectric body is replaced by a current distribution using the volume equivalence theorem [Harrington, 1961]. The replacement is accomplished with Equation 3.2a, which gives the equivalent current at a point in the dielectric filament. The volume equivalence theorem states that a dielectric volume with electric field E radiates the same field as the current written in Equation 3.2a.
J = jω(ε − ε₀)E = jω(ε_r ε₀ − ε₀)E = jωε₀(ε_r − 1)E    (3.2a)

dE^s(ρ) = −(ωμ₀/4) J H₀⁽²⁾(kρ) dS'    (3.3a)

ρ(x,y) = √((x − x')² + (y − y')²)    (3.3b)

Associated with each current filament is a radiated field, shown in Equation 3.3a. Integrating over the current distribution gives an integral equation for the scattered field, shown in Equation 3.4.

E^s(x,y) = −(jk²/4) ∬ (ε_r − 1) E(x',y') H₀⁽²⁾(kρ) dx' dy'    (3.4)

To obtain Equation 3.5 the electric field is written as the sum of the incident and scattered fields (Equation 2.4). Equation 3.5 represents the total electric field at any point in space.

E(x,y) = E^i(x,y) − (jk²/4) ∬ (ε_r − 1) E(x',y') H₀⁽²⁾(kρ) dx' dy'    (3.5)

Next, the integration domain is broken up into N_m square regions, and Equation 3.5 is separated into a summation of integrals over each square region. The permittivity and the electric field are assumed to be constant within each square region; the electric field in region n is denoted by E_n and the permittivity by ε_n. The electric field and the permittivity can then be taken out of the integrals, resulting in Equation 3.6a.

E_m + (jk²/4) Σ_{n=1}^{N_m} (ε_n − 1) E_n ∬_{cell n} H₀⁽²⁾(kρ_m) dx' dy' = E^i_m    (3.6a)

ρ_m = √((x_m − x')² + (y_m − y')²)    (3.6b)

The integral can be written in closed form if the integral over the square region is approximated by an integral over a circular region with the same area. Letting a_n denote the radius of the circle with the same area as cell n, the closed form is

(jk²/4) ∬_{cell n} H₀⁽²⁾(kρ_m) dx' dy' = (j/2)[πka_n H₁⁽²⁾(ka_n) − 2j]   if m = n
(jk²/4) ∬_{cell n} H₀⁽²⁾(kρ_m) dx' dy' = (jπka_n/2) J₁(ka_n) H₀⁽²⁾(kρ_mn)   if m ≠ n    (3.7)

Enforcing Equation 3.6a at the center of each region gives a linear system. Substituting Equation 3.7 into Equation 3.6a gives the system of linear equations presented in Equation 3.8a.

Σ_{n=1}^{N_m} C_mn E_n = E^i_m    (3.8a)

C_mn = 1 + (ε_n − 1)(j/2)[πka_n H₁⁽²⁾(ka_n) − 2j]   if m = n    (3.8b)

C_mn = (jπka_n/2)(ε_n − 1) J₁(ka_n) H₀⁽²⁾(kρ_mn)   if m ≠ n    (3.8c)

3.3 The scattered field

For the scattered field external to the cylinder, Equation 3.9a [Richmond, 1965] is used once the internal electric field, E, has been computed.

E^s(x,y) = −(jπk/2) Σ_{n=1}^{N_m} (ε_n − 1) E_n a_n J₁(ka_n) H₀⁽²⁾(kρ_n(x,y))    (3.9a)

ρ_n(x,y) = √((x − x_n)² + (y − y_n)²)    (3.9b)

3.4 Observations for inversion

There are N° observation locations and N_f frequencies are measured at each location. This gives N complex observations (N = N_f N°). When the scattered field, E^s, is separated into real and imaginary parts, 2N real observations are produced. For storage purposes the real parts are stored in the top half of the vector and the imaginary parts in the bottom half, and the observations are grouped by frequency. Figure 3.1 shows the structure of the d_prd vector.

Figure 3.1: Arrangement of the multi-frequency observation vector (real parts of the N_f frequency groups on top, imaginary parts below).

Figure 3.2: Illustration of the mapping concept. The small circles represent the centers of the modeling cells and the large square regions represent the inversion cells. The lines connect the modeling region to the inversion region.

3.5 Gridding for the Jacobian calculation

Before the Jacobian calculation can be derived, the concept of using different inversion and modeling grids is introduced. The use of two grids is necessary owing to the competing demands of inversion and modeling. Modeling requires a minimum of 10 squares per wavelength: the finer the grid spacing, the smaller the modeling error relative to the physical system. This is problematic for inversion; numerical experiments have shown that the inversion becomes more unstable as more degrees of freedom are added, so a coarse grid works best for inversion. The modeling and inversion therefore use different grids, and a mapping is created between the two grids to accommodate their competing demands. This mapping directly affects the Jacobian calculation. Let there be N^I inversion cells and N_m modeling cells.
For each inversion cell there are M^n modeling cells mapped to it; M^n can differ between inversion cells n. Equation 3.10 must hold:

N_m = Σ_{n=1}^{N^I} M^n    (3.10)

It is convenient to first introduce the indicator matrix, which maps the inversion cells to the modeling cells. The indicator matrix satisfies Equation 3.11a, where G is an N_m × N^I matrix, G_n is a vector of length N^I that makes up row n of G, and G_nb is a scalar element of G. The vector ε' denotes the inversion cells and the vector ε denotes the modeling cells. Equation 3.11c writes the n'th modeling cell as a function of the inversion cells.

ε = Gε'    (3.11a)

G = [G_1; G_2; ...; G_{N_m}]    (3.11b)

ε_n = Σ_{b=1}^{N^I} ε'_b G_nb   (where n = 1..N_m)    (3.11c)

For this thesis, each G_n vector contains only one non-zero element, and that value is 1. The matrix element G_nk equals 1 if and only if the n'th modeling cell maps to the k'th inversion cell. G is sparse because most of the modeling cells do not map into the k'th inversion cell.

Equations 3.8a, 3.8b and 3.8c can be rewritten with the indicator notation.

Σ_{n=1}^{N_m} C_mn E_n = E^i_m    (3.12a)

C_mn = 1 + (Σ_{b=1}^{N^I} ε'_b G_nb − 1)(j/2)[πka_n H₁⁽²⁾(ka_n) − 2j]   if m = n    (3.12b)

C_mn = (jπka_n/2)(Σ_{b=1}^{N^I} ε'_b G_nb − 1) J₁(ka_n) H₀⁽²⁾(kρ_mn)   if m ≠ n    (3.12c)

ρ_mn = √((x_m − x_n)² + (y_m − y_n)²)    (3.12d)

The permittivity variable is the same for all modeling cells mapped to the same inversion cell. Equation 3.5 is only for a single frequency; for multiple frequencies the forward modeling needs to be repeated for each frequency.

3.6 Derivation of the Jacobian for locations internal to the dielectric body

The Jacobian matrix, J, requires ∂E_n/∂ε'_k for all combinations of n and k, where k ranges from 1 to N^I and n ranges from 1 to N_m. To find ∂E_n/∂ε'_k, Equation 3.13a needs to be solved; solving it finds ∂E_n/∂ε'_k for all n simultaneously. The procedure needs to be repeated once for each k.
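Because every value of k shares the same matrix C, a single factorization can serve all N^I right-hand sides. The following is a stdlib-only Python sketch of that idea (the thesis uses MATLAB's lu; `lu_factor`/`lu_solve` here are illustrative helper names, not thesis code):

```python
# Sketch: factor the system matrix once, then solve one right-hand side
# per inversion cell k, as Section 3.6 requires.

def lu_factor(A):
    """Doolittle LU with partial pivoting; returns (LU, perm)."""
    n = len(A)
    LU = [row[:] for row in A]
    perm = list(range(n))
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(LU[r][col]))
        LU[col], LU[p] = LU[p], LU[col]
        perm[col], perm[p] = perm[p], perm[col]
        for r in range(col + 1, n):
            LU[r][col] /= LU[col][col]
            for c in range(col + 1, n):
                LU[r][c] -= LU[r][col] * LU[col][c]
    return LU, perm

def lu_solve(LU, perm, b):
    n = len(LU)
    y = [b[perm[i]] for i in range(n)]
    for i in range(n):               # forward substitution (unit diagonal L)
        for j in range(i):
            y[i] -= LU[i][j] * y[j]
    x = y[:]
    for i in reversed(range(n)):     # back substitution
        for j in range(i + 1, n):
            x[i] -= LU[i][j] * x[j]
        x[i] /= LU[i][i]
    return x

A = [[4.0, 1.0], [2.0, 3.0]]
LU, perm = lu_factor(A)
# one factorization, reused for every right-hand side
xs = [lu_solve(LU, perm, b) for b in ([1.0, 0.0], [0.0, 1.0])]
```

The same pattern appears in the MATLAB implementation of Chapter 4, where the factors of C are passed into the Jacobian routine and reused.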
There are N^I sets of linear equations to solve, one for each inversion cell k.

Σ_{n=1}^{N_m} C_mn (∂E_n/∂ε'_k) + Σ_{n=1}^{N_m} (∂C_mn/∂ε'_k) E_n = 0    (3.13a)

Σ_{n=1}^{N_m} C_mn (∂E_n/∂ε'_k) = − Σ_{n=1}^{N_m} (∂C_mn/∂ε'_k) E_n    (3.13b)

∂C_mn/∂ε'_k = G_nk (j/2)[πka_m H₁⁽²⁾(ka_m) − 2j]   if m = n and G_nk ≠ 0    (3.13c)

∂C_mn/∂ε'_k = G_nk (jπka_n/2) J₁(ka_n) H₀⁽²⁾(kρ_mn)   if m ≠ n and G_nk ≠ 0    (3.13d)

∂C_mn/∂ε'_k = 0   otherwise    (3.13e)

Solving the linear system stated in Equation 3.8a gives the solution for E_n, leaving the ∂E_n/∂ε'_k terms as the unknowns in Equation 3.13a. The resulting system looks like the forward system, with a different right hand side and with the unknowns changed from E_n to ∂E_n/∂ε'_k. Since C is the same in every case, the LU decomposition technique speeds up the computation of ∂E_n/∂ε'_k, as the same decomposition can be used N^I + 1 times.

3.7 Derivation of the Jacobian matrix for the scattered field

Next, the scattering equation is examined. Rewriting Equation 3.9a using the indicator notation results in Equation 3.14.

E^s(x,y) = −(jπk/2) Σ_{n=1}^{N_m} (Σ_{b=1}^{N^I} ε'_b G_nb − 1) E_n a_n J₁(ka_n) H₀⁽²⁾(kρ_n(x,y))    (3.14)

Assuming there are N° observation points results in a set of N° coordinates (x_m, y_m), where the subscript m distinguishes the observation points.

E^s(x_m, y_m) = E^s_m = −(jπk/2) Σ_{n=1}^{N_m} (Σ_{b=1}^{N^I} ε'_b G_nb − 1) E_n a_n J₁(ka_n) H₀⁽²⁾(kρ_n(x_m, y_m))    (3.15)

The derivative of Equation 3.15 is taken with respect to the k'th inversion cell, giving Equation 3.16. The terms on the right hand side are all known quantities, so ∂E^s_m/∂ε'_k can be calculated directly from Equation 3.16.

∂E^s_m/∂ε'_k = −(jπk/2) [ Σ_{n=1}^{N_m} G_nk E_n a_n J₁(ka_n) H₀⁽²⁾(kρ_n(x_m,y_m)) + Σ_{n=1}^{N_m} (Σ_{b=1}^{N^I} ε'_b G_nb − 1)(∂E_n/∂ε'_k) a_n J₁(ka_n) H₀⁽²⁾(kρ_n(x_m,y_m)) ]    (3.16)

3.8 The Jacobian for multiple frequencies

The Jacobian matrix, J, requires the calculation of ∂E^s_m/∂ε'_k for every k value and for every frequency. Figure 3.3 shows the arrangement of the Jacobian matrix. The top half of the Jacobian matrix contains the real part of ∂E^s/∂ε'.
The bottom half of the Jacobian matrix contains the imaginary part of ∂E^s/∂ε'. Each half is further divided into groups of frequencies: all the physical observations from a single frequency are stored within consecutive rows. The columns represent the index k, and J_mn is a specific element inside J.

Δd = J Δε'

Figure 3.3: Structure of the multi-frequency Jacobian (real parts of ∂E^s_m/∂ε'_k for each frequency group on top, imaginary parts below).

3.9 The flatness term

There are two ways to handle the flatness term: the one-sided difference method and the central difference method. The flatness term is defined as the first order approximation of the derivative of the permittivity distribution along the x and y directions at a discrete set of points. In this thesis the flatness matrix uses the one-sided difference method to approximate the first order derivatives; the derivatives with respect to x and y are taken between the centers of the inversion cells. F, the flatness matrix, encodes Equations 3.17 and 3.18.

∂ε(x,y)/∂x = [ε(x + h, y) − ε(x, y)]/h + O(h)    (3.17)

∂ε(x,y)/∂y = [ε(x, y + h) − ε(x, y)]/h + O(h)    (3.18)

Since the grid is regularly spaced, and the derivative approximation involves only neighboring cells, the flatness matrix is sparse. The central difference approximation is not used because it constrains not the immediate neighbors but every other neighbor. This leads to two independent sheets of permittivity values connected only at the edge of the cylinder. Examining Equations 3.19a and 3.19b, it is clear that ε(x,y) does not appear in the central difference equations and is therefore not linked directly to its neighbors.

∂ε(x,y)/∂x = [ε(x + h, y) − ε(x − h, y)]/(2h) + O(h²)    (3.19a)

∂ε(x,y)/∂y = [ε(x, y + h) − ε(x, y − h)]/(2h) + O(h²)    (3.19b)

3.10 The N^I system of linear equations

From Equation 3.1, N^I linear equations can be derived to find the minimum of Φ.
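The one-sided flatness operator of Equations 3.17 and 3.18 can be realized as a small sparse matrix. Below is a minimal Python sketch (the thesis implements this in MATLAB as Listing 4.3); the `neighbors` list of (right, top) indices is a hypothetical stand-in for the thesis' neighbors array:

```python
# Sketch: one-sided differences toward the right and top neighbours;
# rows whose cell has no neighbour in that direction are left zero.

def flatness_matrix(neighbors, h):
    """neighbors[i] = (right, top) index of cell i, or None at the boundary.
    Returns the 2N x N one-sided difference matrix as nested lists."""
    n = len(neighbors)
    F = [[0.0] * n for _ in range(2 * n)]
    for i, (right, top) in enumerate(neighbors):
        if right is not None:          # x-derivative rows: top half
            F[i][right] = 1.0 / h
            F[i][i] = -1.0 / h
        if top is not None:            # y-derivative rows: bottom half
            F[n + i][top] = 1.0 / h
            F[n + i][i] = -1.0 / h
    return F

# 1-D toy: three cells in a row, spacing h = 0.5
F = flatness_matrix([(1, None), (2, None), (None, None)], 0.5)
eps = [1.0, 2.0, 4.0]
d = [sum(Fij * e for Fij, e in zip(row, eps)) for row in F]
# d[0] = (2-1)/0.5 = 2.0, d[1] = (4-2)/0.5 = 4.0
```

Applying F to a permittivity vector yields the stacked x and y difference quotients that the regularization term β||Fε||² penalizes.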
If Equation 3.1 is linearized with respect to Δε', the system becomes quadratic in the variable Δε'. A quadratic system has a single minimum, which occurs where the gradient vector is zero. Taking the derivative of Equation 3.1 with respect to every Δε'_k and setting each equation to zero produces N^I linear equations. Using matrix-vector notation and a linear approximation of the term d(ε' + Δε'), Equation 3.20 can be written; the Jacobian matrix represents the first order derivatives of the term d(ε'). The foregoing are shown in the equations that follow.

min_{Δε'} Φ = ||d_ref − d(ε' + Δε')||² / (N_f N°) + β ||F(ε' + Δε')||² / N^I    (3.1a)

min_{Δε'} Φ = (d_ref − d(ε') − JΔε')ᵀ(d_ref − d(ε') − JΔε') / (N_f N°) + β (F(ε' + Δε'))ᵀ(F(ε' + Δε')) / N^I    (3.20)

[∂Φ/∂Δε'_1, ∂Φ/∂Δε'_2, ..., ∂Φ/∂Δε'_{N^I}]ᵀ = 0    (3.21)

The minimization problem stated in Equation 3.20 satisfies Equation 3.21, and the solution can be written as Equation 3.22. Equation 3.23 is the equation for updating the epsilon values for the next iteration. The algorithm is shown below:

1. Start with an initial guess for the permittivity.
2. Calculate Δε' from Equation 3.22.

Δε' = [JᵀJ/(N_f N°) + βFᵀF/N^I]⁻¹ [Jᵀ(d_ref − d)/(N_f N°) − βFᵀFε'/N^I]    (3.22)

3. Update the permittivity with Equation 3.23.

ε'_new = ε' + Δε'    (3.23)

4. Repeat from step 2 until the stopping condition is reached.

Each repetition uses ε'_new as the starting point for the linear approximation; the Jacobian matrix and the predicted observations are recalculated for the new permittivity.

3.11 Conclusion

In this chapter an inversion technique based on least squares minimization is developed. The system is minimized with respect to Δε' using a linear model for scattering. The technique is iterative in nature, and successive iterations yield better solutions.
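As a sanity check of the update loop, Equations 3.22 and 3.23 collapse to scalars for a one-parameter problem. The following Python sketch uses a hypothetical forward model d(ε) = ε² in place of Richmond's Method, with F = 1 and all cell/observation counts equal to 1; it is an illustration of the iteration, not the thesis' implementation:

```python
# Toy, scalar version of the update loop (Equations 3.22-3.23).
# Assumed forward model: d(eps) = eps**2, so J = dd/deps = 2*eps.
# beta = 0 reproduces the pure Gauss-Newton update.

def invert(d_ref, eps, beta=0.0, n_iter=25):
    for _ in range(n_iter):
        d = eps * eps
        J = 2.0 * eps
        # Equation 3.22 collapsed to one unknown (F = 1, N_f*N_o = N_I = 1)
        delta = (J * (d_ref - d) - beta * eps) / (J * J + beta)
        eps = eps + delta          # Equation 3.23
    return eps

eps_hat = invert(d_ref=4.0, eps=1.0)   # converges toward the true value 2.0
```

With β = 0 and noiseless data the iteration recovers the exact value, mirroring the behavior reported in the abstract for the over-determined noiseless case.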
This chapter demonstrates how to calculate the Jacobian matrix for the scattered field from Richmond's Method for dielectric modeling, and how to incorporate a flatness term into the least squares minimization. The Jacobian calculation requires 2 stages: the first calculates a Jacobian for the internal field, and the second calculates the Jacobian matrix for the scattered field. The Jacobian calculation for the scattered field depends only on the internal field parameters.

Chapter 4: Implementation of the tomography algorithm

4.0 Introduction

The detailed mapping between the code and the equations presented in Chapter 3 is shown in this chapter.

Figure 4.1: Flow chart for the complete inversion system: setup software, (4.1) tessellate cylinder, get reference data, (4.2) calculate flatness matrix, then iterate over (4.3) calculate forward model, (4.4) calculate scattered field, (4.5) calculate Jacobian, calculate Δε', update ε, and decrease β until the stopping condition is reached. The numbers in brackets give the section where each calculation is presented; Section 4.6 represents the complete non-cooled system.

The flow chart for the algorithm is shown in Figure 4.1. This chapter explains how the equations in Chapter 3 are implemented in MatLab code. The code presented works with MatLab 5.3 and is upward compatible, and is presented in the same order as in the flow chart, in the sections that follow:

4.1) Tessellation
4.2) Calculating the flatness matrix
4.3) Calculating the internal field
4.4) Generating the observation matrix
4.5) The Jacobian calculation
4.6) Inversion function with a single β value
4.7) Inversion function with multiple β values

4.1 Tessellating the cylinder

Since the problem is two dimensional in nature (infinite cylinder), the tessellation of a circle is sufficient. The circle radius and grid spacing are the only two parameters needed for the automated generation of a uniformly tessellated circle.
To create the discretization, a regularly spaced set of squares is superimposed onto a circle. If the center of a square lies within the circle, the square is considered part of the set of squares that make up the circle. This mapping results in two types of tessellation error. The first, labeled A in Figure 4.2, is the inclusion of areas that are not part of the circle; the second, labeled B in Figure 4.2, is the exclusion of areas that are part of the circle. The circle is centered at the origin, and Equation 4.1 is the test for membership in the set of squares that make up the circle.

x² + y² < r²    (4.1)

Figure 4.2 describes the foregoing concept graphically. Listing 4.1, square_grid.m, is the MatLab implementation of this mapping. The parameter Circle_radius is the radius of the desired circle, and T_rad is half the length of the edge of a square.

Figure 4.2: Illustrative diagram of the discretization concept. The cells in grey are used to approximate the circle; the centers of the grey cells all lie within the circle. Notice the staircase effect at the edges. Finer spacing reduces the staircase but never eliminates it, which causes a modeling error because the approximate tessellation is never a perfect circle.
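The membership rule of Equation 4.1 is easy to sanity-check numerically. Here is a Python sketch of the same tessellation (the thesis' MATLAB version is square_grid.m in Listing 4.1); as the spacing shrinks, the covered area should approach πr²:

```python
# Sketch: keep every square whose centre satisfies x^2 + y^2 < r^2
# (Equation 4.1). Cell centres sit on a regular grid of spacing 2*t_rad.

def square_grid(circle_radius, t_rad):
    """Centres of the grid squares (edge length 2*t_rad) kept for the circle."""
    n = round(circle_radius / t_rad)
    axis = [-circle_radius + t_rad * (2 * i + 1) for i in range(n)]
    return [(x, y) for x in axis for y in axis
            if x * x + y * y < circle_radius ** 2]

cells = square_grid(1.0, 1 / 21)
# total covered area approaches pi * r^2 as the spacing shrinks,
# up to the staircase error discussed above
area = len(cells) * (2 / 21) ** 2
```

With the thesis' own spacing of 1/21 the staircase error is already small: the covered area differs from π by only a few percent.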
Listing 4.1 square_grid.m

function [Grid_points] = square_grid(Circle_radius, T_rad)
Y = (-Circle_radius + T_rad):T_rad*2:(Circle_radius + T_rad);  % X and Y axis
X = Y;
N = size(X,2);                 % number of divisions on the axis
Grid = zeros(N,N);             % large square grid
for LX = 1:N                   % test if the point is inside the circle
    for LY = 1:N
        if ((X(LX)^2 + Y(LY)^2) < Circle_radius^2)
            Grid(LX,LY) = 1;
        end
    end
end
[I, J] = ind2sub(size(Grid), find(Grid));
Grid_points = [X(I)' Y(J)'];

Related to the square_grid function is the neighbors function, shown in Listing 4.2. The neighbors function generates a list of neighbors for each cell; this function is essential for generating the flatness matrix, F. It takes the set of squares (Src) and the grid spacing (Trad) as parameters. The output is an N×4 array, where N is the number of squares. The first column holds the neighbor to the left (x − Δx), the second the neighbor to the right (x + Δx), the third the neighbor on top (y + Δy), and the fourth the neighbor below (y − Δy). The neighbors function decides whether a cell is a neighboring cell by computing the distance vector to that cell. The algorithm first eliminates the cells whose x or y coordinate differs from the reference cell by more than one grid spacing, leaving 5 candidate cells: one for each neighbor plus the reference cell itself. It then assigns each neighbor by examining the direction of the distance vector.

Listing 4.2 neighbors.m

% this function determines the nearest neighbors to a single point
function NeighList = neighbors(Src, Trad)
N = size(Src,1);
NeighList = zeros(N,4);
epsilon = 0.01*Trad;
for loop = 1:N
    % coarse filter that eliminates most of the calculations
    List1 = find(abs(Src(:,1) - Src(loop,1)) <= (2*Trad + epsilon));
    List2temp = find(abs(Src(List1,2) - Src(loop,2)) <= (2*Trad + epsilon));
    List2 = List1(List2temp);
    for loop2 = List2'
        deltax = Src(loop,1) - Src(loop2,1);
        deltay = Src(loop,2) - Src(loop2,2);
        r = sqrt(deltax^2 + deltay^2);
        if (r < (2*Trad + epsilon))
            if (deltax > epsilon)
                NeighList(loop,1) = loop2;
            elseif (deltax < -epsilon)
                NeighList(loop,2) = loop2;
            elseif (deltay > epsilon)
                NeighList(loop,3) = loop2;
            elseif (deltay < -epsilon)
                NeighList(loop,4) = loop2;
            end
        end
    end
end

4.2 Calculating the flatness matrix

Once the grid is generated, the flatness matrix can be calculated. The flatness matrix implements Equation 3.17 and Equation 3.18, the one-sided difference equations. It requires the list of neighbors generated with the previous function and the grid spacing as parameters. The neighbors list allows quick identification of the cell to the right of the current cell and the cell on top of it. The x derivatives are encoded in the top half of the matrix and the y derivatives in the bottom half.

Listing 4.3 gradientonesided.m

% This function generates the one-sided difference flatness matrix
% (Equations 3.17 and 3.18): top N rows for the x direction,
% bottom N rows for the y direction.
function Generate = gradientonesided(NeighList, Trad)
N = size(NeighList,1);
Matrix = zeros(2*N,N);
for loop = 1:N                 % the x loop
    index = loop;
    if (NeighList(index,2) ~= 0)
        Matrix(index, NeighList(index,2)) = 1;
        Matrix(index, index) = -1;
    end
end
for loop = (N+1):(2*N)         % the y loop
    index = loop - N;
    if (NeighList(index,3) ~= 0)
        Matrix(loop, NeighList(index,3)) = 1;
        Matrix(loop, index) = -1;
    end
end
Generate = Matrix/(2*Trad);

4.3 Calculating the internal field using Richmond's method

The internal field solution is a two-step procedure. The first step generates the permittivity-independent portion of the matrix; the second step generates the complete matrix and solves for E_n.

Σ_{n=1}^{N_m} C_mn E_n = E^i_m    (3.8a)

C in Equation 3.8a can be expanded so the permittivity values are explicit in the equation, resulting in Equation 4.3.

(I + D diag(ε − 1)) E = E^i    (4.3)

D is the permittivity-independent matrix and only needs to be calculated once for the algorithm. This saves time by not requiring N² Bessel function evaluations on each algorithm iteration. The first step, init_block_matrix_gen.m, is implemented in Listing 4.4.

Listing 4.4 init_block_matrix_gen.m

% precomputes the permittivity-independent part of the forward matrix
function success = init_block_matrix_gen(List_C, k, T_rad, FName)
N = size(List_C,1);
Diag_ind = 1:N+1:N^2;
F_mat = zeros(N,N);
Lumped_Constant = j*pi*k*T_rad*0.5.*besselj(1, k*T_rad);
for loop = 1:N
    p = sqrt((List_C(:,1) - List_C(loop,1)).^2 + ...
             (List_C(:,2) - List_C(loop,2)).^2);
    F_mat(loop,:) = (Lumped_Constant.*besselh(0,2,k.*p)).';
end
F_mat(Diag_ind) = (j/2)*(pi*k*T_rad*besselh(1,2,(k*T_rad)) - 2j);
save(FName, 'F_mat');
success = 1;

The second step calculates the matrix (I + D diag(ε − 1)). This calculation is presented in Listing 4.5, block_matrix_gen.m. After C has been generated, the internal fields can be found. Listing 4.6, sample1.m, demonstrates how to calculate the internal fields; it is clearly separated into 3 sections: setup, matrix generation, and solving for the internal fields.

Listing 4.5 block_matrix_gen.m

% returns the forward matrix for a given permittivity distribution
function [F_mat] = block_matrix_gen(epsilon, FName)
load(FName)    % load parameters stored by the init_block_matrix_gen function
N = size(F_mat,1);
for loop = 1:N
    F_mat(loop,:) = F_mat(loop,:).*(epsilon' - 1);
end
F_mat = F_mat + eye(N);    % add the identity matrix

Listing 4.6 sample1.m

% sample script for calculating the internal fields
% Setup
C_rad = 1;        % cylinder radius
T_rad = 1/21;     % tessellation radius
k = 2*pi;         % wave number
Fname = 'temp';   % file name for temporary storage
Grid = square_grid(C_rad, T_rad);   % generate the grid
N = size(Grid,1);
epsilon = 2*ones(N,1);              % define permittivity distribution
Ei = PlaneWave(Grid, k);            % define the incident field
init_block_matrix_gen(Grid, k, T_rad, Fname);
% calculate the epsilon-independent part of the C matrix
% generating the matrix and LU decomposing
C_Mat = block_matrix_gen(epsilon, Fname);
[L,U] = lu(C_Mat);
% solving for the internal fields
temp = L\Ei;
E = U\temp;

4.4 Generation of the observation matrix

The procedure for generating the scattered field is also split into a permittivity-independent step (step 1) and a permittivity-dependent step (step 2). To calculate the scattered field from the internal fields, Equation 3.9a is used.

E^s(x_m, y_m) = −(jπk/2) Σ_{n=1}^{N_m} (ε_n − 1) E_n a_n J₁(ka_n) H₀⁽²⁾(kρ_n(x_m, y_m))    (3.9a)

The solution for the scattered field can be written in the form of Equation 4.4, where B is calculated first and the actual scattered field is calculated second.

E^s = B diag(ε − 1) E    (4.4)

Listing 4.7, Init_Block_Sc_Field.m, and Listing 4.8, Block_Scattered_field.m, implement the two step procedure.
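The benefit of the split is that B carries every permittivity-independent factor and is computed once, while each iteration only applies Equation 4.4. A Python sketch with toy numbers (B here is a hypothetical stand-in, not computed from the Bessel-function expressions of Listing 4.7):

```python
# Sketch of Equation 4.4: E^s = B diag(eps - 1) E, with B precomputed
# (complex, N_obs x N_cells). Only this cheap product is repeated per
# iteration; the expensive Bessel evaluations live inside B.

def scattered_field(B, eps, E):
    """Apply E^s = B diag(eps - 1) E for nested-list B."""
    weighted = [(e - 1.0) * f for e, f in zip(eps, E)]
    return [sum(bij * wj for bij, wj in zip(row, weighted)) for row in B]

# hypothetical 2-observation, 2-cell example
B = [[1 + 1j, 0.5], [0.25j, 2.0]]
Es = scattered_field(B, eps=[2.0, 3.0], E=[1.0, 1j])
# Es == [1+2j, 4.25j]
```

The MATLAB pair Init_Block_Sc_Field / Block_Scattered_field below follows the same division of labor, with B stored to disk between the two steps.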
The function Init_Block_Sc_Field, which implements step 1, takes the grid (List_S), a list of observation points (List_O), the wave number (k), the grid spacing (T_rad), and a filename (FName) as arguments. The filename is used to store the results temporarily. The Block_Scattered_field function takes the permittivity distribution (epsilon), the internal fields (E) and the filename (FName) as input arguments, and loads the temporary results stored by Init_Block_Sc_Field.

Listing 4.7 Init_Block_Sc_Field.m

function success = Init_Block_Sc_Field(List_S, List_O, k, T_rad, FName)
NO = size(List_O,1);
NE = size(List_S,1);
ObsMat = zeros(NO,NE);
Lumped_Constant = -j*pi*k/2*besselj(1, k*T_rad)*T_rad;
for loop = 1:NO
    r = sqrt((List_S(:,1) - List_O(loop,1)).^2 + ...
             (List_S(:,2) - List_O(loop,2)).^2);
    ObsMat(loop,:) = (Lumped_Constant*besselh(0,2,k*r)).';
end
save(FName, 'ObsMat');
success = 1;

Listing 4.8 Block_Scattered_field.m

% this function returns the scattered field at the observation points
function [Es] = Block_Scattered_field(epsilon, E, FName)
load(FName);    % load stored calculations
Es = ObsMat*((epsilon - 1).*E);

4.5 The Jacobian calculation

The first task is the creation of a mapping between the modeling grid and the inversion grid. This mapping is the G of Equation 3.11a and is returned in the output List. The columns of the matrix G are stored compressed in List as cell arrays, where the indices of the non-zero elements of G form each array. The NetList function maps each modeling cell to the closest inversion cell.
It does this in two stages. First, it determines whether the center of a modeling cell lies within an inversion cell, using the Within subfunction, which checks whether a modeling grid cell falls inside the interval of the inversion cell. Second, it maps the cells that were not assigned by Within to the closest inversion cell.

Listing 4.9 NetList.m

% This is a mapping function for a square grid
function List = NetList(CorseGrid, CRad, FGrid, FRad)
NC = size(CorseGrid,1);
NF = size(FGrid,1);
used = ones(1,NF);
List = cell(1,NC);
% generate the mapping
for loopC = 1:NC
    temp = [];
    for loopF = 1:NF
        if (Within(CorseGrid(loopC,1), CorseGrid(loopC,2), ...
                   FGrid(loopF,1), FGrid(loopF,2), CRad))
            if (used(loopF) == 1)
                temp = [temp loopF];
                used(loopF) = 0;    % prevents duplicates
            end
        end
    end
    List{loopC} = temp;
end
% look for any points in the fine grid that haven't been assigned
for loopF = 1:NF
    if (used(loopF) == 1)
        % find the nearest neighbor
        loopC = 1;
        rMax = (CorseGrid(loopC,1) - FGrid(loopF,1))^2 + ...
               (CorseGrid(loopC,2) - FGrid(loopF,2))^2;
        Closest = loopC;
        for loopC = 1:NC
            r2 = (CorseGrid(loopC,1) - FGrid(loopF,1))^2 + ...
                 (CorseGrid(loopC,2) - FGrid(loopF,2))^2;
            if (r2 < rMax)
                Closest = loopC;
                rMax = r2;
            end
        end
        List{Closest} = [[List{Closest}] loopF];
    end
end

% helper function
function result = Within(X1, Y1, X2, Y2, rad)
if ((X2 > (X1 - rad)) & (X2 <= (X1 + rad)) & ...
    (Y2 > (Y1 - rad)) & (Y2 <= (Y1 + rad)))
    result = 1;
else
    result = 0;
end

The next task in the Jacobian calculation is the determination of the derivative of the internal fields, as computed by Equation 3.13a. BlockJacNL, in Listing 4.10, takes the LU decomposition of C as input.
Since the problem involves many right hand sides for the same matrix, the LU decomposition for a given C can be reused.

Listing 4.10 BlockJacNL.m

% Calculates the Jacobian of the internal fields for a mapped source
% list. The return value is an NF x NE matrix, one column per
% inversion cell.
% Parameters:
%   SrcF    - locations of the tessellation points
%   EF      - fields at the tessellation points
%   LF, UF  - lower and upper factors of lu(Mat), where Mat represents
%             the forward problem
%   epsilon - epsilon values of the coarse grid, stored as a column vector
%   Netlist - mapping from inversion cells to modeling cells
%   k       - wave number 2*pi/lambda
%   Crad    - corrected radius sqrt(4/pi)*tessellation radius
function Jac = blockJacNL(SrcF, EF, LF, UF, epsilon, ...
                          Netlist, k, Crad, FName)
% allocate space
NE = size(epsilon,1);
NF = size(SrcF,1);
Jac = zeros(NF,NE);
Map = zeros(NF,1);
load(FName);
for loop = 1:NE
    A = Netlist{loop};
    Map(A) = loop;
end
for loopE = 1:NE
    rhs = zeros(NF,1);
    for loopFeqn = 1:NF
        for loopFrhs = 1:NF
            if (Map(loopFrhs) == loopE)
                rhs(loopFeqn) = rhs(loopFeqn) + ...
                    F_mat(loopFeqn,loopFrhs)*EF(loopFrhs);
            end
        end
    end
    temp = -LF\rhs;
    Jac(:,loopE) = UF\temp;
end

The last task is the calculation of the Jacobian matrix for the scattered field, Equation 3.16, which is a simple summation of terms.

∂E^s_m/∂ε'_k = −(jπk/2) [ Σ_{n=1}^{N_m} G_nk E_n a_n J₁(ka_n) H₀⁽²⁾(kρ_n(x_m,y_m)) + Σ_{n=1}^{N_m} (Σ_{b=1}^{N^I} ε'_b G_nb − 1)(∂E_n/∂ε'_k) a_n J₁(ka_n) H₀⁽²⁾(kρ_n(x_m,y_m)) ]    (3.16)

The BlockAssembleJacobian function produces the complex Jacobian matrix.
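The complex-to-real stacking that the inversion script later applies to this matrix (real parts on top, imaginary parts below, matching the d_prd layout of Figure 3.1) is mechanical; a Python sketch with a toy matrix:

```python
# Sketch: a complex N_obs x N_cells Jacobian becomes a real
# 2*N_obs x N_cells matrix, real parts stacked above imaginary parts.

def split_complex(CJac):
    top = [[z.real for z in row] for row in CJac]
    bottom = [[z.imag for z in row] for row in CJac]
    return top + bottom

J = split_complex([[1 + 2j, 3 - 1j]])
# J == [[1.0, 3.0], [2.0, -1.0]]
```

Stacking rather than conjugating keeps every quantity in Equation 3.1 real, as noted in Section 3.0.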
Separating the matrix into real and imaginary parts will be done explicitly in the inversion script.

Listing 4.11 BlockAssembleJacobian.m

% assemble the complex Jacobian
function CJac = BlockAssembleJacobian(EJac, En, epsilon, ...
                    NetList, Src, Obs, k, Crad, FName)
NO = size(Obs,1);
NE = size(epsilon,1);
NF = size(Src,1);
CJac = zeros(NO,NE);
Map = zeros(NF,1);
LC = (j*pi*k/2)*Crad*besselj(1,k*Crad);
load(FName);
for loop1 = 1:NE
    temp = NetList{loop1};
    Map(temp) = loop1;
end
for loopE = 1:NE
    CJac(:,loopE) = ObsMat*((epsilon(Map)-1).*EJac(:,loopE));
    for loopO = 1:NO
        temp = NetList{loopE};
        for loopF2 = temp
            r = sqrt((Src(loopF2,1)-Obs(loopO,1)).^2 + ...
                     (Src(loopF2,2)-Obs(loopO,2)).^2);
            CJac(loopO,loopE) = CJac(loopO,loopE) - ...
                LC*En(loopF2)*besselh(0,2,k*r);
        end
    end
end
clear ObsMat;

4.6 Inversion function with a single beta value

All the steps, as shown in Figure 4.3, are now in place. All the elements of the inversion scheme can be placed in a function to simplify the inversion procedure; Listing 4.12 is that function. The function BasicInversion essentially runs the inversion algorithm for any single beta value and for a fixed number of iterations that is specified by the user. The loop seen in Figure 4.3 is implemented as a for loop. Cascading the β is done outside this function. Figure 4.3 outlines the algorithm performed in Listing 4.12.

Figure 4.3: Flow chart for the BasicInversion function (setup software, calculate forward model, calculate scattered field, calculate Jacobian, calculate Δε, update ε, repeat).
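The real/imaginary separation mentioned above amounts to stacking the complex Jacobian and residual into a real-valued system of twice the row count, since the permittivity update itself is real. A minimal sketch (Python/NumPy, hypothetical names):

```python
import numpy as np

def split_real_imag(jac_c, resid_c):
    """Stack a complex least-squares system [J, r] into an equivalent
    real-valued one: rows become [Re(J); Im(J)] and [Re(r); Im(r)]."""
    jac = np.vstack([jac_c.real, jac_c.imag])
    resid = np.concatenate([resid_c.real, resid_c.imag])
    return jac, resid
```

The stacked system has the same least-squares solution for a real unknown as the original complex one, which is why the update can be computed with purely real arithmetic.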
The setup portion of the software includes the encoding of all the parameters that must be chosen. The function loops through the Jacobian calculation and updates the permittivities. The calculation of Δε is done near the bottom of Listing 4.12 and implements Equation 3.22.

\Delta\varepsilon = \left( \frac{\alpha}{N_f N_o} J^T J + \frac{\beta}{N_\varepsilon} F^T F \right)^{-1} \left( \frac{\alpha}{N_f N_o} J^T \left( d_{ref} - d \right) - \frac{\beta}{N_\varepsilon} F^T F \varepsilon \right)   (3.22)

Listing 4.12 BasicInversion.m

% This is an attempt to abstract the inversion function to
% make it easier to run batches as well as analyze performance
function [epsilonC, Misfit] = BasicInversion(Ref_Obs, ... % reference data
    FileNames, ...        % filenames for the pre-calculated
                          % block matrices
    TempDirectory, ...    % where to store temporary files
    CylinderRad, ...      % parameters of the cylinder
    Corse_rad, ...        % the size of the inversion tessellation
    Fine_rad, ...         % the size of the modeling tessellation
    Corse_grid, Fine_grid, ...
    IEtable, ...
    Karray, ...           % an array of wave numbers
    EpsilonC_initial, ... % starting model, determined separately
    Obs_pts, ...          % observation points
    Niter, ...            % number of iterations
    picturename, ...
    alpha, beta, FDmatrix)
% Part 1: figure out the sizes of the matrices and vectors
NK = size(Karray,1);       % number of frequencies
NO = size(Obs_pts,1);      % number of observation points
NEF = size(Fine_grid,1);   % number of model cells
NEC = size(Corse_grid,1);  % number of inversion cells
CFine_rad = sqrt(4/pi)*Fine_rad; % corrected radius for
                                 % Richmond's method
epsilon = zeros(NEF,1);    % link inversion and model cells
epsilonC = EpsilonC_initial;
% Part 1.1: generate some filenames for temp directories
TempFile = cell(NK,1);
TempFile2 = cell(Niter,1);
for loopK = 1:NK
    TempFile{loopK} = sprintf('%s%i',TempDirectory,loopK);
end
for loopI = 1:Niter
    TempFile2{loopI} = sprintf('%sa%ib',picturename,loopI);
end
% Part 1.2: map the permittivity from inversion to modeling
for loop = 1:NEC
    epsilon(IEtable{loop}) = epsilonC(loop);
end
% Part 2: allocate space
Misfit = zeros(Niter,1);
IObs = zeros(NO*NK,1);
ObsJacMF = zeros(NO*NK,NEC);
% Part 3: the iterative algorithm
for iter = 1:Niter
    % 3.1 calculate the internal fields
    for loopK = 1:NK
        Mat = block_matrix_gen(epsilon,FileNames{loopK,1});
        [L,U] = lu(Mat);
        Ei = PlaneWave(Fine_grid,Karray(loopK));
        TempE = L\Ei;
        E = U\TempE;
        ES = ...
            Block_Scattered_field(epsilon,E,FileNames{loopK,2});
        save(TempFile{loopK},'L','U','E','ES');
        index1 = (loopK-1)*NO+1;
        index2 = NO*loopK;
        IObs(index1:index2) = ES;
    end
    % 3.2 calculate the Jacobian
    for loopK = 1:NK
        load(TempFile{loopK});
        index1 = (loopK-1)*NO+1;
        index2 = NO*loopK;
        IntJac = BlockJacNL(Fine_grid, E, L, U, epsilonC, ...
            IEtable, Karray(loopK), CFine_rad, ...
            FileNames{loopK,1});
        ObsJac = BlockAssembleJacobian(IntJac, E, epsilonC, ...
            IEtable, Fine_grid, Obs_pts, Karray(loopK), ...
            CFine_rad, FileNames{loopK,2});
        ObsJacMF(index1:index2,:) = ObsJac;
    end
    % 3.3 compute the data misfit
    delta_obs = Ref_Obs - IObs;
    Misfit(iter) = sum(abs(delta_obs).^2)/(NO*NK);
    % 3.4 separate the Jacobian into real and imaginary parts
    RMObs = zeros(2*NO*NK,1);
    RMJac = zeros(2*NO*NK,NEC);
    RMObs(1:(NO*NK),:) = real(delta_obs);
    RMObs((NO*NK+1):(2*NO*NK),:) = imag(delta_obs);
    RMJac(1:(NO*NK),:) = real(ObsJacMF);
    RMJac((NO*NK+1):(2*NO*NK),:) = imag(ObsJacMF);
    % 3.5 delta epsilon calculated here
    delta_epsilon = ...
        (alpha*RMJac'*RMJac/(NO*NK) + ...
         beta*FDmatrix'*FDmatrix/NEC) \ ...
        (alpha*RMJac'*RMObs/(NO*NK) - ...
         beta*FDmatrix'*FDmatrix*epsilonC/NEC);
    % 3.6 permittivity updated here
    epsilonC = delta_epsilon + epsilonC;
    for loop = 1:NEC
        epsilon(IEtable{loop}) = epsilonC(loop);
    end
    save(TempFile2{iter},'epsilonC')
end

4.7 Inversion function with multiple beta values

Listing 4.13 performs the algorithm shown in Figure 4.1. It uses the BasicInversion function for the greyed-out box in Figure 4.1.

Listing 4.13 complete script

cellN = 'cascade3';
% load the reference observations
searchstring = sprintf('c:\\model\\fourcellobs2.mat');
if (~exist(searchstring))
    load 4_parameters2;
end
load(searchstring)
% generate grids
radius = 1;
Irad = 1/9;
Mrad = 1/41;
Igrid = square_grid(radius,Irad);
Mgrid = square_grid(radius,Mrad);
Correct_rad = sqrt(4/pi)*Mrad;
ETable = NetList(Igrid,Irad,Mgrid,Mrad);
NF = size(Mgrid,1);
NC = size(Igrid,1);
% set up matrices: generate some filenames
filename = cell(NK,2);
for loopK = 1:NK
    filename{loopK,1} = sprintf('c:\\model\\FWMAT%02i',loopK);
    filename{loopK,2} = sprintf('c:\\model\\OBMAT%02i',loopK);
end
% initialize the functions for calculating scattered
% and internal fields
for loopK = 1:NK
    Init_Block_matrix_gen(Mgrid,Karray(loopK), ...
        Correct_rad,filename{loopK,1});
    Init_Block_Sc_field(Mgrid,ObsPts, ...
        Karray(loopK),Correct_rad,filename{loopK,2});
end
% declare the beta values to use
BV = [10*(8/9).^[0:60] 0 0 0];
NB = size(BV,2);
epsilonStart = 2.5*ones(NC,1);
% generate the flatness matrix
NeiList = Neighbors(Igrid,Irad);
FDmatrix = gradientonesided(NeiList,Irad);
Niter = 1;
alpha = 1;
for loopB = 1:NB
    beta = BV(loopB);
    epsilonStore = ...
        sprintf('C:\\%sOS\\epsilon%05i_',cellN,loopB);
    MisfitStore = ...
        sprintf('C:\\%sOS\\Misfit%05i',cellN,loopB);
    % perform the inversion for a single beta
    [epsilonC, Misfit] ...
        = InversionWithFlatnessModel(RefObs, ... % reference data
        filename, ...     % filenames for the pre-calculated
                          % matrices
        'c:\\temp', ...   % where to store temporary files
        radius, ...       % parameters of the cylinders
        Irad, ...         % the size of the coarse tessellation
        Mrad, ...         % the size of the fine tessellation
        Igrid, Mgrid, ETable, ...
        Karray, ...       % an array of wave numbers
        epsilonStart, ... % starting model, determined separately
        ObsPts, ...       % observation points
        Niter, ...        % number of iterations
        epsilonStore, ...
        alpha, beta, FDmatrix);
    epsilonStart = epsilonC;     % cascading
    save(MisfitStore,'Misfit');  % store the misfit function
                                 % for analysis
end

4.8 Summary

In this chapter the implementation of the inversion algorithm is presented. The implementation is separated into several functions, which allows for easy debugging and testing of the algorithm.
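The per-iteration update of Equation 3.22 and the β-cascading outer loop of Listing 4.13 can be condensed into a sketch (Python/NumPy; `run_inversion` is a hypothetical stand-in for a call to the single-beta inversion function):

```python
import numpy as np

def update_step(J, resid, F, eps, alpha, beta, n_data, n_cells):
    """One regularised update (Equation 3.22): solve
    (alpha/N J^T J + beta/M F^T F) d_eps
        = alpha/N J^T (d_ref - d) - beta/M F^T F eps,
    where J and resid have already been split into real and
    imaginary parts and F is the flatness matrix."""
    A = alpha * J.T @ J / n_data + beta * F.T @ F / n_cells
    b = alpha * J.T @ resid / n_data - beta * F.T @ F @ eps / n_cells
    return np.linalg.solve(A, b)

def cascade(betas, eps0, run_inversion):
    """Outer loop: rerun the fixed-beta inversion for each value of
    a decreasing beta schedule, warm-starting from the previous
    recovered model (the 'cascading' of Listing 4.13)."""
    eps = eps0
    for beta in betas:
        eps = run_inversion(eps, beta)
    return eps
```

Warm-starting each β from the previous model is what makes the decreasing schedule behave like a cooling process rather than a set of independent inversions.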
The program has not been optimized for speed or memory, but attempts to reuse calculations where possible.

Chapter 5: Numerical experiments using the tomography algorithm

5.0 Introduction

In this chapter the functional minimization defined in Equation 3.1 is investigated. The parameter β in this equation is varied in magnitude, as is the number of iterations. β represents the relative weighting between the data misfit term and the flatness of the predicted model. The investigation is separated into experiments, organized into five sets, each exploring a different aspect of the inversion algorithm. The first three sets of experiments share common parameters, as presented in Section 5.1, and the first two sets share 3 common test cases. The first set of experiments keeps β constant as a function of iteration. The second set decreases β as a function of iteration; decreasing β is analogous to cooling in simulated annealing. The third set introduces noise to the most complex test case of the second set of experiments. The fourth set explores the issue of moving to a case where there are more inversion cells than observations, known as the under-determined case. The fifth set repeats the fourth with a two-blob permittivity distribution.

Table 5.1: Summary of experiments for Chapter 5

Experimental set | Permittivity distributions | Number of experiments | Description
1                | Figures 5.4, 5.5, 5.6      | 21 | β is fixed in these experiments.
2                | Figures 5.4, 5.5, 5.6      | 6  | β is decreasing in these experiments.
3                | Figure 5.6                 | 1  | Noise is added to the signal.
4                | Figure 5.22                | 1  | β is decreasing, and a finer tessellation is used.
5                | Figure 5.25                | 1  | Same as above, but with 2 blobs.

5.1 Setup for the experimental procedure

For the experiments in Chapter 5, the d_ref vectors are numerically generated using Richmond's Method.
The permittivity distributions for the numerically generated scattered fields are chosen such that the permittivity distribution can be described exactly by the inversion tessellations; that is, the tessellation is a region consisting only of a collection of squares approximating a circle. This eliminates any noise that would arise from the inability of the tessellation cells to model the permittivity distribution exactly. For all experiments, 6 frequencies are used, and a circular cylinder with a radius of 1 λ forms the outer boundary of the object. Table 5.2 summarizes the properties of experimental sets 1, 2 and 3. The properties for experimental sets 4 and 5 are presented in Section 5.7 and Section 5.8. Note that none of these experiments represent far-field scattering.

Table 5.2: Parameters for experimental sets 1, 2 and 3

Parameter                          | Value
Cylinder radius                    | 1 λ
k values                           | 0.80·2π, 0.88·2π, 0.96·2π, 1.04·2π, 1.12·2π, 1.20·2π
Inversion cell tessellation radius | 1/9 λ
Modeling cell tessellation radius  | 1/41 λ
Incident wave                      | e^(jkx)
Observation locations              | -90 to 90 degrees with 6 degrees of separation, ρ = 1.5 λ

The relevant geometry is shown in Figure 5.1, which represents the cells corresponding to possible permittivity distributions. The permittivity value is assumed to be constant within each region. Each square region represents an inversion cell, with length 2/9 λ. The regions are generated with the square_grid function described in Chapter 4.

Figure 5.1: Inversion cells for experimental sets 1, 2 and 3.

Figure 5.2 represents the same region as Figure 5.1. The cells in Figure 5.2 are used in Richmond's Method for forward modeling. The permittivity and electric field are assumed to be constant in each region.
Each square region represents a modeling cell, with length 2/41 λ. The regions are generated with the square_grid function as described in Chapter 4.

Figure 5.2: Modeling cells for experimental sets 1, 2 and 3.

Figure 5.3 illustrates the superposition of the inversion regions and modeling regions. The figure represents the mapping concept presented in Section 4.1. The square regions represent the inversion cells and the x's represent the centers of the modeling cells. The x's within a given square all share the same permittivity value.
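The tessellations in Figures 5.1 and 5.2 are produced by the square_grid routine from Chapter 4. A minimal Python sketch follows; the keep-a-cell-when-its-centre-lies-inside-the-circle rule is an assumption, though with radius 1 and cell radius 1/9 it reproduces the 69-cell inversion grid used by the test cases below.

```python
def square_grid(radius, cell_rad):
    """Centres of square cells (half-side cell_rad) approximating a
    disc: lay a lattice of pitch 2*cell_rad and keep every centre
    that falls inside the circle of the given radius."""
    coords = []
    x = -radius + cell_rad
    while x < radius:
        coords.append(x)
        x += 2 * cell_rad
    return [(x, y) for y in coords for x in coords
            if x * x + y * y <= radius ** 2]
```

The same routine serves both grids; only the cell radius changes (1/9 λ for inversion, 1/41 λ for modeling).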
Figure 5.3: Superposition of modeling regions and inversion regions.

5.2 Test cases 1, 2 and 3

Three test cases are used for experimental sets 1 and 2. The properties of the test cases are summarized in Table 5.1. The first test case is shown in Figure 5.4. It has 1 cell with a permittivity of 1, and 68 cells with a permittivity of 2.5. The distribution is asymmetric. The test case is used in experimental sets 1, 2 and 3.

Figure 5.4: Test case 1; the background has a permittivity of 2.5 and the perturbation has a permittivity of 1.

The second test case is shown in Figure 5.5. It has 4 cells with a permittivity of 1, and 65 cells with a permittivity of 2.5. The distribution is asymmetric. The test case is used in experimental sets 1, 2 and 3.

Figure 5.5: Test case 2; the background has a permittivity of 2.5 and the perturbation has a permittivity of 1.

Test case 3 is shown in Figure 5.6. It has 9 cells with a permittivity of 1, and 60 cells with a permittivity of 2.5. The distribution is symmetric. The distribution is used in experimental sets 1, 2, 3 and 4.

Figure 5.6: Test case 3; the background has a permittivity of 2.5 and the perturbation has a permittivity of 1.

5.3 Norms for assessing the quality of the solution

Two norms can be defined to help assess the quality of the solution. Because the measured scattered field is simulated, the exact permittivity distribution corresponding to the scattered field is known. By knowing the permittivity distribution, Equation 5.1 can be defined. Equation 5.1 is a measure of the difference between the recovered permittivity distribution and the exact permittivity distribution.

\Phi_2 = \left\| \varepsilon - \varepsilon_{exact} \right\|^2   (5.1)

Ideally the value of Φ₂ is zero at the end of the algorithm. This term is called the modeling error. The second norm is a measure of the difference between the measured scattered field and the predicted scattered field. This is known as the variation and is shown in Equation 5.2.
\Phi_M = \left\| d_{ref} - d \right\|^2   (5.2)

5.4 Experimental set 1

For the experiments in experimental set 1, β is held constant as a function of iteration and the algorithm runs for 9 iterations. The parameters in Table 5.2 are used, the modeling regions are discretized according to Figure 5.2, and the inversion cells are discretized according to Figure 5.1. Seven values of β are used for the three test cases described in Section 5.2, for a total of 21 experiments. The values of β used are 100, 10, 1, 0.1, 0.01, 0.001 and 0. For all the cases a flat starting model with a permittivity of 2.5 is assumed.

Figure 5.7: Variation plot after 9 iterations as a function of β for experimental set 1, test cases 1, 2 and 3.

The graph in Figure 5.7 is the plot of the variation for the experiments in experimental set 1 after the ninth iteration. The lines connect experiments that share the same test case. The significant features of the graph include the decreasing variation with decreasing β, and the sudden increase in the variation value when β is equal to zero.

Figure 5.8: Modeling error after 9 iterations as a function of β for experimental set 1, test cases 1, 2 and 3.

Figure 5.8 is a plot of the modeling error at the ninth iteration of each experiment. The lines connect experiments that share the same test case. Note that the modeling error decreases as β decreases, and then suddenly increases when β is set to zero. Examination of test case 3 indicates that the variation is not a good measure of divergence. The variation at β = 0 is large but finite; its magnitude is of the same order as the value of the variation at β = 100. The modeling error at β = 0, however, is very large. Clearly, the solution has diverged, yet the variation value remains finite. When β = 0 the inverse problem is ill-posed. Without regularization the inversion algorithm is expected to diverge when the initial model is significantly different from the true model.
This implies that the inclusion of the flatness term in the minimization helps find a stable solution to test case 3. For test case 2, the values of the variation and modeling error are studied as a function of iteration for β = 0 and β = 0.001. This case is examined closely because of the sudden increase in modeling error: it is not clear from looking at Figures 5.7 and 5.8 whether test case 2 is converging at β = 0.

Figure 5.9: Recovered permittivity distribution for experimental set 1, test case 2 and β = 0.

Figure 5.9 shows the permittivity distribution recovered in test case 2 at the ninth iteration. The surface is bumpy and the peak permittivity value is about 5. This is quite different from the actual permittivity distribution.

Figure 5.10: Recovered permittivity distribution for experimental set 1, test case 2 and β = 0.001.

Figure 5.10 is the recovered permittivity distribution for β = 0.001 at the ninth iteration. The scaling of the graph is different from that of Figure 5.9 to accentuate the features of each graph. Figure 5.10 looks much more similar to Figure 5.6 than Figure 5.9 does. This implies that the solution produced with a little regularization is better than the solution with no regularization for test case 2. Figure 5.11 examines the behavior of the variation as a function of iteration for the cases β = 0 and β = 0.001 for test case 2.

Figure 5.11: Variation as a function of iteration for experimental set 1, test case 2, β = 0 and β = 0.001.

For test case 2, the β = 0 line does converge; however, the β = 0.001 line converges much more rapidly in Figure 5.11 and its final variation value is lower than that of the β = 0 line. This further implies that, when β = 0, a local minimum has been reached. The regularization helps avoid the local minimum because the local minimum has significantly more structure than the global minimum.
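The two diagnostics used throughout this chapter can be computed directly. A small Python sketch follows; the normalisations are assumptions here — Φ_M mirrors the per-datum Misfit of Listing 4.12, while Φ₂ is taken as a plain sum of squares:

```python
def modeling_error(eps_rec, eps_exact):
    """Phi_2 (Equation 5.1): squared misfit between the recovered
    and exact permittivity distributions."""
    return sum(abs(a - b) ** 2 for a, b in zip(eps_rec, eps_exact))

def variation(d_ref, d_pred):
    """Phi_M (Equation 5.2): squared misfit between the measured and
    predicted scattered fields, divided by the number of data points
    as in the Misfit computed inside the inversion function."""
    return sum(abs(a - b) ** 2 for a, b in zip(d_ref, d_pred)) / len(d_ref)
```

Because the scattered field is complex, both diagnostics use the complex magnitude, so they remain real and non-negative.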
The behavior of the variation suggests that slowly decreasing β leads to a better solution for test case 2 and test case 3 compared to setting β = 0 directly.

5.5 Experimental set 2

For the second set of experiments β decreases as a function of iteration. For each of the three test cases, β is decreased in 2 ways. For the first method, β is decreased after every iteration. For the second method, β is decreased after every third iteration. There are a total of six experiments. The values of β used in both methods are 100, 10, 1, 0.1, 0.01, 0.001 and 0. The parameters in Table 5.2 are used for both methods. A flat starting model of permittivity 2.5 is used.

Figure 5.12: Variation plot for experimental set 2, method 1 (decreasing β after every iteration).

Figure 5.12 is a plot of the variation for the first method of decreasing β. The variation decreases after every iteration. At the 7th iteration, the variation is not zero, although from the graph it may appear so.

Figure 5.13: Variation plot for experimental set 2, method 2 (decreasing β after every third iteration).

Figure 5.13 is a plot of the variation for the second method of decreasing β. Each line represents a specific test case. Similar to Figure 5.12, the variation decreases after every iteration, with large decreases whenever the value of β is changed. The variation at the 20th iteration is nearly zero for all three test cases.

Figure 5.14: Modeling error for experimental set 2, method 1 (decreasing β after every iteration).

Figure 5.14 plots the modeling error for the case where β is decreased after every iteration. The modeling error decreases after every iteration.
The misfit value at the seventh iteration is clearly non-zero for test case 3.

Figure 5.15: Modeling error for experimental set 2, method 2 (decreasing β after every third iteration).

Figure 5.15 is a plot of the modeling error for the case where β is decreased after every third iteration. Both methods of decreasing β improve the variation and modeling error of the final solution. For method 2, where β is decreased after every third iteration, the computed solution comes very close to the exact solution. The average error in test case 1 is on the order of 10^-4, with similarly small errors for test cases 2 and 3. The variation and modeling error terms also exhibit the steepest drop when the β value is changed. The modeling error changes very little in Figure 5.15 while β is held constant, which implies that the system reaches a minimum value for that particular β quickly.

The first method, where the value of β decreases after every iteration, has a larger modeling error. Note that the first method requires 3 times less computation than the second method. The solution from method 1 is not as good as the solution from method 2; method 1 corresponds to cooling the system too fast. Examining the variation and modeling error, these experiments show the same behavior as the first set of experiments: the variation and modeling error decrease after every iteration. The final value of the modeling error for test case 3 is lower in method 2 than in method 1.

Figure 5.16: Image of the permittivity distribution for experimental set 2, test case 3, method 1, at iteration 7.

Figure 5.16 is a surface plot of the permittivity distribution for test case 3 of method 1 at the seventh iteration. The surface looks very rough, although the major feature of test case 3 is visible.

Figure 5.17: Image of the permittivity distribution for experimental set 2, test case 3, method 2, at iteration 21.
Figure 5.17 is a surface plot of the permittivity distribution of test case 3 of method 2 at the 21st iteration. The solution resembles the exact solution. Figure 5.17 resembles Figure 5.6 more than Figure 5.16 does, demonstrating that decreasing β slowly can find a better solution than decreasing β too quickly.

5.6 Experimental set 3

Experimental set 3 uses the same cooling concept as experimental set 2. There is only one experiment in experimental set 3. A noise signal with a standard deviation of 0.2 is added to the scattered field of test case 3. This is approximately 20% of the maximum signal. In the presence of noise the cooling needs to be slower, so more β values are used in this case than in experimental set 2. The value of β decreases after every third iteration. The values 100, 50, 10, 5, 1, 0.5, 0.1, 0.05, 0.01, 0.005, 0.001 and 0 are used. The parameters in Table 5.2 are used. A flat starting model of permittivity 2.5 is used.

Figure 5.18: Variation plot for experimental set 3 (20% noise), test case 3. Note the non-zero variation value at the last iteration.

Figure 5.18 is a plot of the variation. The value of the variation decreases after every iteration, with the sharpest decreases when the value of β changes. Because of the noise, the value of the variation never reaches zero. When β is set to zero, the variation exhibits a small increase. For the similar test case in experimental set 2, the variation was near zero; the noise is clearly the cause of this behavior.

Figure 5.19: Modeling error plot for experimental set 3 (20% noise), test case 3. Note the jump in modeling error at iteration 30.

Figure 5.19 is a plot of the modeling error. When the value of β is set to zero, a sudden jump occurs in the modeling error. There is a corresponding jump in the variation plot.
However, the jump in the variation plot occurs later.

Figure 5.20: Permittivity distribution for experimental set 3, test case 3, at iteration 30.

Figure 5.20 is a plot of the permittivity distribution at the 30th iteration and Figure 5.21 is a plot of the permittivity distribution at the 31st iteration. Figure 5.20 looks much closer to Figure 5.6 than Figure 5.21 does. Both permittivity distributions display the major feature of a perturbation in the middle. In the presence of noise, small changes to the distribution are not recoverable. Figure 5.19 clearly illustrates that the regularization parameter cannot be turned off: when the parameter is turned off, the system converges to a solution completely different from the exact solution. The function Φ₂ decreases with iteration, implying the solution is improving. Only when regularization is removed does Φ₂ increase, which is opposite to the desired behavior. In the noiseless experiment the exact solution is found when β is set to 0; in this experiment, however, the exact solution was not recovered, due to the inability of the model to match the scattered field exactly. The regularization cannot be turned off in the presence of noise. Examining the variation in Figure 5.18, it can be seen that the equilibrium variation term is approximately equal to the variation of the noise signal, and the data misfit remains small even after β has been set to zero. This is because the permittivity distribution in Figure 5.21 produces a scattered field that matches the noisy scattered field better than the scattered field from the permittivity distribution in Figure 5.20 does. This is expected from the results of Chapter 2, which illustrated that two homogeneous cylinders with significantly different permittivities can generate very similar scattered fields; the difference in their permittivity can be smaller than the noise.
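The noisy data of experimental set 3 can be generated along these lines (Python/NumPy sketch; applying independent Gaussian noise of the stated standard deviation to the real and imaginary parts is an assumption):

```python
import numpy as np

def add_noise(scattered, std, rng=None):
    """Add zero-mean complex Gaussian noise with the given standard
    deviation per component to a scattered-field vector."""
    rng = np.random.default_rng() if rng is None else rng
    shape = np.shape(scattered)
    return scattered + std * (rng.standard_normal(shape) +
                              1j * rng.standard_normal(shape))
```

With std = 0.2, roughly 20% of the peak signal here, the equilibrium variation settles near the noise level, consistent with the behavior observed in Figure 5.18.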
Figure 5.21: Permittivity distribution for experimental set 3, test case 3, iteration 31.

5.7 Experimental set 4

Experimental set 4 explores the case where there are more parameters than data points. There is one experiment in experimental set 4, and β is decreased every iteration. Experimental set 4 uses different parameters from the previous three sets of experiments. The permittivity distribution is shown in Figure 5.22. Again, numerically calculated data are used, and the inversion tessellation can represent the distribution exactly. The parameters are presented in Table 5.3.

Figure 5.22: Permittivity distribution for experimental set 4. The perturbation has permittivity 1 and the background has permittivity 2.5.

The value of β is decreased more slowly than before. The need for more β values is expected in this case because the higher number of inversion cells makes the system more sensitive to small differences. From experimental experience it is known that more β values need to be used. Forty-one β values are used, providing a slow convergence to the solution.

Table 5.3: Parameters for experimental set 4

Parameter | Value
Cylinder radius | 1 λ
k values | 0.80·2π, 0.88·2π, 0.96·2π, 1.04·2π, 1.12·2π, 1.20·2π
Inversion cell tessellation radius | 1/19 λ
Modeling cell tessellation radius | 1/51 λ
Incident wave | e^(jkx)
Observation locations | −90 to 90 degrees with 6 degrees of separation, at radius ρ = 1.5 λ
β values | 10.0000, 8.7500, 7.6563, 6.6992, 5.8618, 5.1291, 4.4880, 3.9270, 3.4361, 3.0066, 2.6308, 2.3019, 2.0142, 1.7624, 1.5421, 1.3493, 1.1807, 1.0331, 0.9040, 0.7910, 0.6921, 0.6056, 0.5299, 0.4636, 0.4057, 0.3550, 0.3106, 0.2718, 0.2378, 0.2081, 0.1821, 0.1593, 0.1394, 0.1220, 0.1067, 0.0934, 0.0817, 0.0715, 0.0626, 0.0547, 0.0479
Iterations per β | 1
Starting model | Flat model of permittivity 2.5

Figure 5.23 is a plot of variation with respect to iteration. The variation decreases with every iteration.
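The forty-one β values in Table 5.3 form a geometric sequence: each value is 0.875 times the previous one, starting from 10. The schedule can therefore be generated rather than tabulated by hand; a small sketch, with the ratio inferred from the tabulated values:

```python
import numpy as np

# Geometric cooling schedule inferred from Table 5.3:
# beta_i = 10 * 0.875**i for i = 0 .. 40.
betas = 10.0 * 0.875 ** np.arange(41)

# The first values are 10.0, 8.75, 7.65625, ... and the last is
# approximately 0.0479, matching the table to four decimal places.
print(betas[:3], betas[-1])
```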
Figure 5.23: Variation plot for experimental set 4, experiment 1. Note that at iteration 41 the variation value is off the scale presented but finite, approximately 0.5.

Figure 5.24 is a plot of the modeling error. Notice the sudden increase in the modeling error when β is set to zero.

Figure 5.24: Modeling error plot for experimental set 4, experiment 1.

From Figures 5.23 and 5.24, it can be inferred that the solution is getting closer with each iteration; however, when the regularization is dropped to zero, the solution changes rapidly. This is due to the ill-posedness of the problem. If regularization is never turned off, an approximate solution can be recovered. When regularization is turned off, the modeling error becomes large while the data misfit remains relatively small. This is a result of the system being under-determined. The permittivity values become very large, and the modeling grid is no longer adequate for the high permittivity values, causing Richmond's Method to fail. This is the reason why the variation value remains small.

5.8 Experimental set 5

Experimental set 5 is the same as experimental set 4, but with a different permittivity distribution, shown in Figure 5.25. The permittivity distribution contains two blobs whose permittivity differs from that of the background.

Figure 5.25: Permittivity distribution with two blobs different from the background.

In Figure 5.26 the variation demonstrates the same behavior as in the previous experiment: the variation decreases until β is turned off, and the system diverges when β is turned off.

Figure 5.26: Variation plot for the 2 blob case.

Figure 5.27 is a plot of the modeling error.
It exhibits the same behavior as experimental set 4: the modeling error decreases until β is turned off.

Figure 5.27: Modeling error plot for the 2 blob case.

Figure 5.28 represents the recovered permittivity distribution at the 41st iteration, one iteration before the regularization is turned off. The major features in the reference distribution are clearly visible.

Figure 5.28: Recovered distribution at the 41st iteration.

5.9 Conclusion

In this chapter the behavior of the inversion algorithm is examined. It is shown that the addition of the flatness term aids in the search for the solution. In cases where there is a large perturbation from the starting model, the regularization yields a better solution than the unregularized solution. It has also been shown that the exact solution can be found for over-constrained cases if β is slowly decreased. The regularization imposed initially favors a flat starting model. For under-constrained cases, where the number of observations is less than the number of inversion cells, the algorithm is still able to recover the major features in the permittivity distribution, provided a very slow cooling algorithm is used.

Chapter 6: Summary and future work

6.0 Summary

This thesis examines the problem of recovering the permittivity distribution of an infinite cylinder.
The scattered field is influenced by the permittivity distribution; therefore information about the distribution can be recovered from the scattered field. An optimization technique is used to recover this distribution. Many optimization techniques have been applied to the permittivity distribution recovery problem, and each technique has its strengths. The strength of the technique presented in this thesis is that there is a physical interpretation to the regularization, which differs from previous work in this area.

The inference, from scattering observations, of the parameters of a homogeneous, infinite, circular cylinder is studied in Chapter 2. To solve the scattering problem a Bessel function expansion is used, and the boundary value problem for each Bessel function mode is solved. A quasi-periodic behavior in the scattered field is discovered. This quasi-periodic behavior is problematic because, for a single frequency, two cylinders with different permittivity values can cause similar scattered fields. For optimization this implies that the wrong permittivity value can be found by the algorithm. This non-uniqueness problem is solved by the use of multiple frequencies in the recovery algorithm.

In Chapter 3, Richmond's Method for forward modeling is presented and an inverse modeling algorithm is developed. Richmond's Method is used to model the scattering behavior of an inhomogeneous cylinder. From Richmond's Method the Jacobian of the predicted scattered field with respect to a guessed permittivity distribution is calculated. Knowing the Jacobian matrix, a linear approximation to the predicted scattered field can be generated. To develop the inverse modeling algorithm an objective functional is minimized. The objective functional is the sum of two norms: the first norm term measures the difference between the measured scattered field and the predicted scattered field, and the second norm term measures the structure in the guessed permittivity distribution.
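In symbols, the objective functional described above is Φ(m) = ‖d − F(m)‖² + β‖Wm‖², where d is the measured scattered field, F(m) the predicted scattered field for permittivity model m, and W a flatness (first-difference) operator. A toy real-valued sketch of the linearized minimization with a cooling β schedule follows; the linear forward model, the random stand-in Jacobian, and the matrix sizes are illustrative assumptions, not the thesis implementation (where the Jacobian comes from Richmond's Method and the fields are complex):

```python
import numpy as np

def flatness_matrix(n):
    """First-difference operator: ||W m||^2 measures structure in m."""
    W = np.zeros((n - 1, n))
    for i in range(n - 1):
        W[i, i], W[i, i + 1] = -1.0, 1.0
    return W

rng = np.random.default_rng(1)
J = rng.standard_normal((12, 5))       # stand-in Jacobian: 12 observations, 5 cells
m_true = np.array([2.5, 2.5, 1.5, 2.5, 2.5])
d = J @ m_true                         # noiseless "measured" data

W = flatness_matrix(5)
m = np.full(5, 2.5)                    # flat starting model
for beta in [1.0, 0.1, 0.01, 0.0]:     # cooling schedule ending at beta = 0
    r = d - J @ m                      # data residual
    # One linearized step: minimize ||r - J dm||^2 + beta ||W (m + dm)||^2
    # over the step dm; the minimizer is given by a single linear solve.
    lhs = J.T @ J + beta * W.T @ W
    rhs = J.T @ r - beta * W.T @ (W @ m)
    m = m + np.linalg.solve(lhs, rhs)
```

Because this toy forward model is linear, the Jacobian is constant and the final β = 0 step recovers the model exactly for this over-determined, noiseless system; in the non-linear problem the Jacobian must be recomputed from the current permittivity distribution at every iteration.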
To minimize the objective functional, the predicted scattered field is replaced by a linear approximation. The permittivity distribution that minimizes the objective functional with the linearized scattered field can be found with a simple matrix inverse operation. To find the minimum of the original objective functional, this process is repeated using the permittivity distribution obtained from the previous iteration. The iteration process is repeated several times, and a minimum of the objective functional occurs where the numerically computed permittivity distribution and the exact permittivity distribution are close to each other. The iterative algorithm is implemented in MATLAB and the implementation is presented in Chapter 4.

The weighting between the structure in the permittivity distribution and the scattered field is explored in Chapter 5. A homogeneous model with permittivity 2.5 is used as the starting model. A constant weighting with respect to iteration is explored first, and it reveals that regularization aids in the convergence of the solution: when the weighting is large, the resulting solution looks flat; when it is small, a solution close to the exact one is found by the algorithm. Next, the weighting of the model structure is decreased with iteration, which is analogous to cooling in simulated annealing. The cooled system can now recover the permittivity distribution. When the system cools too quickly, the correct solution is not recovered; rather, a solution close to the exact solution is found. In the presence of noise, or in a case where there are more degrees of freedom than observations, the weighting of the model structure cannot be zero, implying the necessity of a norm-based regularization.

6.1 Future work

There are three directions for future work.
The first direction is to implement a real system that uses the algorithm to examine long cylindrical structures, such as trees. Such a system currently exists [Dotto, 2003]. There are a number of challenges associated with the measurement of the electric field. These challenges include precision measurement of the incident and scattered fields, precision measurement of the geometry, and simulating the signal received by an antenna. A dipole antenna can be used to simplify the conversion between electric field and received signal. The current algorithm needs to be augmented by weighting the data with the measurement errors and including a χ² misfit criterion to terminate the inversion iteration.

The second direction is to extend the algorithm theoretically. The algorithm can be extended to work with complex permittivity as well as three spatial dimensions. The speed of the algorithm can also be improved; one possible way to increase the speed is to explore an iterative alternative to the LU decomposition. A different forward modeling algorithm can also be used to reduce the computation time.

A third direction is the exploration of non-linear inversion and optimization in different electromagnetic problems, such as antenna design. Different model norms can be used in this type of parametric inversion.

Bibliography

K. Dotto, "Development of an ultra wide band antenna", preliminary draft of Ph.D. thesis, unpublished, 2003.

E. J. Bond, X. Li, S. C. Hagness, and B. D. Van Veen, "Microwave Imaging via Space-Time Beamforming for Early Detection of Breast Cancer", IEEE Transactions on Antennas and Propagation, Vol. 51, pp. 1690-1705, August 2003.

S. Caorsi, M. Donelli, D. Franceschini, and A. Massa, "A New Methodology Based on an Iterative Multiscaling for Microwave Imaging", IEEE Transactions on Microwave Theory and Techniques, Vol. 51, pp. 1162-1173, April 2003.

S. Y. Semenov, R. H. Svenson, A. E. Bulyshev, A. E. Souvorov, A. G. Nazarov, Y. E. Sizov, V. G. Posukh, A. Pavlovsky, P. N. Repin, A. N. Starostin, B. A. Voinov, M. Taran, G. P. Tatsis, and V. Y. Baranov, "Three-Dimensional Microwave Tomography: Initial Experimental Imaging of Animals", IEEE Transactions on Biomedical Engineering, Vol. 49, pp. 55-63, January 2002.

M. Tanaka and J. Ogata, "Fast Inversion Method for Electromagnetic Imaging of Cylindrical Dielectric Objects with Optimal Regularization Parameter", Vol. 84, pp. 2560-2565, December 2001.

C. J. Lin and C. Chui, "Image Reconstruction of Buried Dielectric Cylinder by TE Wave Illumination", IEEE MTT-S LMOC 2001 Proceedings, pp. 213-216, 2001.

S. Y. Semenov, R. H. Svenson, A. E. Bulyshev, A. E. Souvorov, A. G. Nazarov, Y. E. Sizov, A. V. Pavlovsky, V. Y. Borisov, B. A. Voinov, G. I. Simonova, A. N. Starostin, V. G. Posukh, G. P. Tatsis, and V. Y. Baranov, "Three-Dimensional Microwave Tomography: Experimental Prototype of the System and Vector Born Reconstruction Method", IEEE Transactions on Biomedical Engineering, Vol. 46, pp. 937-946, August 1999.

M. Pastorino, "Short-Range Microwave Inverse Scattering Techniques for Image Reconstruction and Applications", IEEE Transactions on Instrumentation and Measurement, Vol. 47, No. 6, pp. 1419-1427, 1998.

C. G. Farquharson and D. W. Oldenburg, "Non-Linear Inversion Using General Measures of Data Misfit and Model Structure", Geophysical Journal International, Vol. 134, pp. 213-227, 1998.

S. Hagness, A. Taflove, and J. Bridges, "Two-Dimensional FDTD Analysis of a Pulsed Microwave Confocal System for Breast Cancer Detection: Fixed-Focus and Antenna-Array Sensors", IEEE Transactions on Biomedical Engineering, Vol. 45, pp. 1470-1479, December 1998.

A. Souvorov, A. Bulyshev, S. Y. Semenov, R. H. Svenson, A. G. Nazarov, Y. E. Sizov, and G. P. Tatsis, "Microwave Tomography: A Two-Dimensional Newton Iterative Scheme", IEEE Transactions on Microwave Theory and Techniques, Vol. 46, pp. 1654-1659, November 1998.

I. R. Ciric and Y. Qin, "Self-Adaptive Selection of the Regularization Parameter for Electromagnetic Imaging", IEEE Transactions on Magnetics, Vol. 33, No. 3, pp. 1556-1559, 1997.

W. R. Hendee and P. N. T. Wells, The Perception of Visual Information, Springer-Verlag, New York, 1997.

G. H. Golub and C. F. Van Loan, Matrix Computations, Third Edition, Baltimore, Maryland: The Johns Hopkins University Press, 1996.

I. R. Ciric and Y. Liu, "An Adaptive Algorithm for Electromagnetic Imaging", IEEE Transactions on Magnetics, Vol. 32, No. 3, pp. 1303-1305, 1996.

W. C. Chew and J. H. Lin, "A Frequency-Hopping Approach for Microwave Imaging of Large Inhomogeneous Bodies", IEEE Microwave and Guided Wave Letters, Vol. 5, No. 2, pp. 439-441, 1995.

S. Caorsi, G. L. Gragnani, et al., "Microwave Imaging Method Using a Simulated Annealing Approach", IEEE Microwave and Guided Wave Letters, Vol. 1, No. 11, pp. 331-333, 1991.

S. Caorsi, G. L. Gragnani, et al., "A Multiview Microwave Imaging System for Two-Dimensional Penetrable Objects", IEEE Transactions on Microwave Theory and Techniques, Vol. 39, No. 5, pp. 845-851, 1991.

N. Joachimowicz, C. Pichot, and J. Hugonin, "Inverse Scattering: An Iterative Numerical Method for Electromagnetic Imaging", IEEE Transactions on Antennas and Propagation, Vol. 39, No. 12, pp. 1742-1752, 1991.

Y. M. Wang and W. C. Chew, "An Iterative Solution of the Two-Dimensional Electromagnetic Inverse Scattering Problem", International Journal of Imaging Systems and Technology, Vol. 1, No. 1, pp. 100-108, 1989.

D. T. Borup and O. P. Gandhi, "Calculation of High-Resolution SAR Distributions in Biological Bodies Using the FFT Algorithm and Conjugate Gradient Method", IEEE Transactions on Microwave Theory and Techniques, Vol. 33, pp. 417-419, May 1985.

M. Slaney, A. C. Kak, and L. Larsen, "Limitations of Imaging with First-Order Diffraction Tomography", IEEE Transactions on Microwave Theory and Techniques, Vol. 32, pp. 860-874, August 1984.

J. H. Wilkinson and G. Peters, "The Least Squares Problem and Pseudo-Inverses", The Computer Journal, Vol. 13, No. 3, August 1970.

J. H. Richmond, "Scattering by a Dielectric Cylinder of Arbitrary Cross-Section Shape", IEEE Transactions on Antennas and Propagation, Vol. 13, pp. 334-341, May 1965.

R. F. Harrington, Time-Harmonic Electromagnetic Fields, McGraw-Hill Book Company, Toronto, 1961.
Item Metadata

Title | Cylindrical RF tomography
Creator | Lam, Kim
Date Issued | 2003
Extent | 7467079 bytes
Genre | Thesis/Dissertation
Type | Text
FileFormat | application/pdf
Language | eng
Date Available | 2018-05-07
Provider | Vancouver : University of British Columbia Library
Rights | For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use.
IsShownAt | 10.14288/1.0065522
URI | http://hdl.handle.net/2429/15083
Degree | Master of Applied Science - MASc
Program | Electrical and Computer Engineering
Affiliation | Applied Science, Faculty of; Electrical and Computer Engineering, Department of
Degree Grantor | University of British Columbia
GraduationDate | 2004-05
Campus | UBCV
Scholarly Level | Graduate
AggregatedSourceRepository | DSpace