UBC Theses and Dissertations

The registration of multimodality medical scans Zuk, Torre Dana 1993

THE REGISTRATION OF MULTIMODALITY MEDICAL SCANS

By Torre Dana Zuk
B. Sc. (Computer Science) University of Alberta

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTERS OF SCIENCE in THE FACULTY OF GRADUATE STUDIES (COMPUTER SCIENCE)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
OCTOBER 1993
© TORRE DANA ZUK, 1993

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Computer Science
The University of British Columbia
2075 Wesbrook Place
Vancouver, Canada
V6T 1Z1

Date:

Abstract

The registration, or alignment, of multiple medical scans can provide a plethora of new information about a patient beyond what can be gained from studying the individual scans in isolation. This added information can increase the accuracy in specifying regions of interest, help in radiation therapy planning, and aid in a myriad of other treatment and diagnostic techniques. However, the registration of the scans is very complex, and no panacea exists for adjusting for the numerous variations and distortions that can occur between scans.

This thesis briefly summarizes the current technology of MRI, CT, PET, and SPECT imaging. Possible uses and benefits of using registered data sets are then explained in this context. This is followed by a structured description of the general problem of data registration and a framework that can be used to solve it. Current methods that have been utilized by various researchers are surveyed.
The implementation of a surface matching method and the adaptations and innovations that were utilized are described in detail. These are analyzed under diverse test conditions involving both real and simulated data. The surface matching methods that were developed provide an accurate means for the registration of rigid body transformations, although many problems remain to be solved before a robust solution exists.

Table of Contents

Abstract
List of Tables viii
List of Figures ix
Acknowledgment xii

1 An Introduction to Registration 1
  1.1 Reader's Guide 1
  1.2 Variations Between Scans 2
    1.2.1 Patient Positioning 2
    1.2.2 Sampling 3
    1.2.3 Noise 3
    1.2.4 Imaging Configuration 3
    1.2.5 Time-series Studies 3
    1.2.6 Different Modalities 4
  1.3 The Goals of Registration 4
    1.3.1 Increasing the Precision of Regions of Interest 4
    1.3.2 Radiation Treatment Planning 4
    1.3.3 Localization of Functional Data 5
    1.3.4 Accurate Comparison of Regions Over Time 5
    1.3.5 Comparison of Patient Data to Statistical Norms 5
    1.3.6 Removal of Distortions 6
  1.4 Method of Registration 6
    1.4.1 Segmentation of the Data 6
    1.4.2 Matching of the Features 7
    1.4.3 Verification of the Solution 7
  1.5 Overview of the Thesis 8

2 Imaging Physics and Technology 10
  2.1 Electromagnetics 10
  2.2 Volumetric Representations 11
  2.3 CT 13
    2.3.1 The Physical Nature of CT 13
    2.3.2 Current Technology and Trends 14
  2.4 MRI 15
    2.4.1 The Physical Nature of MRI 15
    2.4.2 Current Technology and Trends 17
  2.5 SPECT 17
    2.5.1 The Physical Nature of SPECT 18
    2.5.2 Current Technology and Trends 18
  2.6 PET 19
    2.6.1 The Physical Nature of PET 19
    2.6.2 Current Technology and Trends 20

3 The Theory of Registration 21
  3.1 Definitions 21
  3.2 Types of Transformations 23
    3.2.1 Rigid Body Motion 23
    3.2.2 Affine 25
    3.2.3 Projective 25
    3.2.4 Non-linear 25
  3.3 Variations in Data 26
    3.3.1 Corrected Distortions 27
    3.3.2 Uncorrected Distortions 28
    3.3.3 Variations of Interest 30
    3.3.4 Summary of Variations 30
  3.4 Conditions for Finding a Solution 31

4 The Registration Process 32
  4.1 Segmentation 32
    4.1.1 Feature Types 33
    4.1.2 Choice for Features 34
    4.1.3 Techniques 36
  4.2 Matching 37
    4.2.1 Features 38
    4.2.2 Comparison Function 39
    4.2.3 Heuristics 43
  4.3 Verification 46
    4.3.1 Comparison Function Residual 46
    4.3.2 Simulated Data 46
    4.3.3 Patient Data 48
  4.4 Case Studies 51
    4.4.1 Iterative Manual Alignment 51
    4.4.2 Solving Equations Formed From Corresponding Points 52
    4.4.3 Iterative Surface Alignment 53

5 Implementations and Innovations 55
  5.1 Platform 55
  5.2 Direct Point Matching 57
    5.2.1 Segmentation 57
    5.2.2 Matching 58
    5.2.3 Verification 59
  5.3 Iterative Surface Matching 60
    5.3.1 Segmentation 60
    5.3.2 Matching 62
    5.3.3 Verification 71

6 Analysis of Surface Matching 76
  6.1 Accuracy of Segmentation 76
    6.1.1 Types of Segmentation Error 77
    6.1.2 Segmentation and Transformations 78
  6.2 Theoretical Analysis 79
    6.2.1 Comparison Function 79
    6.2.2 Bounds of the Parameter Space 83
    6.2.3 The Nature of the Search Space 83
  6.3 Simulations: Transformed Real Features 87
    6.3.1 Methods 88
    6.3.2 Thresholds 90
    6.3.3 Hierarchies of Scale and Thresholding 92
    6.3.4 Distribution of Starting Points 95
    6.3.5 Multiple Searches 96
  6.4 Simulations: Transforming Real Data 97
  6.5 Patient Data Studies 100
    6.5.1 Registering to Norms 100
    6.5.2 MRI-MRI Registrations 101

7 Conclusions and Future Work 107
  7.1 Experimental Conclusions 107
  7.2 Extensions and Future Development 108
  7.3 Applications of the Implemented Techniques 109
  7.4 Summary 110

Bibliography 111
Glossary 119

List of Tables

2.1 Examples of SPECT Radionuclides 18
2.2 Examples of PET Radionuclides 20
4.3 A List of Feature Types 35
4.4 Comparison of the Choices for Intrinsic Features 36
4.5 Features and Appropriate Comparison Functions 40
4.6 Overview of Pelizzari et al.'s Results 54
6.7 Distance Transform Errors 80
6.8 Errors in an Anisotropic Euclidean Chamfer Distance Transform 82
6.9 Comparison of Search Methods with a Noise Free Search Space 90
6.10 Comparison of Distance Transform Thresholds on Matching 92
6.11 Comparison of Thresholding in Conjunction with Multiple Scales 93
6.12 Results of Using Multiple Searches 96
6.13 Simulation Results with Transformed Real Data 100
6.14 Analysis of Interpolation for Reducing the Effects of Anisotropy 103

List of Figures

1.1 The Registration Process 7
2.1 The Electromagnetic Spectrum 11
2.2 Sample Types 12
2.3 Basic Forms of the 3D Scans 12
2.4 A CT Slice 14
2.5 An MRI Slice 16
2.6 A SPECT Slice 18
2.7 A PET Slice 19
3.1 The Default Labeling of Co-ordinate Axis 22
3.2 An Example of Registration (Rigid Body) 22
3.3 Categories of Transformation Elasticity 23
4.1 A Taxonomy of Feature Types 34
4.2 The Formation of a 2D Distance Transformation 42
4.3 A Comparison of Features 42
5.1 A WiT Igraph for Registration 57
5.2 PET/SPECT Segmentation 62
5.3 PET Data 63
5.4 PET Segmentation 63
5.5 MRI Data 64
5.6 MRI Segmentation 64
5.7 Actual Distance 67
5.8 DT Function 67
5.9 DT with Threshold at 10 68
5.10 DT with Threshold at 2 68
5.11 The "Colorwashing" of an Image 73
5.12 A WiT Igraph for "Colormerging" 74
5.13 A T1 Weighted MRI Scan 74
5.14 A T2 Weighted MRI Scan 74
5.15 Alternate Scan Line Merging of T1 and T2 Weighted MRI 75
6.1 Minor Error in Different Segmentations of a PET Scan 78
6.2 A 3D Chamfer Euclidean Distance Transform (101x101x101) 81
6.3 Comparison Function Over X Rotations Only (c(Fs, f(rotx, Fd))) 85
6.4 c(Fs, f(roty, Fd)) 86
6.5 c(Fs, f(rotz, Fd)) 86
6.6 c(Fs, f(transx, Fd)) 87
6.7 c(Fs, f(transy, Fd)) 87
6.8 Comparison Function Over Z Translation Only (c(Fs, f(transz, Fd))) 88
6.9 A Floating Point Anisotropic Chamfer Distance Transform 89
6.10 The Effect of Thresholding with Minor Error 91
6.11 Accuracy of Method A 94
6.12 Accuracy of Method B 94
6.13 Accuracy of Method C 95
6.14 Accuracy of Method D 95
6.15 Histogram of Actual Error Testing Method B (d = 2.05, σd = 1.51) 96
6.16 Histogram of the Comparison Function Testing Method B 97
6.17 Search Sensitivity to Initial Parameters 98
6.18 A WiT Igraph for the Registration of Simulated Data 99
6.19 A Simulated Scan 101
6.20 Segmented (Dynamic) Surface 101
6.21 Result of Registration 102
6.22 Difference of Static and Registered Sets 102
6.23 PET Scan "X" 103
6.24 "X" Registered to Normal 103
6.25 PET Scan "Y" 104
6.26 "Y" Registered to Normal 104
6.27 T1 Weighted MRI Data 105
6.28 T1 Weighted MRI Segmentation 105
6.29 T2 Weighted MRI Data 105
6.30 T2 Weighted MRI Segmentation 106

Acknowledgment

The following people and groups should be acknowledged for their help in the direction and/or development of this thesis and the research behind it. Their contribution was greatly appreciated.

I would like to thank: my parents for their continuing support throughout, Brian Johnston (UBC¹) for his help with various aspects of the thesis, Dr. Stella Atkins (SFU²) and Dr. Kellogg Booth (UBC) for supervision and help in making revisions, Dr. Alain Fournier (UBC) for supplying constructive criticism, Terry Arden (Logical Vision Ltd.) for adapting the WiT software package to help in our research, Dr. Anna Celler (Dept. of Nuclear Medicine, VGH³) and Dr. Ken Poskitt, M.D. (B.C.'s Children's Hospital) for providing practical information, The PET Group (TRIUMF⁴) for the informative seminars that provided much needed background knowledge, NSERC⁵ for their Post Graduate Scholarship and funding under grant #8020, and Larry Palazzi (UBC) for his help in the layout and formatting of the thesis.

I would also like to thank all my friends and colleagues here at UBC for the assistance they have provided and for making my time here so enjoyable.

¹University of British Columbia
²Simon Fraser University
³Vancouver General Hospital
⁴Tri-University Meson Facility
⁵Natural Sciences and Engineering Research Council

Chapter 1

An Introduction to Registration

Human knowledge and human power meet in one; for where the cause is not known the effect cannot be produced. Nature to be commanded must be obeyed; and that which in contemplation is as the cause is in operation as the rule.
- F. Bacon, Novum Organum, 1620, First Book of Aphorisms, III

Registration is the process of finding the condition for correct alignment or proper relative positioning of one medical scan with reference to another. In this context the alignment is of one imaged object relative to another imaged object's reference frame. Any two medical scans may be registered to compare and contrast the unique information present in each. This correlation can provide important insights into both data sets.

Registration requires the removal of certain variations, while ignoring others. When done correctly it can facilitate the process of diagnosis and qualitative analysis. The process can be structured into the phases of segmentation, matching, and verification. This chapter will clarify the what, why, and how of registration and then provide an overview of the rest of the thesis.

1.1 Reader's Guide

Due to the interdisciplinary nature of this thesis it is impossible to present all the relevant material.
Therefore the reader is encouraged to read the provided references in order to gain a deeper understanding of the concepts. For this reason, when multiple articles are referenced on a single topic they are listed from general to specific. Words that are marked with a "†" are defined in the glossary at the end of the thesis. Chapters 2, 3, and 4 are intended to provide the general background necessary for understanding the specific implementations and analyses in Chapters 5 and 6. A reader unfamiliar with medical imaging is advised to read Chapter 2 first and then return to Chapter 1, while those with the background knowledge may skip Chapter 2 with no loss of continuity.

1.2 Variations Between Scans

There can be many types of variations between the data sets. The registration process must be able to handle each one in an acceptable manner. Voxels† in different modalities that represent corresponding physical spaces may have no direct functional relation to each other. Even two scans of a single modality will never be identical. Within the same scan, voxel values will depend on the patient's position and the configuration parameters of the individual scanning machine. In almost every case the two scans to be registered are not identical and may differ as a result of a variety of known or unknown factors.

1.2.1 Patient Positioning

The position of the patient in the imaging device is the major variation for which registration will try to compensate. During successive scans the patient may be situated at a different orientation and location in the machine, or the patient may be in a different machine altogether.

It is possible to fix the patient into a common 3D reference frame for each scan, but this usually requires a device be physically attached to the patient.
This may be traumatic, especially if the head is clamped into a non-moving position for an extended period of time. Another more flexible way of establishing the patient's reference frame is through the use of fiducial markers†. They are limited by the accuracy of their placement, they can be costly, and they cannot be introduced into a scan retrospectively. In another technique, by Saeed and Durrani [77], a previous MRI scan is used to realign the patient into the same position during a preliminary scanning procedure.

1.2.2 Sampling

The voxel values calculated by the imaging device are representative of a discrete volume. The exact spatial domains of these voxels are almost never duplicated from scan to scan due to differences in sampling rates and voxel dimensions. The exact spatial resolution will even vary with position in a single scan.

1.2.3 Noise

Every scan will be subject to a variable amount of noise. The noise may be due to the physics of the imaging technique, external sources, or patient movement, all of which can result in noise and artifacts† of various frequencies and amplitudes. Often this noise may be reduced by filtering or similar techniques, but it cannot be eliminated entirely.

1.2.4 Imaging Configuration

The specific imaging device parameters in effect during a scan can introduce large variations in corresponding voxels of the same modality. For example, an MRI can be specially configured to precisely image the brain's ventricles†, but then may not provide good detail of other structures. With SPECT and PET the dose of the radiopharmaceutical is yet another variable that affects the scans.

1.2.5 Time-series Studies

In many cases a single patient will require multiple scans over varied periods of time. The dynamics that occur may then be structural and/or physiological depending on treatments, disease progression, trauma, or other changes. One example is the response of a tumour to radiotherapy.
The arbitrary variations of a region of interest (ROI) in each scan may create a situation that must be accommodated on a case-by-case basis.

1.2.6 Different Modalities

The different imaging modalities have no intrinsic correspondence between voxels that represent the same physical space. The physical property being measured usually has little relation to that used in the other modalities. The only common features are often the boundaries between internal structures, which manifest themselves as gradients.

1.3 The Goals of Registration

Numerous insights that can be attained from registered data sets are pushing its development. The diverse advantages provided by registration lie in the increased ability to quantify regions and monitor changes. Multimodality registration of functional† and structural† data provides a more complete understanding of the patient, which is essential for an optimal diagnosis.

1.3.1 Increasing the Precision of Regions of Interest

The complementary information provided in the different imaging modalities will often allow a more accurate delineation of an arbitrary ROI. The boundary of a tumour may be more clearly observed in CT used in conjunction with a contrasting agent than in MRI [55], while MRI may better contrast tumour and edema to gray and white matter [15]. In a limited study by Hill [34] in which two surgeons and a radiologist assessed registered CT and MRI from seven patients, it was shown that the combined data provided significant new information clarifying the ROIs in each case.

1.3.2 Radiation Treatment Planning

Radiation treatment involves the irradiation of a specific ROI with high energy photons (X-rays). A CT volume is required for radiation treatment planning to provide the electron density function for calculation of the photon beam specifications [48].
An MRI scan may be used to more accurately segment the normal critical structures and identify the treatment volume; these can then be transferred to the CT after registration [48]. Similarly, PET or SPECT ROIs can be transferred to CT for beam attenuation calculations.

1.3.3 Localization of Functional Data

The registration of functional (SPECT or PET) and anatomical† data (CT or MRI) allows direct correlation between the structure and the observed functionality. The comparison of the low resolution functional data directly to corresponding structures allows for a more accurate diagnosis, especially in patients with large anatomic variations (congenital† or disease related) [73, 47, 19]. The localization† of the functionality can also guide the surgeon in deciding on a biopsy site [38].

1.3.4 Accurate Comparison of Regions Over Time

For the analysis of a treatment's effect on a certain patient, it is important to spatially correlate the changes within a ROI [68]. One example is that a tumour's size and position will be studied over time to see how a patient is responding to radiation therapy. The size of the tumour can be calculated in each volume of the time series, but this does not provide an accurate spatially-dependent response. With registered data sets, ROIs can be superimposed over each other for direct comparison. This increases the precision of treatment monitoring [55].

1.3.5 Comparison of Patient Data to Statistical Norms

A new idea which has not yet gained widespread use is the direct comparison of the patient's data to accumulated statistical norms. By registering patient data to a uniform average data set, abnormalities may be more easily assessed.
This registration is different from the other types in that the intra-patient variability must be accommodated by the registration process, enabling a meaningful comparison of the patient's data to the norms.

1.3.6 Removal of Distortions

The registration of a distorted data set to one free of the distortion may allow some distortions to be eliminated. MRI, which suffers from many non-linearities, could be registered to CT to remove them, or a SPECT scan could be reconstructed† with fewer artifacts if registered to a CT scan in order to utilize attenuation† values for improved scatter correction.

1.4 Method of Registration

The goal of registration is to find a transformation that enables an ROI to be compared directly and accurately to the same region in another scan. The process of reaching this goal can be broken down into three phases:

1. Segmentation - features that will be compared are selected
2. Matching - calculation of the registering transformation
3. Verification - a visual or numerical check on the accuracy of the alignment

The general flow of the registration process through these three phases is shown in Figure 1.1.

1.4.1 Segmentation of the Data

Segmentation is usually employed to classify data for analysis, but registration requires segmentation to isolate features that exist in both data sets for matching purposes. This allows the data to be matched in a more controlled and precise way.
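The three phases can be sketched end to end in a toy example. The sketch below is not the thesis's code: the function names, the threshold-based feature extraction, and the restriction to a pure translation (solved directly via centroids) are all illustrative simplifications.

```python
import numpy as np

def segment(volume, threshold):
    # Phase 1: segmentation -- here the features are simply the coordinates
    # of voxels whose intensity exceeds a threshold.
    return np.argwhere(volume > threshold).astype(float)

def match(source_pts, target_pts):
    # Phase 2: matching -- for a pure translation the direct (non-iterative)
    # solution is the difference of the two feature centroids.
    return target_pts.mean(axis=0) - source_pts.mean(axis=0)

def verify(source_pts, target_pts, shift, tol=1e-6):
    # Phase 3: verification -- residual of a simple comparison function
    # (mean distance after applying the transformation; assumes the two
    # feature lists are in corresponding order).
    residual = np.linalg.norm(source_pts + shift - target_pts, axis=1).mean()
    return residual < tol

vol = np.zeros((8, 8, 8))
vol[2, 3, 4] = vol[5, 6, 1] = 1.0                      # two "feature" voxels
moved = np.roll(vol, shift=(1, 0, 2), axis=(0, 1, 2))  # same object, translated

src, dst = segment(vol, 0.5), segment(moved, 0.5)
t = match(src, dst)                                    # -> array([1., 0., 2.])
assert verify(src, dst, t)
```

Real matching must of course also handle rotation, unknown point correspondence, and noise; those complications are exactly what the heuristics of Chapters 4 through 6 address.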
The user's actions and knowledge required for segmentation vary greatly, from completely manual methods, which are time consuming and work intensive, all the way up to fully automatic ones, which require no operator control at all.

Figure 1.1: The Registration Process

1.4.2 Matching of the Features

Matching is the process of discovering what transformation can align the segmented features, and thus the entire scans. Initially a comparison function must be chosen for matching the two sets of features, and it will be highly dependent on the types of features chosen for segmentation. Based on the function, a heuristic for finding the transformation will attempt to determine the optimal match. Numerous different heuristics exist for solving for the transformation directly or iteratively converging to a solution. The large number of variations that can occur makes this a very difficult part of the process.

1.4.3 Verification of the Solution

The validity of a solution can be checked by using an appropriate display technique to inspect the match visually, although this may require a radiologist's skills. Another way to check the solution can be the proximity of the comparison function to the "optimal" value; this theoretical verification is often incorporated into iterative heuristics as a stopping criterion. In real applications the exact accuracy of the registration is unknown, as all of the variations cannot be removed.

1.5 Overview of the Thesis

The next chapter provides a brief summary of the background material needed to place subsequent chapters in the proper context.
It touches on the physics utilized for the creation of the three dimensional data sets, and introduces much of the vocabulary that will be assumed later. A reader familiar with the physics of CT, MRI, PET, and SPECT may skip this chapter with no loss of continuity.

Chapter 3 provides the theoretical foundation for an accurate formulation of the problems of registration. It provides a categorization of the possible registering transformations, as well as the types of distortions that exist within and across modalities. Preconditions for an exact solution are discussed. It is not meant to be an in-depth analysis, but contains references to more thorough treatments of the material within the literature.

A detailed description of the method of registration is provided in Chapter 4. This contains both theoretical and application-driven aspects of the process, and covers the implementation details from the general to the specific. The basic constructs for segmentation, matching, and verification are detailed in this chapter, as well as a brief discussion of three registration implementations typical of the major genres (manual, point mapping, and surface matching).

Chapter 5 describes the implementations that form the main thrust of the thesis. The registration process was initially based on point features, and then modified to use surfaces. This required the construction of segmentation, matching, and verification tools, all of which were constructed in a data flow environment for image processing. A complete set of data flow tools for the registration of multi-modality data was created.

An investigative analysis of the implemented ideas is summarized in Chapter 6. A formal declaration of the registration parameters and the search space is given in order to clarify the results.
Simulated and real data were utilized in this process to quantify the theoretical and practical performance of the various implemented techniques.

The final chapter presents conclusions and provides the direction that future developmental and analytic work may take. The algorithms developed provided good results, and many of the design decisions are in agreement with the recent Computer Vision in Radiology (COVIRA¹) registration project [2, 1]. The testing of the applications by research groups will further exercise the implementations, and provide useful knowledge for future performance improvements.

¹Computer Vision in Radiology, an Advanced Informatics in Medicine (AIM) Programme of the European Communities Commission

Chapter 2

Imaging Physics and Technology

Now to descend to the particular tenets of vain philosophy derived to the Universities, and thence into the church, partly from Aristotle, partly from blindness of understanding; I shall first consider their principles. There is a certain Philosophia Prima, on which all other philosophy ought to depend; and consisteth principally, in right limiting of the significations of such appellations, or names, as are of all others the most universal: which limitations serve to avoid ambiguity, and equivocation in reasoning; and are commonly called definitions; such as are the definitions of body, time, place, matter, form, essence, subject, substance, accident, power, act, finite, infinite, quantity, quality, motion, action, passion, and divers others, necessary to the explaining of a man's conceptions concerning the nature and generation of bodies.
- T. Hobbes, Leviathan, 1651, Part IV, "The Kingdom of Darkness", Chapter 46

Each modality reveals various unique physical properties of the regions it scans. In order to measure and localize these diverse qualities, scanning devices utilize a wide range of electromagnetic (EM) radiation. Different modalities sample volumetric regions in distinctive ways and amounts.
This chapter introduces the physical principles underlying each modality and briefly describes the technology that implements it.

2.1 Electromagnetics

CT, MRI, SPECT, and PET are non-invasive† methods. Internal information is carried externally for measurement by EM waves. This look into the inner nature of the body is dependent on accurately recording and quantifying these EM signals. The position in the EM spectrum of the waves used by the different modalities is shown in Figure 2.1 [30].

The EM radiation carries energy (the signal) in a wave of alternating electric and magnetic fields whose energy is expressed in Joules by the formula:

E = hν

where E is the energy, h is Planck's constant (≈ 6.63 × 10⁻³⁴ J·s), and ν is the frequency.

Figure 2.1: The Electromagnetic Spectrum (in order of increasing frequency: radio waves (MRI), microwaves, infrared, visible light, ultraviolet, X-rays (CT), and gamma rays (SPECT and PET))

Therefore the relative energy of the EM used for each modality increases from left to right in Figure 2.1.

2.2 Volumetric Representations

As opposed to traditional images that represent 2D data (pixels), the scanning methods of MRI, CT, SPECT, and PET create 3D data (voxels). This data is often viewed as a two-dimensional plane slicing through the volume, but the elements still represent discrete three-dimensional samples.

Samples can be taken from an arbitrary variety of different configurations which then form the voxels of the data set. The samples represent a function of the actual data over their spatial domain. With adjacent voxels the sampling functions commonly overlap. This can be seen in Figure 2.2. The distance between the centres of the functions (d) represents the inter-voxel distance and may vary in all three dimensions.
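Concretely, when the inter-voxel distance d differs per axis, converting an integer voxel index to a physical position is just a per-axis scaling. The spacing values in this sketch are illustrative only, not taken from any particular scanner.

```python
import numpy as np

# Illustrative anisotropic inter-voxel distances: d may differ along each axis.
spacing_mm = np.array([0.9, 0.9, 5.0])   # d_x, d_y, d_z in millimetres

def voxel_to_mm(index):
    """Physical position of a voxel centre, taking voxel (0, 0, 0) as the origin."""
    return np.asarray(index, dtype=float) * spacing_mm

voxel_to_mm((10, 20, 3))   # -> array([ 9., 18., 15.])
```

Distance computations between features must use these physical coordinates rather than raw indices, or the coarse axis (here z) is badly under-weighted; the anisotropy issues analyzed in Chapter 6 stem from exactly this.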
The rectangular voxel is a gross abstraction and commonly represents only a tessellation of the sampling space.

Figure 2.2: Sample Types

The most prevalent data format contains planes (slices) of approximately adjacent voxels (Case 2) with gaps (Case 3) between the "slices". This is depicted as type A in Figure 2.3, with type B being contiguous 3D samples, which could possibly be formed by resampling type A. References to data resolution (|Vx| x |Vy| x |Vz|) indicate first the number of planar (intra-slice) voxels and then the number of slices (4x2x3 for Figure 2.3). The effective spatial resolution, or volume a voxel represents, is more complex, and it is stated as the full width of the point spread function† at half its maximum value (FWHM).

Figure 2.3: Basic Forms of the 3D Scans

Due to different spatial resolutions the data is commonly resampled into cubic form (Figure 2.3.B) for the simplification of manipulation and visualization techniques. The resampling process is based on a filter which determines the type of interpolation that is performed. The resampled data will not contain any information that was not inherent in the original data, but may contain less due to the approximations used for interpolation.

2.3 CT

Computed Tomography (CT) was the first 3D non-invasive medical imaging technique to be developed. By measuring the attenuation of X-rays through the body it provides the highest in-plane resolution of all the medical imaging techniques. It is the most widely used technique, and the least expensive. An example of the data created by CT is shown in Figure 2.4, which illustrates a single 512x512 axial† slice through the top of the brain. The smoothly curved structure around the bottom of the image is the external head holder used with the scanning device.
Please note that all images of medical data included in this thesis have been inverted (black ⇔ white, or 0 ⇔ 255) for better reproduction with the printer's greyscale gamut.

2.3.1 The Physical Nature of CT

CT utilizes an earlier imaging technology which provided 2D X-ray attenuation values. X-ray film records the amount of high energy electromagnetic radiation that passes through the exposed object. If numerous X-ray images are projected at different angles it becomes possible to solve for the volumetric (3D or voxel) representation of the attenuation values.

In CT imaging the 2D film is replaced by a 1D array of photon detectors. The X-ray beam is spread out in fan-like form for a single exposure. The source is then rotated in a circle so that numerous single X-rays are taken at different angles around the object. This rotation delineates the plane (slice) that is imaged. Numerous slices can then be acquired by repeating the process after moving the relative position of the camera.

The CT voxels represent a mathematically formed approximation of attenuation values for a discrete slice in 3-space. The slice is commonly computed (or reconstructed) from the radial samples by projecting lines back to the source, accumulating the counts for each voxel, and then filtering to account for artifacts. This technique is called "filtered back-projection" and it is the most popular. Another technique for reconstruction is the use of Fourier transforms according to the Fourier projection-slice theorem [57].

Figure 2.4: A CT Slice

2.3.2 Current Technology and Trends

Current CT commonly provides an intra-slice resolution of 512x512 voxels while the slice resolution is dependent on the number of times the planar process is repeated. Each voxel commonly has around a 0.5mm in-plane spacing and inter-slice spacing of 1 to 10mm [91].
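The back-projection step described in Section 2.3.1 can be illustrated in a few lines. This sketch is deliberately simplified: it omits the filtering stage (so it is plain, unfiltered back-projection) and uses nearest-neighbour detector sampling, neither of which a real reconstruction would do.

```python
import numpy as np

def backproject(sinogram, angles_deg, size):
    """Smear each 1D projection back across a size x size slice and average."""
    recon = np.zeros((size, size))
    centre = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size] - centre
    for readings, angle in zip(sinogram, np.deg2rad(angles_deg)):
        # Detector bin that each voxel projects onto for this view angle.
        t = xs * np.cos(angle) + ys * np.sin(angle) + centre
        bins = np.clip(np.round(t).astype(int), 0, size - 1)
        recon += readings[bins]          # accumulate counts along each ray
    return recon / len(angles_deg)

# A point source at the slice centre projects onto the central detector bin
# at every angle, so the back-projection is brightest at the centre voxel.
size, angles = 33, np.arange(0, 180, 6)
sino = np.zeros((len(angles), size))
sino[:, size // 2] = 1.0
recon = backproject(sino, angles, size)
assert recon.argmax() == (size // 2) * size + size // 2
```

Without the filtering step the reconstruction of a point is blurred into a 1/r halo, which is precisely why practical scanners filter each projection before smearing it back.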
The number of slices varies greatly depending on what is to be imaged, but commonly ranges from 10 to 40. The fine detail and relatively few distortions present in CT enable very detailed structures to be imaged. Contrasting agents may also be used to change the attenuation characteristics of regions of interest, thereby creating sharper contrast between different structures.

The latest development is the spiral CT. With this scanner a 2D array of sensors is rotated in a spiral (translated while rotating), which allows the generation of arbitrarily positioned slices. Other current work is on the creation of novel contrasting agents. More detailed background descriptions are provided by Bates [8], Vannier [91], Lancaster [53], and Kreel [52].

2.4 MRI

Magnetic Resonance Imaging (MRI) has recently become a widely accepted imaging technique. The use of low-energy EM radiation provides a safety advantage over and above its imaging capabilities, but future research may discover that the strong magnetic fields that are utilized have some negative effects. It also provides superior flexibility in the many possible imaging configurations. A typical slice of MRI data (256x256) is shown in Figure 2.5. It represents a sagittal† slice through a head revealing a large tumor below the posterior corpus callosum†.

2.4.1 The Physical Nature of MRI

MRI reveals the number of atoms with a chosen odd atomic number within a 3D sample. The physical basis for MRI is that the resonant frequency (Larmor frequency) of an atom in a magnetic field is dependent on the atomic nucleus and the strength of the applied magnetic field. Atoms are put in a strong magnetic field to align their magnetic moments† and then a selected EM pulse in the radio frequency (RF) range is used to excite them. The resonating atoms produce another RF signal, and in this way the concentration of a selected atom may be measured.
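As a concrete illustration of the Larmor relation, ν = γ̄·B₀, where γ̄ = γ/2π; the gyromagnetic ratio used below is the standard textbook value for the hydrogen nucleus, not a figure from the thesis.

```python
# Larmor frequency: nu = gamma_bar * B0, with gamma_bar = gamma / (2*pi).
GAMMA_BAR_1H = 42.58e6   # Hz per tesla, textbook value for the 1H nucleus

def larmor_hz(b0_tesla):
    """Resonant (Larmor) frequency of hydrogen at a given field strength."""
    return GAMMA_BAR_1H * b0_tesla

# At a typical 1.5 T field the hydrogen resonance is roughly 63.9 MHz,
# squarely in the radio-frequency band of Figure 2.1.
larmor_hz(1.5)   # -> ~63.9e6 Hz
```

The linear dependence on field strength is what makes spatial encoding possible: superimposing a known magnetic gradient makes the resonant frequency a function of position.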
The nature of the resonance is in two planes, each of which can be used for the measurement of atom concentration. This provides scans labelled T1 or T2, depending on the relative weighting of the two planes. If the resonation is measured as a function of the time it takes the magnetization to return to 63% of its equilibrium value (before excitation) it is called a T1 weighted scan. A T2 weighted scan is measured as the time it takes for the signal to decay to 37% of its maximum value. The different weightings provide slightly different information because they represent different magnetic properties that vary with tissue type [3].

Figure 2.5: An MRI Slice

3D spatial encoding of the excited resonant frequencies is performed by the use of a magnetic gradient, and frequency/phase variations of the pulse. The signals created are in the frequency domain and so the slices are reconstructed by the use of an inverse Fourier transform†. The fact that the EM triggering pulses are highly controllable allows arbitrary oblique slices to be reconstructed.

2.4.2 Current Technology and Trends

Currently only the concentration of 1H (Hydrogen) is measured because it has the strongest magnetic moment and thus provides the strongest signal. This provides very high resolution data, commonly around 256x256x100 with inter-voxel spacing of 1mm x 1mm x 2mm. The use of different signal parameters as well as the T1 and T2 weightings allows a greater control over the imaging process than is possible with other modalities.

A result of the push for faster scanning times is the Fast Low Angle Shot (FLASH) imaging method, which can provide images in the time frame of a second rather than a minute, but at a cost of a lower signal to noise ratio (SNR). Future developments may make the measurement of 31P (Phosphorus), 23Na (Sodium), 19F (Fluorine), and 13C (Carbon) more common, allowing a greater amount of functional data retrieval.
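The 63% and 37% figures above are simply the points t = T1 and t = T2 on the standard exponential relaxation curves. A minimal sketch (the millisecond values passed in are hypothetical, chosen only to exercise the formulas):

```python
import math

def t1_recovery(t, t1, m0=1.0):
    """Longitudinal magnetization regrowth after excitation: Mz(t) = M0 * (1 - exp(-t/T1))."""
    return m0 * (1.0 - math.exp(-t / t1))

def t2_decay(t, t2, m0=1.0):
    """Transverse signal decay after excitation: Mxy(t) = M0 * exp(-t/T2)."""
    return m0 * math.exp(-t / t2)

# At t = T1 the magnetization has recovered to 1 - 1/e of equilibrium,
# and at t = T2 the signal has decayed to 1/e of its maximum:
print(round(t1_recovery(900.0, 900.0), 3))  # -> 0.632 (the 63% figure)
print(round(t2_decay(100.0, 100.0), 3))     # -> 0.368 (the 37% figure)
```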
Further information can be found in articles by Jones [44], Hinshaw [35], Kreel [52], and Riederer [76].

2.5 SPECT

Single Photon Emission Computed Tomography (SPECT) provides a means to view the internal physiology of a patient. In vivo functional data may be measured by imaging specific metabolically related molecules. The molecules are made radioactive (labeled) and then their emissions can be observed externally. A single SPECT slice (64x64) is shown in Figure 2.6. It is taken from an axial Technetium series of the head.

2.5.1 The Physical Nature of SPECT

As opposed to CT, the source of EM radiation for SPECT is inside the body and the EM radiation is in a higher band of the spectrum. When radioactive source elements (radionuclides) are contained within specific molecules they form radiopharmaceuticals, which are injected into the body. These release radiation in the form of gamma rays (photons) which are recorded by external detectors. The direction of the incident photon is determined by using a device (called a collimator) that only allows perpendicular rays to enter the detector. Although there are many possible configurations of detectors, they typically form a 2D array, and so a single 360 degree rotation of the array around the body can enable a 3D volume reconstruction.

Figure 2.6: A SPECT Slice

The various radiopharmaceuticals that can be created add a new dimension to medical imaging. The level of emissions in a region reveals the uptake of the radioactive substance. Due to the use of high energy radioactivity, the process becomes dependent on the decay times of the radionuclides as well as the amount of relative uptake throughout the body.

2.5.2 Current Technology and Trends

The SPECT data is commonly of resolution 64x64x64 with voxel spacing from 6mm to 15mm. They have low SNRs, resulting in effective spatial resolutions of 9 to 18mm FWHM (full width at half maximum).
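The dependence on radionuclide decay times mentioned above follows the usual half-life law, A(t) = A0 · 2^(−t/T½). A quick sketch (the 6.0 h half-life is the Technetium value quoted in this chapter; the elapsed times are arbitrary):

```python
def remaining_fraction(elapsed_h, half_life_h):
    """Fraction of the initial activity remaining after elapsed_h hours:
    A(t)/A0 = 0.5 ** (t / half_life)."""
    return 0.5 ** (elapsed_h / half_life_h)

# 99mTc has a 6.0 h half-life, so activity halves every six hours.
print(remaining_fraction(6.0, 6.0))   # -> 0.5
print(remaining_fraction(12.0, 6.0))  # -> 0.25
```

This is why the elapsed time between radiopharmaceutical preparation and imaging directly scales the imaged values and the achievable SNR.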
Some current radionuclides are listed in Table 2.1 [21].

Radionuclide         Half-life (h)   Main Emission (keV)   Typical Use
99mTc (Technetium)   6.0             140                   brain perfusion†
123I (Iodine)        13.2            159                   brain perfusion
111In (Indium)       67.3            171, 245              inflammation
201Tl (Thallium)     73.1            68-82                 myocardial perfusion
133Xe (Xenon)        126.0           81                    lung scintigraphy†

Table 2.1: SPECT Radionuclides

The spatial resolution is limited mainly by the collimator. Future progress will provide higher resolution data, higher SNRs, and new radionuclides for monitoring various metabolic functions. For more information on SPECT, see discussions by Jaszczak [40], Peters [72], and Ell [21].

2.6 PET

Positron Emission Tomography (PET) records functional data in a similar manner to SPECT, but provides higher resolution images. PET is the most recent imaging technology and currently the most expensive to own and to operate. An illustration of a PET slice (100x100) is given in Figure 2.7, showing an axial slice of the head using a fluorine radionuclide.

Figure 2.7: A PET Slice

2.6.1 The Physical Nature of PET

PET, akin to SPECT, uses radiopharmaceuticals, but higher energy ones that emit positrons as they decay. The positrons travel a short distance until they hit an electron and then annihilate, releasing two photons (gamma rays) in opposite directions. These pairs are recorded by detectors and subsequent computer processing determines the direction of the incident rays. The counts of matching rays are then used to calculate slice data. Commonly a ring or rings of detectors are used to catch co-incident pairs in the plane(s) being imaged.

The use of higher energy radiation provides more emissions and thus more statistically accurate approximations can be made of the 3D distributions of the radionuclides. PET offers a more versatile range of radiopharmaceuticals than does SPECT.
A typical example is 18F-labeled deoxy-glucose, which can reveal important details of sugar metabolism.

2.6.2 Current Technology and Trends

The resolution of PET is around 100x100x20, with the planar (in-slice) voxel dimensions of approximately 5mm FWHM and slice thickness of 10mm FWHM. Examples of PET radionuclides are listed in Table 2.2 [40, 72].

Radionuclide      Half-life (m)   Emission (keV)   Typical Use
82Rb (Rubidium)   1.3             511              perfusion
15O (Oxygen)      2.0             511              tissue blood flow
13N (Nitrogen)    10.0            511              oxygen utilization
11C (Carbon)      20.0            511              drug labelling
68Ga (Gallium)    68.0            511              toxoplasmosis† vs. lymphoma†
18F (Fluorine)    111.0           511              glucose metabolism

Table 2.2: PET Radionuclides

Future research is increasing the scanning resolution and the range of radiopharmaceuticals. PET is inherently limited by the fact that a positron travels a variable distance before annihilating; therefore a spatial resolution of 2mm FWHM is a theoretical minimum [45]. For more information on PET see papers by Jaszczak [40], Peters [72], and Ell [21].

Chapter 3: The Theory of Registration

Philosophy is written in that vast book which stands forever open before our eyes, I mean the universe; but it cannot be read until we have learnt the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to understand a single word.
- G. Galileo, Il Saggiatore, Question 6

The underlying process of registration is the formation of a transformation to map one scan onto another. The type of transformation required depends on the variations between the two scans. By using an appropriate transformation that correctly models the variations present, it is possible to properly align the data.

The simplest registration transformation will require only translation and rotation to correct for patient positioning, and possibly scaling to correct for different sampling rates.
Some modalities may contain other variations that must be counteracted for a successful registration. The ability to find a solution and accurately quantify the error can be related to how well an individual case satisfies certain conditions.

3.1 Definitions

Volumetric registration between two medical scans can be defined as a spatial mapping. Let $p_{V_i}$ specify a function, such that given a scan ($V_i$) and a position in that scan $(x, y, z)$, it returns the 3D position in the real or physical coordinate system defined by the "imaged object". The registration is then the mapping between two scans $V_1$ and $V_2$ by a function $f$, given in equation form as

$$p_{V_1}(x, y, z) = p_{V_2}(f(x, y, z)),$$

where $f$ is a 3D spatial-coordinate transformation. Therefore $f = p_{V_2}^{-1} \circ p_{V_1}$. In layman's terms, it is the alignment of two scanned objects into the same physical space. The default labelling of the co-ordinate axes is provided in Figure 3.1. When the transformation maps set B onto set A, B will be called the dynamic set, and A the static set. An example registration is shown in Figure 3.2.

Figure 3.1: The Default Labeling of Co-ordinate Axis

An exact transformation function $f$ will not be discovered for most cases. The types of variations between $V_1$ and $V_2$ will determine the complexity of the function $f$. When dealing with actual data subject to noise and other distortions, a good approximation for $f$ is all that is feasible.

Figure 3.2: An Example of Registration (Rigid Body)

3.2 Types of Transformations

The transformation from one scan to another can be arbitrarily complex and can be characterized in a large number of ways. The transformation may be global and have a domain of the entire volume, or be local and affect only sub-regions of the volume.
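The relation p_V1(x, y, z) = p_V2(f(x, y, z)) and the composition f = p_V2^(-1) ∘ p_V1 can be illustrated with a toy one-dimensional example (the voxel spacings and origins below are invented, loosely evoking a 1 mm MRI grid and a 2 mm PET grid):

```python
def make_p(origin_mm, spacing_mm):
    """Return p (voxel coordinate -> physical position in mm) and its inverse."""
    p = lambda x: origin_mm + spacing_mm * x
    p_inv = lambda mm: (mm - origin_mm) / spacing_mm
    return p, p_inv

p_v1, _ = make_p(origin_mm=0.0, spacing_mm=1.0)         # scan V1: 1 mm voxels
p_v2, p_v2_inv = make_p(origin_mm=-10.0, spacing_mm=2.0)  # scan V2: 2 mm voxels, shifted origin

def f(x):
    """Registration map: voxel x in V1 and voxel f(x) in V2 name the same physical point."""
    return p_v2_inv(p_v1(x))

x = 30.0
assert p_v1(x) == p_v2(f(x))  # the defining equation holds by construction
print(f(x))  # -> 20.0
```

In 3D the same composition applies, with p and f acting on coordinate triples; the complexity of f grows with the variations between the two scans, exactly as the surrounding text describes.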
Global changes are commonly due to patient positioning, while local ones are usually more complex and can be caused by object deformation, joint movement, or other distortions. Local transformations may be handled by segmenting the data into smaller regions, each of which can then be handled individually, or with complex non-linear transformations which can affect each region differently. A major problem with local transformations is the discontinuities that may appear between local regions.

Geometric transformations have been grouped into four basic categories of elasticity as rigid, affine, projective, and non-linear by van den Elsen et al. [88]. Each category's class of transformations is theoretically contained in the next, hence curved transformations form the superset. A visual example of a global transformation of each type is shown in Figure 3.3.

Figure 3.3: Categories of Transformation Elasticity (none, rigid body, affine, projective, non-linear)

3.2.1 Rigid Body Motion

The most common and simplest transformation is rigid body motion, which Giancoli [30] defines as: "... By a rigid body we mean a body that has definite shape which doesn't change, so that the particles that make it up stay in fixed positions relative to each other." This results in motion that can be analyzed by rotation and translation about the center of mass. While the rotation and translation can correct for different patient positioning, the addition of scaling factors is necessary to account for the arbitrary sampling rates of voxels. Although an object may be manipulated about an arbitrary axis, the equivalent transformation may always be performed by three separate rotations about each of the x, y and z axes through the center of mass, and then a single translation in 3D.
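The decomposition just described (scale, then three axis rotations, then a single translation) can be sketched with homogeneous 4x4 matrices; this is a minimal numpy illustration using the column-vector convention, with arbitrary numeric values:

```python
import numpy as np

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def scaling(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1.0]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])

# Scale first (voxel size), then rotate about x, y, z, then translate.
M = translation(1, 0, 0) @ rot_z(np.pi / 2) @ rot_y(0.0) @ rot_x(0.0) @ scaling(2, 2, 2)
p2 = M @ np.array([1.0, 0.0, 0.0, 1.0])
print(np.round(p2[:3], 6))  # -> [1. 2. 0.]
```

The point (1, 0, 0) is scaled to (2, 0, 0), rotated 90 degrees about z to (0, 2, 0), and translated to (1, 2, 0); composing the five matrices once and reusing the product is what makes the homogeneous representation convenient.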
Baraff provides an in-depth presentation [6] of the various aspects of rigid body motion.

Rigid body motion can be represented in matrix form as a composition of scaling, rotations about each axis, and finally translation, expressed in homogeneous coordinates [25]:

$$\begin{bmatrix} x_2 \\ y_2 \\ z_2 \\ 1 \end{bmatrix} = T\, R_z R_y R_x S \begin{bmatrix} x_1 \\ y_1 \\ z_1 \\ 1 \end{bmatrix}$$

where

$$T = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad
R_z = \begin{bmatrix} \cos\theta_z & -\sin\theta_z & 0 & 0 \\ \sin\theta_z & \cos\theta_z & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},$$

$$R_y = \begin{bmatrix} \cos\theta_y & 0 & \sin\theta_y & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta_y & 0 & \cos\theta_y & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad
R_x = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta_x & -\sin\theta_x & 0 \\ 0 & \sin\theta_x & \cos\theta_x & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},$$

$$S = \begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

3.2.2 Affine

The next more general transformation is the affine, which allows for scaling and shearing in each dimension independently. It can be expressed in matrix form as:

$$\begin{bmatrix} x_2 \\ y_2 \\ z_2 \\ 1 \end{bmatrix} =
\begin{bmatrix} a_{11} & a_{12} & a_{13} & t_x \\ a_{21} & a_{22} & a_{23} & t_y \\ a_{31} & a_{32} & a_{33} & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ y_1 \\ z_1 \\ 1 \end{bmatrix}$$

An affine transformation preserves straight lines, and parallel lines remain parallel after transformation.

3.2.3 Projective

After a projective transformation parallel lines do not necessarily remain parallel. The expansion of the affine transformation to provide a projection (perspective) requires the full power of the homogeneous matrix representation. However the result will not be in regular 3-space, but this can be formed by dividing the $x'_2$, $y'_2$, $z'_2$ coordinates by the weighting factor $w'_2$ [25]:

$$\begin{bmatrix} x'_2 \\ y'_2 \\ z'_2 \\ w'_2 \end{bmatrix} =
\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix}
\begin{bmatrix} x_1 \\ y_1 \\ z_1 \\ 1 \end{bmatrix}, \qquad
\begin{bmatrix} x_2 \\ y_2 \\ z_2 \end{bmatrix} =
\begin{bmatrix} x'_2 / w'_2 \\ y'_2 / w'_2 \\ z'_2 / w'_2 \end{bmatrix}$$

Projective transformations are commonly used to map between 2D X-ray images and volume data [89]. These types of projections are commonly used in graphics applications to create realistic 2D images which portray the 3D nature of the data.

3.2.4 Non-linear

Non-linear transformations allow the mapping of lines to curves, and do not necessarily preserve any lines. There are numerous types of functions that map to curves, but polynomials are perhaps the simplest.
An arbitrary polynomial transformation may be performed by using functions of the form:

$$x_{V_2} = \sum_{k=0}^{n} \sum_{j=0}^{n-k} \sum_{i=0}^{n-k-j} a_{ijk}\, x_{V_1}^i\, y_{V_1}^j\, z_{V_1}^k$$

$$y_{V_2} = \sum_{k=0}^{n} \sum_{j=0}^{n-k} \sum_{i=0}^{n-k-j} b_{ijk}\, x_{V_1}^i\, y_{V_1}^j\, z_{V_1}^k$$

$$z_{V_2} = \sum_{k=0}^{n} \sum_{j=0}^{n-k} \sum_{i=0}^{n-k-j} c_{ijk}\, x_{V_1}^i\, y_{V_1}^j\, z_{V_1}^k$$

Although an infinite number of other non-linear transformations exist, many are too complex for practical use.

Often with complex variations between scans a single transformation will not properly transform the entire data set. This necessitates the use of local transformations that may provide increased control. Thin-plate splines are a family that provide a means for piecing together local transformations without discontinuities.

3.3 Variations in Data

Registration requires the removal of or compensation for distortions between data sets, in order to facilitate an accurate comparison. Distortions can be defined as the variations in the data that cause misregistration [14]. The removal of variations may also provide insight into other differences between the two scans. All variations can be broadly grouped into those which are inherent within a single scan (intrinsic), and those that occur between different scans of the same or differing modality (extrinsic). Intrinsic variations are noise, physical distortions of the imaging technique, and scanner calibration. Some extrinsic variations are patient positioning, physiological changes, and the differing physical nature of each modality.

Brown, in her survey of image registration techniques [14], describes three categories for the variations: corrected distortions, uncorrected distortions, and variations of interest. The definition of these variations and examples in their respective category or categories are provided in the next three subsections.
Many other corrected and uncorrected distortions exist due to the construction and associated physical limitations of the scanning machines.

3.3.1 Corrected Distortions

Corrected distortions are distortions that can be modelled. The model hopefully provides a means for correction of the distortion in the registration process.

1. All modalities

• different voxel sizes and sampling rates (intrinsic)
Multi-modality data sets will usually have different spatial resolutions and extents. In order to directly compare the data sets they must be mapped to a single viewing space. The scale transformation may be performed using a priori knowledge provided by the imaging device, or by including scale deformations in the transformation.

• body motion (intrinsic/extrinsic)
Rigid body motion can be corrected by the use of rotation, translation, and scaling to align the data. While the rigid body motion assumption often holds for the skull, it will not for many other regions that contain joints and soft tissue, and thus are more flexible. Motion within a single scan creates spatial artifacts, and when periodic can sometimes be isolated and corrected in the Fourier domain.

2. CT

• beam hardening (intrinsic)
The beam that passes through the center of a scanned object is usually of higher average energy than the beams passing through the periphery [3]. This is due to a greater amount of lower energy photons being absorbed, and results in higher relative attenuation values near the center of a scanned object (also known as cupping). Various reconstruction techniques can provide some corrective measures.

3. MRI

• object independent signal inhomogeneities (intrinsic)
Main-field inhomogeneities can be caused by shimming† in the electromagnets, abnormal magnetic gradients can occur due to improper calibration, and non-linear variations can be created by electromagnetic eddies†. These are related to the MRI configuration, and can be corrected if quantified correctly [20].
Various regularly shaped phantoms† can reveal stable object-independent distortion patterns.

• object inherent signal inhomogeneities (intrinsic)
These distortions can be due to the variation of the magnetic susceptibility of materials, chemical shifts, or flow shifts. With extra work, they can usually be removed satisfactorily using various techniques [83].

4. CT, SPECT, PET

• photon attenuation (intrinsic)
As electromagnetic waves travel they may be absorbed and refracted by various mediums.

• Compton scattering (intrinsic)
The photons can be deflected by collisions with electrons in the medium they pass through. This will result in lower energy photons travelling in different directions than the original photons.

3.3.2 Uncorrected Distortions

Uncorrected distortions are those that are difficult to model and so are often ignored, or handled separately. It is hoped that their effect will be on the same order of magnitude as the residual error from correcting the other distortions.

1. All Modalities

• partial volume effects (intrinsic)
The volume of a single sample will often encompass more than a single contiguous structure. In these cases the resulting voxel value represents a composition of all the tissue and/or bone types present within the sample.

2. Multi-modality

• different imaging physics (extrinsic)
Because the different physical properties in each modality are independent, there is no requirement of identical features existing in both.

3. MRI

• object dependent signal inhomogeneities (intrinsic)
These cannot be removed completely, although recent correction methods provide approximate results (see Corrected Distortions)¹. The variations may be from non-linearities in the gradient magnetic fields, or EM-eddies, or loss of signal due to foreign objects in the body.

4. SPECT and PET

• radionuclide decay and uptake (intrinsic)
With decay, the time elapsed from radionuclide creation to imaging will greatly affect the scale of imaged values and SNR.
The relative uptake of the radionuclide can also vary drastically from scan to scan.

5. PET

• positron wander (intrinsic)
The positron created in the decay wanders a variable distance before annihilating, and thereby emitting a gamma ray. Thus the source of the radiation can only be determined approximately, which lowers the spatial resolution.

¹ It is often assumed that nonisotropic distortions are negligible in MRI and PET [47].

3.3.3 Variations of Interest

Variations of interest are the changes or differences in the data that the researcher or radiologist wishes to visualize. These variations should not be removed, and it is the goal of registration to reveal them more accurately.

1. Single Modality

• temporal differences (extrinsic)
The anatomical or physiological structure may change over time. One example is a tumor's progress, which can be monitored by its change in shape and volume.

2. Multi-modality

• different information (extrinsic)
Two data sets may provide complementary representations of a physical structure (CT/MRI). A functional data example would be PET/MRI registration. In this case, the dysfunction of an anatomical region without structural problems could be revealed by the metabolic variations.

3.3.4 Summary of Variations

The act of correcting the distortions represents the registration process. Although the uncorrected distortions and variations of interest may be ignored in the registration model, they will help determine the features and heuristics used to find the optimal transformation. A good strategy for registration is to remove the corrected distortions independently if they can be modelled correctly and isolated. This will provide a more controlled and flexible process.

3.4 Conditions for Finding a Solution

Exact registration of the data requires two conditions:

1.
The regions in each set must be correctly segmented to represent the same features with respect to the object reference frame.

2. There must exist a one-to-one transformation that maps one set of features onto the corresponding features exactly.

If these requirements are fulfilled, then an exact solution may be found. Both requirements depend highly on the segmentation of the data because this forms the basis of the solution. The derived solution is based on the features, and not the physical space. For the registration to be valid, the segmentation must be a true mapping between them.

In real cases, the conditions for exact registration will not hold completely. An approximate solution will depend on how well the conditions are met, and on the ability of heuristics to find the correct mapping. The ability to measure the accuracy of the solution is also dependent on the degree of fulfillment of the conditions.

Each corrected distortion will have its own necessary conditions to properly remove it. For example, with rigid body motion there must be at least three matching pairs of features (points) that are not collinear in order to provide a sufficient basis for solving the transformation. Other variations further compound the problem of finding an exact solution.

Chapter 4: The Registration Process

It is manifest that if we could find characters or signs appropriate to the expression of all our thoughts as definitely and as exactly as numbers are expressed by arithmetic or lines by geometrical analysis, we could in all subjects, in so far as they are amenable to reasoning, accomplish what is done in Arithmetic and Geometry. ... Moreover, we should be able to convince the world of what we had discovered or inferred, since it would be easy to verify the calculation either by doing it again or by trying tests similar to that of casting out nines in arithmetic.
- G.
Leibniz, Preface to the General Science, 1677

The process of registration takes the raw data from two or more scans and transforms one onto the other, providing an estimate of the resulting accuracy. The entire process can be performed in three phases:

1. Segmentation
2. Matching
3. Verification

The series of actions may be tightly or loosely coupled (overlapping or disjoint phases) depending on the method used. Loose coupling allows each of the phases to be performed by methods specialized for individual cases, thus providing flexibility.

4.1 Segmentation

Segmentation is the process of reducing the data into a more compact representation (called a feature) that may be compared more easily and effectively. It can be stated as a function ($s$) acting on both the static ($V_s$) and dynamic ($V_d$) scans:

$$s(V_s) = F_s$$
$$s(V_d) = F_d$$

resulting in the set of static ($F_s$) and dynamic ($F_d$) features. It is important that the features selected represent the same physical entities in both data sets. This is especially true with the registration of functional data. In these data sets the lack of pronounced distinctions between different anatomical structures is compounded by the lower spatial resolution. It requires expert knowledge to select appropriate features, as there is great variability within functional data of the same modality due to metabolic variations. This is probably clearest in SPECT, where both radioisotope uptake rates and delivered dose amounts can vary.

Segmentation may be performed manually, semi-automatically, or automatically, although the diverse variations in the data dictate that the segmentation can almost never be fully automatic. Pre-processing of the data may facilitate the segmentation process. It is usually performed by various filtering operations. The goal of segmentation is to eliminate all of the data except those features that will be used for the matching process.
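As a minimal illustration of s(V) = F, the sketch below reduces a toy volume to a set of feature voxel coordinates with a simple intensity threshold (the volume, structure, and threshold values are invented for the example):

```python
import numpy as np

def s(volume, lo, hi):
    """A minimal segmentation s(V) = F: the feature F is the set of
    coordinates of voxels whose values fall in [lo, hi)."""
    return np.argwhere((volume >= lo) & (volume < hi))

# Toy scan: a 4x4x4 volume containing one bright 2x2x2 "structure".
V = np.zeros((4, 4, 4))
V[1:3, 1:3, 1:3] = 200.0

F = s(V, 100.0, 300.0)   # feature set: the 8 voxels of the structure
print(len(F))            # -> 8
```

Real segmentation is rarely this clean, but the shape of the operation is the same: everything except the feature of interest is discarded before matching begins.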
The features chosen should be representative of the transformation needed to register the entire data set.

4.1.1 Feature Types

There are numerous ways to represent the features of three-dimensional data. A taxonomy (as they are derived) of the different types of features is given in Figure 4.1. Points are the simplest feature, and are the underlying constructs for most other features. Their compact representation allows operations to be performed efficiently. Edges are the next basis and are most commonly the representations of boundaries. A boundary may be based on an equal value, an equal gradient, edge-detection† operations, classified regions, or any other distinguishing trait. Volumes are different in that they are 3D space filling, but they utilize surfaces as external boundaries. Hence they are often formed from very similar techniques to those used to create surfaces. Statistical features are formed from numerical analyses of the data, which group the data into various salient components. They are often formed from a subset (a rough segmentation) of the data. There are also many other representations based on various levels of abstraction and structures.

Figure 4.1: A Taxonomy of Feature Types (raw data → points, edges, volumes, statistics; thence contours, surfaces, frequencies, model-features, moments, and principal axes)

Often features are composed into a model of the object being scanned. The model can be formed from statistical norms or strictly theory, and a group of these models for a specific region is commonly called an "atlas". Hohne et al. [36] have shown that an atlas can be a powerful visualization and educational tool. Incorporating knowledge of the data into a feature creates a model, the simplest of which may be the interhemispheric fissure of the brain when represented as a plane. It has been used by Gershon et al. [29, 28, 26, 27] and Kapouleas [47] for registration, and Gershon also discusses incorporating various other information into a representation.
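The statistical features in the taxonomy above, moments and principal axes, can be computed directly from a cloud of feature points: the centroid is the first moment, and the principal axes are the eigenvectors of the point cloud's covariance. A sketch (the elongated point cloud is invented):

```python
import numpy as np

def centroid_and_axes(points):
    """Statistical features of a feature point cloud: the centroid (first moment)
    and the principal axes (eigenvectors of the covariance, ordered major to minor)."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)
    cov = np.cov((pts - c).T)
    evals, evecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    return c, evecs[:, ::-1]             # reverse columns: major axis first

# Toy cloud elongated along x, with a little jitter in y and none in z.
pts = [(x, 0.1 * (x % 2), 0.0) for x in range(10)]
c, axes = centroid_and_axes(pts)
print(np.round(c, 2))                    # centroid of the cloud
print(np.round(np.abs(axes[:, 0]), 2))   # dominant direction (close to the x axis)
```

Because the centroid and axes summarize an entire feature set in a handful of numbers, matching them is far cheaper than matching the raw voxels, which is precisely why such compound moments appear as features in their own right.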
Chinzei et al. [16] present the use of models for segmentation and matching, Peifer et al. [67] utilize a model of the coronary arterial tree to aid in registration, and Kovacic et al. [50] register PET and CT by mapping them to a normal (model) position. The most common features are listed in Table 4.3.

Feature          Description                              Representation
points           3D discrete positions or voxel volumes   tuples
contours         stacks of 2D lines                       lines
surfaces         iso-surfaces/gradient                    polygons
volumes          volumetric region                        closed surfaces
frequencies      transform into Fourier domain            complex numbers
moments          centroids, ...                           tuples
principal axes   compound moments                         vectors
atlas            model of average object                  surfaces & volumes

Table 4.3: Feature Types

4.1.2 Choice for Features

The region of the data to be selected for a feature may vary for each patient. It will depend on the types of scans used and the regions being imaged. Van den Elsen [89] has labelled all features as either intrinsic or extrinsic. The intrinsic features are only dependent on the patient, while extrinsic are the result of artificial objects present in the scan. Extrinsic features are commonly referred to as fiducial markers, as they are intended to provide a stable object-defined reference frame.

Intrinsic features are usually chosen as the boundaries of physical structures. Bone structures are often the most rigid and so are good choices if they exist in both scans. With functional and anatomical scans of the head, the brain surface is often used [73, 15], but the epidermis of the head can be obtained from PET transmission scans¹ and may provide a sharper feature [55, 9]. Intrinsic features should not be deformable objects, unless these alone are the objects to be registered. A comparison of some of the possible choices for features of the head was performed by Wowra [98]. This is summarized² in Table 4.4.

There are many different types of extrinsic features (fiducials) that allow the delineation of the object's reference frame.
Fiducial markers (commonly points or lines) are constructed to facilitate easy segmentation. They are limited by the accuracy of repeated positioning and segmentation. The major limitations of extrinsic markers are the inability to be used retrospectively and the discomfort inflicted on the patient.

¹ A transmission scan is similar to a CT scan, and is done before each PET slice to obtain attenuation values to aid in the reconstruction.
² Wowra's survey did not include SPECT, but it can be approximately derived as a degradation of the PET values.

Feature                     CT   MRI   PET/SPECT   Total for Structure (X/9)
mid plane                   3    3     3           9
skull                       2    0     0           2
skull base                  3    0     0           3
ventricles                  3    3     2           8
cortical relief             2    3     2           7
vessel architecture         1    2     0           3
Total for Modality (X/18)   14   11    7

Legend (detectability scale): 0 - not, 1 - in principle, 2 - sufficient quality, 3 - optimal

Table 4.4: Comparison of the Choices for Intrinsic Features in the Head

4.1.3 Techniques

The techniques are often tailored for specific modalities. Comparisons of manual tracing, count-based methods, static thresholds, adaptive thresholds, and 2D and 3D edge-detection methods are provided by Long et al. [60]. Lee and Chung [54], Hohne and Hanson [37] and Andreasen et al. [4] present various other techniques. For more detail, see Johnston's thesis [43] for a discussion of segmentation.

Segmentation techniques require varied amounts of user interaction and time, and this provides one of the means for classifying them. Current techniques are most often image based operations in which each slice is treated as an independent image and the 2D results combined. 3D segmentation exists, but the huge amount of data makes most routines impractical. In each case they will reduce the data into features for use in the matching process.

• Manual Segmentation

Manual segmentation is the process of direct human interaction to select the various features.
The class can be defined as those methods in which the inclusion or exclusion of all voxels is directly decided by the user. Two examples are the selection of single 3D points by a user, or the manual tracing of an ROI, both of which require interaction through the use of a mouse, trackball, or other user interface. These methods can be very accurate (although they are not always so) as they can utilize the highly adapted visual system of an expert. They also are very time intensive for the person performing the task.

• Semi-Automatic Segmentation

Most methods of segmentation fall into the class of semi-automatic methods. These require less human interaction because they utilize some form of knowledge about the data in order to perform the decision making for including or excluding voxels. Threshold methods allow the selection of voxels in various ranges. They may be combined with seed points to form region or volume growing techniques. Auto-contouring is another simple one of these methods, and is used in two dimensions to create a line according to certain boundary criteria. A three-dimensional counterpart is isosurface generation from a seed value or point. With increased knowledge, a probabilistic method can utilize statistical techniques both a priori and "on the fly" to help make decisions.

• Automatic Segmentation

Automatic methods can be created for very specific tasks. When variations in the data are highly predictable, that knowledge can be incorporated into a model that can allow the segmentation to proceed automatically. When the knowledge is explicit it can be incorporated into an expert system, and it will perform well to the degree the assumptions underlying the rule base hold.
These types of segmentation usually cannot tolerate much variance, but they may be necessary to feasibly perform the large number of segmentations required for a comprehensive statistical analysis.

4.2 Matching

Each matching method can be broken down into its three main components:

1. Feature Space - type and choice of features that will be compared
2. Comparison Function - quantification of how closely the features match
3. Heuristic - an approach based on the comparison function to find a solution

The great diversity of the different images to be registered precludes the existence of a single best method for all cases. The huge transformation parameter space necessitates a strategy for finding the solution, as any brute-force method would be infeasible. The solution may be the result of a direct calculation or it may involve some form of iterative process to optimize the match.

4.2.1 Features

The feature type that will be selected is linked directly to the matching method to be used, although this usually does not restrict the method used to segment the features. The features should be chosen to minimize the effects of uncorrected distortions and variations of interest, as their inclusion would only lead to a less accurate registration. Depending on the differences between the data sets, a certain level of abstraction (feature segmentation) may provide a useful and more efficient method for handling the matching process. The more similar the data sets are, the less abstraction is necessary. Simple features can require more computationally expensive methods; complex features often inherently contain more relevant information in a compact representation and thus require less.

The types of features selected allow the desirable properties of the object being imaged to be isolated. Structures or regions that are less vulnerable to physiological variations can be chosen.
This allows the removal of some variances due to growth, motion, or dynamic density. Boundaries of structures are more easily delineated and often represent more stable positions. The differing physical properties imaged by the different modalities require that some segmentation into features be performed. Again, the high contrast that often occurs on structure boundaries is often the most suitable property consistent across different modalities. Only in registering data sets of the same modality would it be appropriate to use the entire data set without some form of segmentation.

4.2.2 Comparison Function

When any two non-identical sets of features are to be matched, an appropriate criterion must be chosen to reveal the relative success of the match. The comparison function quantifies how closely the features match, and therefore is a relation between two sets of features. It can be formulated as a function. In equation form it is:

    c(Fs, f(Fd)) = residual

where c is the comparison function, Fs and Fd are the static and dynamic sets of features, f is a transformation on the dynamic set of features, and the residual is the function value. The registration process attempts to find the transformation f in order to minimize the residual. A single numerical value can usually be produced from the chosen function. Thus when the function has a zero value it represents no discernible difference between the two sets of features.

The comparison function can range from the simplistic to the complex. After the dynamic features undergo a 3D transformation the resulting features will be in a continuous3 domain. Therefore the comparison function may be discrete or continuous in nature. When using a discrete function the features are commonly projected onto the slice planes of the static data set, or rounded to integer values.

It is highly desirable to use a function which is a metric, and therefore has the following properties:

1.
The global minimum is reached only when the two sets of features are identical.

2. The function is equal regardless of the order of comparison, in that c(F1, F2) = c(F2, F1).

3. The triangle inequality holds between individual features: c(i, j) + c(j, k) >= c(i, k) for all i, j, k in A, B.

Another useful property is that the overall function be monotonic; as the difference between the features increases the function is non-decreasing.

3 Continuous in the sense of being real number coordinates not restricted to the original slice planes.

If the sets (Fs and Fd) are composed of more than one feature, the residual of the comparison function is based on some method for combining all the values for the individual features. This method will be dependent on the types of features. Any comparison function should be able to tolerate variations in order to allow the heuristic to find an optimal match. The use of more features usually provides increased robustness. A brief summary of features and common comparison functions is presented in Table 4.5.

Feature           Comparison Function            Comments
raw data          differences of intensity       few variations (same modality)
                  cross-correlation              raw data/bitmap, expensive
points            distance                       any distance function or approx.
contours,         distance,                      computationally expensive
surfaces          perpendicular distance
volumes           differences of volumes         non-overlapping volume or (intersection/union)
frequencies       Fourier correlation            computationally expensive
moments           distance                       simple
principal axes    rotation angles & distance     simple

Table 4.5: A Summary of Features and Corresponding Comparison Functions

Differences are the simplest function and are usually calculated by arithmetic subtraction, although the binary AND, OR, or XOR operations could be used. While normalized cross-correlation is still too numerically expensive for 3D, it can be used on stacks of 2D images if the registration process can be reduced to two dimensions.
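The residual formulation c(Fs, f(Fd)) = residual from Section 4.2.2 can be made concrete with a small sketch. This is an illustration only, not an implementation from the thesis: it uses 2D point features, a rigid transformation f, and an RMS point-distance comparison function c (all names are chosen here for clarity).

```python
import math

def transform(points, angle, tx, ty):
    """Candidate transformation f: 2D rotation followed by translation."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def compare(f_static, f_dynamic):
    """Comparison function c: RMS distance between corresponding points."""
    sq = [(xs - xd) ** 2 + (ys - yd) ** 2
          for (xs, ys), (xd, yd) in zip(f_static, f_dynamic)]
    return math.sqrt(sum(sq) / len(sq))

# A zero residual means the transformed dynamic features match the static ones.
static_pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
dynamic_pts = [(-1.0, -1.0), (0.0, -1.0), (-1.0, 1.0)]   # static shifted by (-1, -1)
residual = compare(static_pts, transform(dynamic_pts, 0.0, 1.0, 1.0))
```

Registration then amounts to searching over the parameters of `transform` (directly or iteratively) for the values that minimize `residual`.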
Distances are the most common comparison function and will be discussed in more detail later. A large amount of theoretical and practical work along lines similar to registration with surfaces has occurred in the computer vision area of attitude determination. This work on the registration of surfaces has developed numerous comparison functions based on curvature and other surface properties, but has not been applied to the very noisy or complex surfaces present in medical applications. See Li [58] and Little [59] for further information. 3D Fourier correlation methods are not utilized for reasons similar to those for cross-correlation. Sometimes the comparison function may be used directly to calculate the optimal transformation, as when the angle between principal axes is used to align them.

Distances and Their Computation

The distance between features is the most prevalent form of comparison. When the features allow a distance function to be used, it is most commonly Euclidean distance4:

    D[(x1, y1, z1), (x2, y2, z2)] = sqrt((x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2)

but other less computationally demanding functions, such as the class of non-negatively weighted absolute functions:

    D[(x1, y1, z1), (x2, y2, z2)] = c0|x1 - x2| + c1|y1 - y2| + c2|z1 - z2|

(this is just Manhattan distance when c0 = c1 = c2 = 1) are often implemented.

When an iterative solution requires repetitive calculation of the distance functions, an approximation may be necessary. One way this can be done is with a distance transform (DT). "A distance transformation converts a binary digital image, consisting of feature and non-feature pixels, into an image where all non-feature pixels have a value corresponding to the distance to the nearest feature pixel [11]."
This is essentially the precomputation of distances over a discrete region surrounding a feature, and can be applied in 2D or 3D.

One type of DT, called chamfering, can be efficiently formed by a two-pass filtering operation on a data set where features are set to 0 and the rest to infinity. This is illustrated in a simple 2D example shown in Figure 4.2. As the mask is moved across the feature image, if the sum of any mask value and the pixel below it is less than the center pixel value, the center value is replaced. After the two passes the image represents the distance transform of the features, where each pixel represents the "distance" to the nearest feature pixel. Using the mask values in this simple example results in the computation of what is called the City block distance. 3D chamfer Distance Transforms were incorporated in the surface matching techniques that were implemented. They are discussed further in Section 5.3.2 and analyzed in Section 6.2.1.

Other DT approximations exist, such as the n-neighbor and octagonal methods, but they provide less accurate approximations [10]. The actual Euclidean distances can be calculated with a similar but expanded chamfer DT algorithm. The increased computation and memory requirements necessary for this are not reasonable, although Paglieroni's 2D algorithm [66], which reduces the increased time costs, may be expandable to 3D. The chamfer DT described is good enough for most applications and is efficient to calculate. It is used for a hierarchical 2D edge matching by Borgefors [12], and has been adopted for 3D registration of surfaces by Jiang [41] and van den Elsen et al. [88].

4 The Lp-norm, (sum over d of |xd - yd|^p)^(1/p), with p = 2.

Feature Image          Mask          Distance Transform
9 0 9 9 9                            1 0 1 2 3
9 9 0 9 9             9 1 9          2 1 0 1 2
9 9 0 9 9     -->     1 0 1    -->   2 1 0 1 2
9 9 9 9 9             9 1 9          3 2 1 2 3
9 9 9 9 9                            4 3 2 3 4
        (2-pass filtering; 9 denotes the infinite initial value)

Figure 4.2: The Formation of a 2D Distance Transformation

When more than one feature is present there are different ways in which an overall function value can be formed.
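The two-pass chamfer operation of Figure 4.2 can be sketched in a few lines. This is an illustrative city-block version (not the 3D implementation described in Chapter 5): a forward raster pass propagates distances from the upper-left neighbors, and a backward pass from the lower-right neighbors.

```python
INF = float("inf")

def chamfer_dt(feature_img):
    """Two-pass city-block chamfer distance transform of a binary image.
    Feature pixels start at 0, all others at infinity."""
    rows, cols = len(feature_img), len(feature_img[0])
    d = [[0 if feature_img[r][c] else INF for c in range(cols)]
         for r in range(rows)]
    # Forward pass: take the minimum over the top and left neighbors plus 1.
    for r in range(rows):
        for c in range(cols):
            if r > 0:
                d[r][c] = min(d[r][c], d[r - 1][c] + 1)
            if c > 0:
                d[r][c] = min(d[r][c], d[r][c - 1] + 1)
    # Backward pass: take the minimum over the bottom and right neighbors plus 1.
    for r in range(rows - 1, -1, -1):
        for c in range(cols - 1, -1, -1):
            if r < rows - 1:
                d[r][c] = min(d[r][c], d[r + 1][c] + 1)
            if c < cols - 1:
                d[r][c] = min(d[r][c], d[r][c + 1] + 1)
    return d

# The feature image of Figure 4.2 (1 marks a feature pixel).
features = [[0, 1, 0, 0, 0],
            [0, 0, 1, 0, 0],
            [0, 0, 1, 0, 0],
            [0, 0, 0, 0, 0],
            [0, 0, 0, 0, 0]]
dt = chamfer_dt(features)
```

Running this on the feature image of Figure 4.2 reproduces the distance transform shown there; each entry of `dt` is the city-block distance to the nearest feature pixel.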
Borgefors' [12] tests with 2D chamfer matching have shown that of the median, arithmetic sum, root-mean-square (rms), and maximum, the rms value provided the best results. This analysis is applicable to medical imaging, and the two-dimensional case is illustrated in Figure 4.3.

Figure 4.3: A Comparison of Features

If feature B is being matched to feature A, then the rms value of position 1 would be smaller than position 2, while the mean value would be the same for both. It is probably more appropriate to have the more consistent error present in position 1, rather than the large variations in error with position 2. This type of closed feature is commonly segmented in medical scans, in which a slight error may result in one feature being contained within the other.

4.2.3 Heuristics

Once the data has been segmented into features, a transformation must be found that will map one set to the other. The type of data, as well as the nature of the changes that have occurred, will determine the complexity of the heuristics needed to find a correct mapping. A minimization of the comparison function will usually drive the heuristic. There are two major classes of heuristics: those that calculate the solution directly, and those that iteratively refine an approximation until it converges.

Direct Solutions

Direct heuristics are those that calculate the optimal transformation in one step. A single residual value from the comparison function must be used to define the registering transformation. Although the techniques used to solve systems of equations and correlation methods have an iterative nature, they are grouped in this class because there is no large distinction between steps. Direct methods commonly contain numerically expensive algorithms at their core, and so are not feasible for an iterative process.
These techniques require enough knowledge of the transformation so that they do not require a search of the different possibilities.

One of the most commonly used 2D methods is cross-correlation,

    a (x) b = integral integral a(x - xi, y - eta) b(xi, eta) d(xi) d(eta)

with both integrals taken from -infinity to infinity, and the selection of the maximal value for x and y [39]. In practice it may utilize the form of a normalized discrete convolution, which has the form in three dimensions of:

                 sum_x sum_y sum_z V1(x, y, z) V2(x - t, y - u, z - v)
    C(t, u, v) = -----------------------------------------------------
                 sum_x sum_y sum_z V2^2(x - t, y - u, z - v)

calculated for an arbitrary translational range of t, u, and v for the x, y, and z-axes respectively. The convolution can be seen to be identical to the correlation except that volume V1 is inverted in all dimensions, while the normalization provides voxel magnitude invariance. The global maximum of this function represents the optimal translational coefficients.

By resampling (mapping) the data into polar co-ordinates, rotations about the chosen polar origin can be found by a translation [17, 33]. Hibbard [33] does this with the autocorrelation (a (x) a) function to avoid dependence on an arbitrarily chosen sampling origin, while Cideciyan [17] maps the Fourier transform of the data into log-polar form to find rotation and scaling from the translations.

A major problem is that the computation required for these correlation methods is too great for use in three dimensions. The results of these methods will not produce a one-step solution for both rotation and translation, and so will usually have to be iterative to be exact. The techniques are still useful. Maguire et al. [62, 61] utilized 2D cross-correlation in segmentation to improve localization of their point features. Future development of parallel processing or faster computers will make these methods more feasible.

Moment based methods use basic statistical values derived from the shape of the data to directly calculate a transformation.
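As a small illustration of the moment idea (not code from the thesis; all names are chosen here), the sketch below works in 2D: the orientation of the major principal axis is recovered from second-order central moments, and the centers of mass then give the translation once the rotation has been applied.

```python
import math

def center_of_mass(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def principal_axis_angle(points):
    """Orientation of the major principal axis from second-order central moments."""
    cx, cy = center_of_mass(points)
    mu20 = sum((x - cx) ** 2 for x, y in points)
    mu02 = sum((y - cy) ** 2 for x, y in points)
    mu11 = sum((x - cx) * (y - cy) for x, y in points)
    return 0.5 * math.atan2(2 * mu11, mu20 - mu02)

# An elongated point cloud, and a copy rotated 30 degrees and shifted by (4, -2).
theta = math.radians(30)
src = [(float(x), 0.2 * ((x * 7) % 3 - 1)) for x in range(-5, 6)]
dst = [(math.cos(theta) * x - math.sin(theta) * y + 4.0,
        math.sin(theta) * x + math.cos(theta) * y - 2.0) for x, y in src]

rotation_estimate = principal_axis_angle(dst) - principal_axis_angle(src)
cs, cd = center_of_mass(src), center_of_mass(dst)
c, s = math.cos(rotation_estimate), math.sin(rotation_estimate)
translation_estimate = (cd[0] - (c * cs[0] - s * cs[1]),
                        cd[1] - (s * cs[0] + c * cs[1]))
```

The recovered angle and translation match the simulated ones here because the two clouds are identical up to a rigid motion; with real data the estimate degrades exactly as the surrounding text describes, since the moments average over all variations in the features.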
Centers of mass can be aligned to correct for translation, and principal axes (if they are representative and stable) can be aligned for rotation correction. These methods are very dependent on the low frequency nature of the features being similar. They require that the variations in the data be uniformly distributed so that the averaging performed with a moment calculation can accurately reveal the optimal transformation. The use of moments and principal axes was studied by Faber and Stokely [24] to find the orientation of cardiac blood pools, while Bartoo and Hanson [7] and Toennies et al. [86] utilized moments for registration. Moments are also inherently utilized in most model-based registrations.

The use of corresponding point features (commonly extracted fiducial markers) provides a basis for solving a system of equations. Once a transformation type is chosen and matching points are selected, any of the numerous methods for solving systems of equations can be used. A comparison of algorithms that can be used to solve for the transformation matrix is provided by Timmens [85]. Pelizzari et al. [69] found that solving for an affine transformation provided reasonable results when enough corresponding points could be chosen, and Henri et al. [32] also solve equations for a transformation matrix. Polynomial transformation (commonly called warping [95]) is one of the curved transformation types that uses the matching points to derive polynomial functions which can map one set onto the other. This may be done by either approximation or interpolation, although the use of higher order polynomials, needed for complex warping, can have problems with large oscillations in the solution5. Approximation attempts to minimize the variance (least squares) from all the control points, while interpolation requires that the transformation map all the corresponding points exactly. For distortion correction Schad et al.
[78] utilize fourth order 2D-polynomials. Maguire et al. [62, 61] and Kramer and Noz [51] use first and second order polynomials for the mapping of points in 2D, and Schiers et al. [79] use them in 3D for registration.

Iterative Solutions

Iterative methods utilize successive approximations to "zero in" on the optimal parameters, and so are often more flexible in accommodating variations than direct methods may be. The comparison function must be simpler than those used for direct methods due to its repeated calculation. One exception is the novel technique discussed by Woods et al. [96], in which the variance of the voxel-to-voxel ratios of two PET scans is iteratively minimized. In a similar but more complex method [97] Woods performs MRI-PET registrations. The process of iteration often allows errors in initial approximations to be corrected later. The formulation of the heuristic into discrete steps allows for the integration of a hierarchical (often pyramidal) structure into the process. This can provide an efficiency gain by initially optimizing on high levels of abstraction and then fine tuning the parameters on the lower levels.

5 This is known as the Runge phenomenon. When this occurs the solution may have only small errors near the control points, but elsewhere the error will be very large.

Gradient search methods can be utilized when the comparison function provides a generally convex shape over the parameter space. This allows a series of small refinements of the parameters to guide the search from an initial position to the minimum. The presence of local minima, as opposed to one global minimum, is the major stumbling block for these methods. The comparison space is very complex, and so these local minima inevitably exist for 3D rotation and translation.
Jiang [41, 42] deals with this problem by using a large number of starting points spread over the parameter space.

4.3 Verification

The result of the matching process will be a transformation to register the data. How accurately the data are aligned must be determined before the results can be used. This is difficult to do in real applications, where the residual from the matching heuristic and visual inspection are the only two techniques available. Therefore the heuristic should be tested in simulations which allow the error to be accurately quantified and the robustness tested.

4.3.1 Comparison Function Residual

The residual of the comparison function is often used to describe the similarity between features after a calculated transformation. This residual should be a good indicator of the relative success; however, it is dependent on the segmentation and the approximations of the comparison function, and so should not be used as the final indicator. Often the residual can only be used to quickly discover which results are poor.

4.3.2 Simulated Data

Simulated data is just about the only way to arrive at a truly quantitative measure of verification. A simulation is the artificial creation of two scans that are to be registered. With real data there are often too many unknowns and other distortions to perform a highly accurate check on the mathematical soundness and completeness of the results.

With most simulations the actual correspondence between 3D points in the features can be calculated. The correspondence of static and dynamic feature points can be used to calculate the Euclidean distances between the points after the registration. Depending on the position of the features, relative to the other data, this may provide a bound for the misalignment error of the entire data set. When using resampled real data the segmentation error can be excluded from the resulting alignment error.
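When the correspondence is known, this error measure reduces to a few lines. The sketch below is illustrative only (the transformation and names are invented for the example): a known rigid motion simulates the dynamic set, and the mean and maximum Euclidean distances between corresponding points quantify how well a registration undoes it.

```python
import math

def apply_rigid(points, angle_z, t):
    """Rotate a list of 3D points about the z-axis, then translate by t."""
    c, s = math.cos(angle_z), math.sin(angle_z)
    return [(c * x - s * y + t[0], s * x + c * y + t[1], z + t[2])
            for x, y, z in points]

def alignment_error(static_pts, registered_pts):
    """Mean and maximum Euclidean distance between corresponding points."""
    dists = [math.dist(p, q) for p, q in zip(static_pts, registered_pts)]
    return sum(dists) / len(dists), max(dists)

# Simulation: the dynamic set is the static set moved by a known transformation.
static_pts = [(10.0, 0.0, 5.0), (0.0, 20.0, 5.0), (-10.0, -5.0, 0.0)]
dynamic_pts = apply_rigid(static_pts, math.radians(15), (3.0, -2.0, 1.0))

# A perfect registration inverts the simulation: subtract t, rotate back.
perfect = apply_rigid([(x - 3.0, y + 2.0, z - 1.0) for x, y, z in dynamic_pts],
                      math.radians(-15), (0.0, 0.0, 0.0))
# A slightly wrong rotation leaves a measurable residual misalignment.
rough = apply_rigid([(x - 3.0, y + 2.0, z - 1.0) for x, y, z in dynamic_pts],
                    math.radians(-14), (0.0, 0.0, 0.0))

mean_err, max_err = alignment_error(static_pts, perfect)
mean_rough, max_rough = alignment_error(static_pts, rough)
```

The perfect inverse drives both error measures to (numerically) zero, while the one-degree rotation error shows up directly as a per-point distance, which is the bound on misalignment the text refers to.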
If the simulation transformation is also used on the static set of features, then it too can be registered and compared to the identical unmodified set.

With the simulated data, the registering transformation can be compared to the one used to create the simulation. When the transformations are in matrix form, the product of the two matrices should result in the identity matrix:

                  | 1.0  0.0  0.0  0.0 |
    Mreg * Msim = | 0.0  1.0  0.0  0.0 |
                  | 0.0  0.0  1.0  0.0 |
                  | 0.0  0.0  0.0  1.0 |

Although this results in a quantitative measure of error, it is not very intuitive. The matrix can then be used to transform a single set of features to determine the average distance between corresponding points mentioned previously.

Mathematically Created

Simulated data can be based on various levels of abstraction:

• mathematically created data and/or features

In this way highly controlled shapes can be created and transformed, often with no uncertainty in orientation and position. Ellipsoids or other more complex shapes can be used to give benchmarks of the heuristics with regards to well-behaved comparison functions. If segmentation methods require testing then original data can be created; otherwise the features can be formed directly from equations.

• numerically transformed real features

Using real features allows the heuristic to be tested on the actual comparison space that will exist in real cases. A single set of features can be transformed, distorted, or modified in a completely controllable manner, or else two sets of matching features can be transformed accordingly. This can provide data sets which test aspects of the heuristic that relate to specific variations.

• resampling real data

Resampling at different rates after an arbitrary transformation creates data that mimics the results of multiple scans, with the only variation being the position and orientation. This also allows the segmentation to be tested along with the matching method.
Added noise and other modifications can be used to replicate the differences arising from various scanning device parameters, or possibly other variations, enabling the creation of highly realistic simulations.

Phantom Data

A phantom is an object that is imaged for the purpose of simulating the effects of imaging a real object. It can be scanned after various controlled physical transformations. The exact composition of the phantom's volume is known, so the scanned volume can be checked. Often the phantom contains various imaging substances that can be exactly controlled.

4.3.3 Patient Data

When actual data has been registered, it can be visually inspected to verify the result. This will, of course, be a subjective evaluation. It may be aided by the presence of fiducial or reference markers in the images, which can be easily segmented and compared. Current analysis of the data is almost always performed on 2D images, and so it is most likely that one data set will be resampled along the same image planes as the other data set. This enables any numerical analysis or algorithms to be performed on identical regions. While these operations are normally done to utilize the added information from the registered data set, they can also be used to discover errors.

Various visualization techniques allow the comparison of the data sets for verification. These can enable the user to detect various signs of misregistration.

• Linked Cursor

The simplest visual inspection of registered data is the use of a linked cursor. In this case the movement of a cursor in one data set will move another cursor to the spatially equivalent position in the other data set. For 2D viewing, this is commonly performed after an initial step of resampling identical slice planes using the registering transformation. This simplifies the process of calculating the matching cursor position, but more importantly it preserves directional motion across the data sets (e.g.
horizontal motion of the cursor in one set results in horizontal motion in the other).

• Transferring ROIs

With the registering transformation, any ROI may be transferred from one set to the other. The ROI may be of any type, from simple polygons to the output of an edge-detection method. With a 2D ROI in set A, the data in set B may be resampled so as to lie in the plane of the ROI. The ROI from set A can then simply be overlaid on the other set. While this also can be done with 3D ROIs, another alternative is that the ROI can be transformed over to set B and then intersected with the appropriate planes.

• Image Merging

It is often beneficial to resample one entire data set so that it lies in the same planes as the other. The equivalent slices can be shown side by side, or can be integrated using various visualization techniques. This may involve averaging, color-washing, selective segmentation, or other compositing techniques.

• Volume Rendering

Three-dimensional display techniques have only recently been utilized to provide a better understanding of the data. Diagnosis is still mainly performed by looking at the 2D slices, but volume visualization techniques are quickly gaining wide-spread acceptance. The process converts the 3D data into 2D images that retain some form of 3D information. There are two broad classes of volume visualization: one utilizes surfaces and the other various types of projections. The first fits surfaces onto the data and then uses simulated lighting and hidden surface removal to simulate 3D viewing effects, while the other projects rays through the data onto the 2D image and accumulates desired characteristics.

For all 3D display techniques the data requires some segmentation to provide suitable results. The segmentation may be inaccurate and, compounded by the volume rendering approximations, this may create a misleading 3D image.
This is a major concern with the use of volume visualization, and a reason why 2D slices remain the primary diagnostic platform. Another disadvantage of volume visualization is the long time necessary for creating the rendered images. A 3D object is often rotated to provide different viewpoints, which multiplies the number of images to be created and further increases the time needed for creating the visualization. One method to accelerate image creation is the use of an octree representation of the data [94], which eliminates the traversal of empty space, or, in surface rendering techniques, a reduction in the number of polygons [87, 80]. Ultimately, parallel computation and faster processors may provide the only true solution for real-time volume visualization. See Mealha et al. [64] for a discussion of various data structures and optimizations. Different volume visualization techniques are surveyed by Elvins [22], while Wallis and Miller [93] and Tiede et al. [84] describe them in a medical context. A survey of the hardware involved is given by Stytz et al. [82], and two recent techniques are explained by Yoo et al. [100] and Levoy [56].

4.4 Case Studies

Three specific registration methods will be discussed here in further detail. These include typical examples of the three most common techniques represented in the literature: manual methods, corresponding point mapping, and the iterative alignment of surfaces.

4.4.1 Iterative Manual Alignment

A completely interactive method for the 3D alignment of functional and morphological scans is provided by Uwe Pietrzyk, Karl Herholz, and Wolf-Dieter Heiss [73]. Another manual method is described by Kapouleas et al. [47].

Feature extraction

The feature extraction is performed by 2D preprocessing of the data. This involves convolution with various filters for edge-detection and smoothing.
The output of this preprocessing is hopefully the set of contours which represent the boundaries of various internal structures. The segmentation is only performed on one set, usually the higher resolution data set.

Registration

The contours obtained from the feature extraction are overlaid on the other data sets to provide a visual means of comparison. The data sets are interactively and arbitrarily re-sliced to find the correct transformation. When an expert is satisfied that there is enough correspondence, the registration process is complete. This typically required only 5-10 min on standard computing equipment.

Results

Pietrzyk et al. discovered that it is visually easy to detect a variation in rotation of 3 degrees and a 2 pixel translation. Therefore they concluded that an accuracy to within ±1 voxel was obtained. Their matching process was able to register to a point where the greatest error was below the FWHM of the line spread function of the lower resolution imaging device.

4.4.2 Solving Equations Formed From Corresponding Points

An approach that uses numerous corresponding 3D points to define a polynomial function to transform and register the data is presented by Schiers et al. [79]. A similar approach in 2D was implemented by Maguire et al. [62, 61].

Feature extraction

The features chosen are single points (landmarks) in each data set that represent the same 3D point. To accomplish this, Schiers created a tool for the interactive selection of 3D points. This was commonly done with a projection of the outer skin surface (CT/MR), which could be manipulated to provide different views.

Registration

Once the corresponding points are selected by an expert, the calculation of a polynomial which approximates or interpolates them must be made. Linear polynomial interpolation requires at least 4 points, and if distortions are present higher order polynomials are needed.
From these points, Schiers solves three equations for the respective x, y, and z transformations. The coefficients are calculated from the minimization of a least squares residual.

Results

The accuracy of the results was not discussed specifically, but was stated as being sufficient for rigid objects. It was noted that other distortions required higher degree polynomials, but then problems with oscillations occur, so many more points are needed.

4.4.3 Iterative Surface Alignment

Pelizzari et al. [68, 55, 15] have devised a well-known "hat" and "head" surface matching method. Other methods of this type are Jiang et al. [41, 42], Malandain and Rocchisani [63], van den Elsen et al. [88], Collignon [18], and van Herk [90].

Feature extraction

Pelizzari's method involves the extraction of a surface from each data set. This is done by manually selecting a feature (contours) in each set. The user goes through all the slices segmenting the contours in 2D. This can be aided by edge-detection and contour-tracing programs which provide a faster, semi-automatic procedure. This process will yield a stack of contours that can be used to represent a surface for both the static data (which they call the head) and the data set to be transformed (the hat). These surface features are then used for the registration process.

Registration

The features are initially transformed according to the machine setup parameters. The transformation is assumed to be rigid body motion, with scaling used for configuration error and MR distortions; therefore 9 parameters are needed to encompass all rotation, translation, and scaling.

Powell's minimization technique [74] is used to find the transformation function parameters that minimize the residual, with the residual being the sum of an approximated distance from each "hat" point to the closest point on the "head". The minimization method is based on the changing of one parameter at a time and is a conjugate-gradient approach.
The comparison function is the distance between the "hat" and "head" surfaces, calculated by a ray travelling from the "hat" point to the centroid of the "head". Points that are not within the range of the "head" slices are ignored, with the intent of allowing a better match of features that only partially overlap. Typical surface features consist of 1500 points for the "head" and 250 for the "hat". The minimization process required 5-10 min on a VAX-11/750.

Results

In phantom studies (CT to CT/MR/PET and MR to PET) the method found transformations that had residual values (rms) that were less than the largest pixel sizes [68]. In-plane errors were estimated from the deviations of the centroids of cylinders imaged in both modalities. The overall analysis is shown in Table 4.6 [68].

Modalities  Type      Voxel Spacing (mm)              Residual (rms mm)  In-plane Error (mm)
CT/CT       Phantom   1.15x1.15x4 / 1.15x1.15x4       0.73               0.7±0.48
CT/MR       Phantom   1.15x1.15x4 / 1.95x1.95x10.5    1.16               1.36±0.45
CT/PET      Phantom   1.15x1.15x4 / 2.50x2.50x14      1.80               1.38±0.65
MR/PET      Phantom   1.95x1.95x10.5 / 2.50x2.50x14   1.83               2.48±1.09
PET/MR      Clinical  2.50x2.50x14 / 1.17x1.17x10.5   1.6                n/a

Table 4.6: Overview of Pelizzari et al.'s Results

Chapter 5

Implementations and Innovations

But as soon as I had acquired some general notions in physics, and beginning to test them in various particular difficulties, had observed how far they can carry us, and how much they differ from the principles that have been employed up to the present time, I believed that I could not keep them concealed without offending gravely against the law by which we are bound to promote, as far as in us lies, the general good of mankind.
- R. Descartes, Discourse on Method, 1637

As part of the research for this thesis, numerous different methods and techniques for each phase of registration were implemented and tested. The main thrust of the work involved a thorough examination of surface matching methods.
The programs used to perform the registration were created in an object-oriented data flow environment. In this environment images are objects which can flow down a pipeline of processing modules.

The registration was restricted to rigid body transformations because they are simple to perform, they maintain the relation of intra-data distances, and they are the most widely used. They are appropriate for the scans of the head that were the focus of testing. This chapter describes the major characteristics of the various methods and provides an in-depth description of the techniques used.

5.1 Platform

The registration process was performed in a data flow environment created for multi-purpose image processing. This was provided by the software package WiT [5] (Object Oriented Imaging Technology) running on SUN workstations. The WiT data flow environment provides a large library of I/O, image processing, and control modules. These modules are visually connected together with links to describe the data flow algorithm.

A high-level toolkit was used to decrease the cost of developing a complete registration system. This eliminated the need for the programming to start from scratch, because low-level image operators and graphical user interfaces are provided by WiT. It did, however, lock the programmer into a framework biased towards using the provided tools, whereas accuracy and efficiency gains could be achieved if specialized functions were coded manually. With WiT, user defined modules can be easily incorporated into the system. In this way a large number of different algorithms can be prototyped quickly and efficiently, simply by connecting together different procedures.

An example of a data flow diagram (called an Igraph) produced directly from WiT is shown in Figure 5.1.
WiT is a 2D image processing system, and so pseudo 3D methods were performed by utilizing a vector of images and an external data structure to represent a scan volume. The scan info operators on the far left of the diagram represent scan or volume information objects. They contain the dimensions of the data set as well as the inter-voxel distances, all of which are shown below the operator as the dim and scale values. Each operator can have its parameters manipulated by clicking on its icon, which provides a pop-up control box. The read raw data operators load the data into a vector of images, which is then sent to a vector function operator. The operators with solid black circles in the top right corners represent hierarchical operators which contain arbitrary Igraphs within them. These can be expanded, and are constructed the same as any other Igraph. Each element in the input vector is sent to the auto segment operator connected to it. This operator (which will be explained later) segments each image independently. The resulting vector of segmented data, representing a surface, is passed down to the surface to points operator. The magnifying glass seen on this link can be placed on any link, and allows the user to examine the data that is flowing through it. The surface to points operator converts the images to points, which will be the primitive used by the registration process. The registration operator (which has many other parameters that have not been selected to appear below the icon) then registers the two sets of features and outputs the registering transformation that minimizes the comparison function, as well as the final residual value. The centre of mass of the features is used in the registration process. It is also used in the next operator, which resamples the dynamic set of data to extract slices that should match with the static set.
The resampled data and residual are displayed, as well as the aligning rigid body transformation (the RBT seen through the magnifying glass).

Figure 5.1: A WiT Igraph for Registration

5.2 Direct Point Matching

An initial implementation utilized individually selected point features, which in their simplicity allowed a brute force matching method to be used. This served as a foundation for the implementation and testing of various segmentation and verification subroutines. The simple nature of this method precludes it from use for highly distorted data sets, but it provided a useful starting point for further development. Because of its labour intensive nature, no detailed analyses of this algorithm were performed.

5.2.1 Segmentation

The features chosen for this registration method were corresponding 3D points, representing the same relative position within each data set. These points, commonly called "landmarks", are often the center of local extrema or a discontinuity in a boundary. In each data set the points were selected by pointing the mouse at the desired pixels in a 2D slice of the data and clicking, while a slider was provided to allow the user to find the desired slice. In this way corresponding points were selected from two data sets as representing the same physical position in space.
The correspondence between points that were selected could not conveniently be recorded (in WiT), although the user's selections were almost always entered in matching pairs. This segmentation provided a small number of relatively accurate points, the accuracy of each point being dependent on the voxel dimensions, the expertise of the user, and the display and interaction techniques.

5.2.2 Matching

The comparison function for point matching is commonly the distance to the closest point (rms for > 1 point). The initial implementation used a brute force method to test all possible transformations within a given range. For two sets of N points the closest distance calculation requires O(N^2) operations. Hence, a less computationally expensive approximation is essential for large N and brute force methods. This was done by precomputing the distances for a small region surrounding each static point and putting the values in a 3D lookup table, similar to that used for a distance transform. After transforming the dynamic set of points, each point would be rounded to get a lookup table distance entry, and the squared distances totalled. The square root of the average point value was then used for the comparison function.

The WiT interface did not maintain the order of selected points, so corresponding points could not be recorded easily. Therefore a system of equations could not be set up.¹ Instead an initial guess of the transformation was used to restrict the parameter space, and a complete search was carried out. The scanning devices' configuration values (sampling distributions) were taken as accurate and used to perform any necessary scaling between data spaces.
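The lookup-table comparison just described can be sketched in Python. This is an illustrative reconstruction, not the WiT implementation; both function names are hypothetical, and the table here is built by brute force rather than by the small-region precomputation used in the thesis.

```python
import math

def build_lookup(static_pts, shape):
    """Distance table: each voxel holds the distance to the nearest
    static point (precomputed once, before the parameter search)."""
    table = {}
    for z in range(shape[0]):
        for y in range(shape[1]):
            for x in range(shape[2]):
                table[(z, y, x)] = min(
                    math.dist((z, y, x), p) for p in static_pts)
    return table

def rms_residual(table, moved_pts, shape):
    """Comparison function: rms of looked-up distances, with each
    transformed dynamic point rounded to the nearest table entry."""
    total = 0.0
    for p in moved_pts:
        q = tuple(min(max(round(c), 0), s - 1) for c, s in zip(p, shape))
        total += table[q] ** 2
    return math.sqrt(total / len(moved_pts))
```

Inside the brute-force loop only `rms_residual` is evaluated, so each candidate transformation costs O(N) lookups rather than O(N^2) distance computations.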
¹ Although these selected points would naturally provide a basis for mathematical least squares techniques such as Procrustes point cloud matching, not knowing the corresponding points turns the O(N^3) matrix elimination into an O(N!N^3) problem due to the choices for matching pairs.

This left six parameters for rotation and translation, with the rotation performed about the centre of the data set. The algorithm then iterated by looping through all parameter ranges in discrete steps.

The minimum value of the comparison function (within the tested range) was taken as the true registration point. If the optimal value occurred when a parameter was at the limit of its tested range, then the search would be repeated with extended ranges for all such parameters. Increased accuracy can be obtained simply by decreasing the step sizes. The brute force method ensures that the correct solution (an approximation) will be found if it is within the bounds of the parameter search, and if the other assumptions hold.

5.2.3 Verification

This basic algorithm was tested on PET-PET and MRI-MRI registrations. The tests were in no way comprehensive, but were meant only to initiate the development of the verification phase. Simulated data requiring registration was created by resampling a patient data set after a mathematically performed rigid body transformation. In this way the exact transformation is known. The simulated data was then resampled again, along identical slice planes as the original, using the result of the registration. These registered slices were compared with the original static set, both visually and by subtraction, in order to reveal inconsistencies in the result.

A linked cursor was also simulated by allowing point ROIs to be selected in one set and identical points shown in the other.
This allows any misalignment of selected points to be seen more clearly. Identical slice planes can be extracted from the original and registered data and averaged to provide insight into the errors. These techniques, along with the actual numerical values of the result, provided a reasonable and accurate verification of the correctness.

5.3 Iterative Surface Matching

Currently the most successful general purpose registration techniques utilize surface matching. From the "hat" and "head" matching used by Pelizzari [70] in conjunction with Powell's numerical optimization [74], to a recent physically based rigid body model which incorporated torque [63], these methods have provided good results by using a large number of surface points. The large surface features can be segmented with much less expertise than specific points, especially when dealing with low-resolution data or multi-modalities. Therefore multiple variations of surface based registration were developed and tested.

5.3.1 Segmentation

The major benefit of a surface is that by using a large number of points it allows small uncertainties in segmentation to be averaged out. A surface feature inherently provides more information, such as curvature, but this is not often used due to the increased computation and storage requirements. The chosen surface will usually represent the boundary of a single anatomical structure.

There are a great number of different methods available to segment a surface. Most common are two dimensional approaches that operate on each slice independently, and then combine the resulting contours in stacks. Manual segmentation, in which all contours are drawn by hand, was performed, but it is extremely time consuming. The 2D iso-value contour was also extended into 3D with an algorithm [99] to provide a single iso-surface.
Simplistic heuristics such as thresholding in conjunction with morphological operations were also used to provide a surface feature.

PET/SPECT segmentation

With PET and SPECT the data has a low SNR, and so most iso-value and iso-gradient based contour creation techniques do not work well. Although PET transmission scans can be segmented more easily, they were not readily available and so could not be used. The basic segmentation heuristic (for the brain surface) utilized a single thresholding value, region smoothing, and outlining.

The variability in reconstruction techniques does not necessarily preserve relative values between slices, so a single threshold value for all slices could not always be used. Therefore the average value in each slice was calculated as a base value, to which a certain proportion of the standard deviation was added to arrive at a threshold value. Although highly dependent on the variance of the noise, this approach provided an approximate segmentation of the brain surface. The WiT Igraph for this technique is shown in Figure 5.2, along with typical data and results in Figures 5.3 and 5.4.

This Igraph is the expanded hierarchical operator auto segment from Figure 5.1. The slice (image) enters in the top left corner and proceeds to the threshold and image stats operators. The image stats operator calculates the average and standard deviation (σ), which are later used. The proportion factor operator provides a constant which is linked to the memory operator because it will be used repeatedly for all the images that come through. The calc operators calculate 0.8·σ plus the average, which is then used as the lower threshold. The closeBin operator performs 5 iterations of the morphological operations: a dilation followed by an erosion, which removes small holes from the segmentation.
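The per-slice threshold heuristic (average plus a proportion of the standard deviation, followed by repeated dilation and erosion) might be sketched as follows. This is an illustration of the idea only, not the WiT operators themselves; the function names are hypothetical.

```python
import statistics

def adaptive_threshold(slice2d, factor=0.8):
    """Binary mask: pixel >= mean + factor * stddev of its own slice,
    so the threshold adapts to per-slice intensity variations."""
    vals = [v for row in slice2d for v in row]
    thresh = statistics.mean(vals) + factor * statistics.pstdev(vals)
    return [[1 if v >= thresh else 0 for v in row] for row in slice2d]

def _morph(mask, hit):
    """Apply max (dilation) or min (erosion) over each 3x3 neighbourhood."""
    rows, cols = len(mask), len(mask[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            nbhd = [mask[r + dr][c + dc]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if 0 <= r + dr < rows and 0 <= c + dc < cols]
            out[r][c] = hit(nbhd)
    return out

def close_binary(mask, iterations=5):
    """Dilation followed by erosion, repeated; fills small holes."""
    for _ in range(iterations):
        mask = _morph(mask, max)   # dilation
        mask = _morph(mask, min)   # erosion
    return mask
```

A production version would use an image-processing library's morphology operators, but the effect, closing small holes while preserving the gross region shape, is the same.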
The outline operator then outputs the contours that form the segmented surface.

CT/MRI segmentation

Manual segmentation was used to generate reasonably accurate surfaces. It was performed by a WiT operator that provided mouse controlled contour drawing for each image. This method was extremely slow because no special aids were developed. The thresholding techniques used for SPECT and PET were also utilized.

Figure 5.2: PET/SPECT Segmentation

Figure 5.3: PET Data

Figure 5.4: PET Segmentation

Wyvill's iso-surface growing algorithm [99] was also implemented to segment surfaces. The algorithm tags the voxels and then triangulates the intersections of the surface and the voxel. However, the large amount of data in a polygonal surface representation was not needed, so only the tagging of voxels was performed. This was initiated by selecting a seed point from the data, which began the growth of the iso-value surface. An example MRI data set and a resulting segmentation are shown in Figures 5.5 and 5.6.

5.3.2 Matching

The surface was treated only as a set of 3D points which the heuristics could manipulate to find the best match. Numerous comparison functions were studied for their suitability with surfaces. Based on these comparison functions, various iterative strategies were compared to study their behavior under diverse conditions.

Comparison Functions

The comparison function was chosen as the rms value of the distance from each point on the dynamic surface to the closest point on the static surface. The precomputation of local distances used in the initial registration method was originally utilized, but it was too inefficient for the calculation of large distances. The great amount of overlap when points are close together on a surface made most of the computation redundant.
Therefore a distance transform was used. The process of creating a 3D chamfer distance transform initially requires a volume in which the feature voxels are set to 0 and the other voxels to oo (infinity). Then two passes with a 3x3x3 filter mask will create the DT. The process is provided here in pseudocode:

/* 3D Chamfer Distance Transform */

/* Mask for Forward Pass */
          [ +d3 +d2 +d3 ]           [ +d2 +d1 +d2 ]
MF[-1] =  [ +d2 +d1 +d2 ] ; MF[0] = [ +d1   0  oo ] ; MF[1] = [ oo ];
          [ +d3 +d2 +d3 ]           [  oo  oo  oo ]

/* Mask for Backward Pass */
                         [  oo  oo  oo ]           [ +d3 +d2 +d3 ]
MB[-1] = [ oo ]; MB[0] = [  oo   0 +d1 ] ; MB[1] = [ +d2 +d1 +d2 ]
                         [ +d2 +d1 +d2 ]           [ +d3 +d2 +d3 ]

/* Forward Pass */
FOR (k = 0; k < slices; k++)
  FOR (j = 0; j < rows; j++)
    FOR (i = 0; i < columns; i++)
      DT[k][j][i] = min over (a,b,c) in MF of (DT[k+a][j+b][i+c] + MF[a][b][c])

/* Backward Pass */
FOR (k = slices-1; k >= 0; k--)
  FOR (j = rows-1; j >= 0; j--)
    FOR (i = columns-1; i >= 0; i--)
      DT[k][j][i] = min over (a,b,c) in MB of (DT[k+a][j+b][i+c] + MB[a][b][c])

Figure 5.5: MRI Data

Figure 5.6: MRI Segmentation

The comparison function provided by a chamfer DT results in a more efficient calculation of approximate Euclidean distances [11]. Anisotropic data was kept in its original form (not interpolated to cubes) and so required an adaptation of the basic chamfer distance transform (DT) that could best be performed using floating point mask values. Although this allows anisotropic DTs to be calculated, it may require more memory than is feasible. A 256x256x70 MRI scan would require slightly more than 18MB for the DT. This can be circumvented by forming integer approximations that may be stored in a 0 to 255 valued char (1 byte) versus the arbitrary valued float (4 bytes), or by forming the DT from the lower resolution modality when possible.
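For concreteness, the two-pass chamfer DT above can be written as runnable code. The sketch below uses the common integer chamfer weights d1=3, d2=4, d3=5 (an assumption here; the thesis itself uses floating point weights adapted to anisotropic voxels).

```python
def chamfer_dt(volume, d1=3, d2=4, d3=5):
    """3D chamfer distance transform in two sweeps over the volume.
    volume: nested lists [z][y][x]; truthy entries are feature voxels."""
    Z, Y, X = len(volume), len(volume[0]), len(volume[0][0])
    INF = float('inf')
    dt = [[[0 if volume[z][y][x] else INF for x in range(X)]
           for y in range(Y)] for z in range(Z)]
    # Forward half-mask: previous slice (9 voxels), previous row (3),
    # left voxel (1); the weight depends on how many axes differ.
    fwd = [(-1, dy, dx, (d1, d2, d3)[abs(dy) + abs(dx)])
           for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    fwd += [(0, -1, dx, (d2, d1, d2)[dx + 1]) for dx in (-1, 0, 1)]
    fwd += [(0, 0, -1, d1)]
    # Backward half-mask is the point reflection of the forward one.
    bwd = [(-dz, -dy, -dx, w) for (dz, dy, dx, w) in fwd]
    for mask, backward in ((fwd, False), (bwd, True)):
        zr = range(Z - 1, -1, -1) if backward else range(Z)
        yr = range(Y - 1, -1, -1) if backward else range(Y)
        xr = range(X - 1, -1, -1) if backward else range(X)
        for z in zr:
            for y in yr:
                for x in xr:
                    best = dt[z][y][x]
                    for dz, dy, dx, w in mask:
                        nz, ny, nx = z + dz, y + dy, x + dx
                        if 0 <= nz < Z and 0 <= ny < Y and 0 <= nx < X:
                            best = min(best, dt[nz][ny][nx] + w)
                    dt[z][y][x] = best
    return dt
```

With the 3-4-5 weights, the stored values approximate three times the Euclidean distance, which is why the thesis's char-valued (0 to 255) approximation is usually sufficient.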
The first of these two optimizations has recently been utilized in the European COVIRA project on surface registration [2, 1].

Abstractions

A few different search strategies were implemented, incorporating hierarchical abstractions of various types. One hierarchy is based on the representation of the data on multiple spatial scales, as Borgefors [12] utilized for 2D images. This allows the lower frequency features to be matched first (for initial approximations), and then the higher frequency features can be utilized for fine tuning the parameters. At each hierarchical level, more and more detail of the complete segmentation would be used. Beyond the efficiency gain provided by using a reduced amount of data, the hierarchy may provide more accurate results by ignoring the fine details when gross alignments are being made.

Jiang et al. [41] created different levels of this type by scaling both data sets down to create smaller sets for each level. This required the precomputation of a DT for each scale, and a filtering step to provide accurate features at the lower resolutions. We created the different scale levels by sampling the dynamic set of features (points) at various rates. With a large number of points this probabilistic approach can attain the same hierarchy of scales. Our approach of sampling the dynamic set is more computationally efficient than Jiang's, and allows a single DT of the static set to be used for all levels of the hierarchy.

A hierarchical threshold approach [42] was also incorporated. It requires only the establishment of a limiting value on each DT lookup; this restricts the maximum distance any point will contribute to the comparison function.
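A thresholded comparison function of this kind, an rms over DT lookups capped at the current maximum distance, might look like the following sketch (the function name is hypothetical; dt is assumed to be a precomputed distance-transform volume in voxel coordinates):

```python
import math

def thresholded_rms(dt, points, max_dist):
    """rms of DT lookups for the transformed dynamic points; any lookup
    beyond max_dist contributes only the constant max_dist, so gross
    segmentation errors stop steering the fine alignment."""
    Z, Y, X = len(dt), len(dt[0]), len(dt[0][0])
    total = 0.0
    for z, y, x in points:
        iz = min(max(round(z), 0), Z - 1)   # round to nearest DT entry,
        iy = min(max(round(y), 0), Y - 1)   # clamped to the volume
        ix = min(max(round(x), 0), X - 1)
        total += min(dt[iz][iy][ix], max_dist) ** 2
    return math.sqrt(total / len(points))
```

Lowering max_dist between hierarchical levels reproduces the strategy described in the text: distant, possibly misclassified regions contribute a constant penalty, while closely matching regions dominate the gradient.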
The focus of the matching process can then be controlled by restricting the matching to localized regions.

The strategy usually starts with a large threshold (maximum distance), allowing gross differences and thus gross alignment, and then decreases the threshold, so that for fine adjustments of the parameters only the regions that match very closely make a difference. The thresholding also allows variations in segmentation beyond a certain limit to be ignored. The basic effect of thresholding is illustrated in the comparison functions shown in Figures 5.7 - 5.10. The surface shown in Figure 5.4 was used for both static and dynamic surfaces, with intra-slice voxel separation of 1.0mm and inter-slice separation of 2.0mm. The matching surfaces were identical with the exception that the dynamic surface had approximately 25% of its surface translated by -15mm, and another 25% translated by -6.5mm, both in the x direction. The actual distance between corresponding points and the DT comparison function are shown as the dynamic surface is translated in x in Figures 5.7 and 5.8 respectively. In these figures the actual distance between corresponding points can be calculated because the same surface was used to form both dynamic and static sets. The surfaces are registered, and so the global minimum should occur when there is no translation in x. In the comparison function without thresholds (Figure 5.8) it can be seen that the error biases the minimum. The threshold value of 10 (Figure 5.9) moves the minimum closer to its optimal position by keeping the effect of the 15mm error constant around the 0 point. The threshold value of 2 (Figure 5.10) provides the global minimum at the correct position of zero, but it can also be seen to create local minima at the erroneous segmentation locations.
Where the errors in segmentation are larger than the threshold value, the effect of these errors will remain constant, and the thresholding will move the global function minimum to a more accurate registration point.

Figure 5.7: Actual Distance

Figure 5.8: DT Function

Search Methodology

The surface dependent nature of the search space is too complex for an in-depth analysis. Therefore an iterative gradient search approach was initially adopted. It consisted of discrete steps taken independently for each of the six parameters. The step size was modified and controlled in various manners. Rotational parameters were initialized to zero, and rotations were performed about the centre of mass so that identical step sizes would not bias the search. With highly irregular objects this may still lead to inadequate rotational steps for some axes. The initial translational parameters were set to align the centres of mass. This was intended to quicken the search process and increase the accuracy, but this result is dependent on the features being globally similar.

The highly complex nature of the search space led to the use of univariate relaxation techniques [65]. Given the parameter state s^k after iteration k, univariate relaxation computes a next state that can be written as:

s^(k+1) = s^k - a_k p^k,  k = 0, 1, ...,

where a_k is the step size scaling factor, p^k = ±Δ_a^k for a single basis direction a, and the basis A = {x-axis rotation (xr), y-axis rotation (yr), z-axis rotation (zr), x-axis translation (xt), y-axis translation (yt), z-axis translation (zt)}. The gradients were calculated from finite differences using the entire step size.
With only descending steps being taken, the comparison function (c) satisfies:

c(Fs, f(s^(k+1), Fd)) <= c(Fs, f(s^k, Fd)),  k = 0, 1, ...,

Figure 5.9: DT with Threshold at 10

Figure 5.10: DT with Threshold at 2

We formed hierarchical levels in the search after either a local minimum was found, or when a single loop through the parameters resulted in no change in parameters. A level was also formed after a predetermined maximum number of iterations, in order to stop slowly converging or oscillating searches. The a parameter was also varied to study its effect and interaction along with the scale and threshold levels. For each parameter the step sizes (Δi) were set to an initial value, and at the end of each hierarchical level they were halved.

The basic matching heuristic is presented here in pseudocode:

/* Hierarchical Univariate Relaxation */
xt = initial x translation; yt = initial y translation; zt = initial z translation;
xr = initial x rotation; yr = initial y rotation; zr = initial z rotation;
Δxr = x rotation step size; Δyr = y rotation step size; Δzr = z rotation step size;
Δxt = x translation step size; Δyt = y translation step size; Δzt = z translation step size;
axr = ayr = azr = axt = ayt = azt = step size scaling factor; /* decay term */
FOR (i = 0; i < levels in hierarchy; i++) {
  Fd = subset(scale(i), Fdynamic); /* sample dynamic features at multiple scales */
  threshold = DTthreshold(i); /* refine maximum DT value */
  δxr = Δxr; δyr = Δyr; δzr = Δzr; /* instantiate step sizes */
  δxt = Δxt; δyt = Δyt; δzt = Δzt;
  converged = false; /* minimum not found */
  FOR (j = 0; j < maximum iterations AND NOT converged; j++) {
    /* x rotation (gradient calculated by finite difference of step size) */
    r0 = c(threshold, Fs, f(xr, yr, zr, xt, yt, zt, Fd));
    r1 = c(threshold, Fs, f(xr + δxr, yr, zr, xt, yt, zt, Fd));
    δc = r1 - r0; /* gradient */
    IF (δc > 0) THEN δxr = -axr·δxr; /* reverse & decrease step size */
    ELSE xr = xr + δxr; /* step taken */
    /* similarly for y and z rotations */
    /* x translation */
    r0 = c(threshold, Fs, f(xr, yr, zr, xt, yt, zt, Fd));
    r1 = c(threshold, Fs, f(xr, yr, zr, xt + δxt, yt, zt, Fd));
    δc = r1 - r0;
    IF (δc > 0) THEN δxt = -axt·δxt;
    ELSE xt = xt + δxt;
    /* similarly for y and z translations */
    IF (no further steps possible) converged = true;
  } /* one loop through all parameters */
  /* halve the step sizes */
  Δxr = Δxr/2.0; Δyr = Δyr/2.0; Δzr = Δzr/2.0;
  Δxt = Δxt/2.0; Δyt = Δyt/2.0; Δzt = Δzt/2.0;
} /* one level completed */
/* xr, yr, zr, xt, yt, zt now contain the final rigid body parameters */

Four basic methods of this type were created, with various modifications:

• Method A
The initial method utilized an independent a for each parameter. It would decrease and invert the step size whenever the gradient was positive. The decay provided by a allowed the iterations to quickly converge to a minimum. A major problem with this method was the existence of many local minima which occurred with various features. In order to help avoid this, the step sizes were reinitialized to half their initial value after each level. This erases the knowledge that a minimum was found and may allow the search to escape a local minimum.

• Method B
Method B utilized a constant magnitude a (a_k = 1.0 for all k). Therefore the step lengths were only reduced after each level, to provide finer and finer adjustments. The initial step sizes were intended to be large enough to enable the search to jump over the local minima. Each level was terminated when no step had been taken during one loop through the parameters.
This will often be earlier than when it is actually stuck in a minimum. This was intended to prevent the search from getting to the extrema of a local minimum at the initial levels.

• Method C
In the third technique, a simulated annealing strategy [49] was adopted to deal with the local minimum problem. With this procedure the step size was decreased (by a) with each reversal of direction, similarly to method A. At each level in the hierarchy the reheating process would occur, whereby the step sizes would be reinstantiated and the parameters would be given a random offset. This offset, in addition to the increased step sizes, was intended to solve the difficult problem of allowing the search to escape the concavity of local minima, while not leaving the valley of a global minimum. Similarly to method B, a level was completed when no step had been taken on a single loop through the parameters.

• Method D
Method D incorporated the same technique as method B with the exception of the level termination criterion. It continued the search at each level until no further decreasing steps could be made (stuck at a relative minimum).

Multiple Searches

Another approach for coping with local minima is the use of multiple starting points. This is not a different method, but is the repetition of one method to avoid sensitivity to initial conditions. This brute force ideology is based on the assumption that if enough searches are performed in a disjunct set of search extents, then the global minimum will be found. Multiple starting points can also provide a quick approximation of the entire search space. After only a few iterations the relative minima from the different searches can be used to form decision criteria on how to narrow the search bounds.
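In runnable form, the core of the univariate relaxation (with a constant step-scaling factor, as in method B, and halved step sizes between levels) might be sketched as follows; the cost function stands in for the thresholded DT comparison, and all names are hypothetical:

```python
def univariate_relax(cost, params, steps, max_iters=100):
    """One hierarchical level: try a finite-difference step in each
    parameter in turn; keep it if the cost decreases, otherwise reverse
    the step direction. Stop when a full loop takes no step."""
    params, steps = list(params), list(steps)
    best = cost(params)
    for _ in range(max_iters):
        stepped = False
        for i in range(len(params)):
            trial = list(params)
            trial[i] += steps[i]
            c = cost(trial)
            if c < best:              # descending step: accept it
                params, best = trial, c
                stepped = True
            else:                     # reverse direction (constant a = 1)
                steps[i] = -steps[i]
        if not stepped:
            break                     # no step in a full loop: level done
    return params, best

def hierarchical_relax(cost, params, steps, levels=3):
    """After each level, halve the step sizes for finer adjustment."""
    for _ in range(levels):
        params, best = univariate_relax(cost, params, steps)
        steps = [s / 2.0 for s in steps]
    return params, best
```

For the registration itself, params would be the six rigid body parameters (xr, yr, zr, xt, yt, zt) and cost would transform the dynamic points and evaluate the DT rms.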
It must be noted that with all hierarchical search strategies a few iterations can only provide an approximation of the search space, and so there exists the danger that the region containing the global minimum may be excluded from further consideration at the early levels.

5.3.3 Verification

Any verification can only represent the approximate overall behavior of the methods. Because the search space is highly dependent on the features segmented, it is possible that the algorithms presented will lead to erroneous alignments in some cases. This can only be overcome by large amounts of simulated and real testing. The accuracy of each implemented registration technique was rigorously tested and compared. The testing was intended to examine the behavior and effect of various adaptations of the matching methods.

In simulation studies the exact misregistration was examined in a large variety of tests. In this way each of the different modifications and methods was tested and compared thoroughly. Then the methods were further tested on actual patient data. The results of all these analyses are discussed in the next chapter.

Simulation Studies

Rigid body simulation studies were performed on a single surface composed from actual data contours. In this way the correspondence of points was known, allowing the exact precision of the registration to be measured. This simulated transformation was performed about the centre of the data set, while the registration transformations operate about the centre of mass. This fact, as well as the registering rotations being performed in the same order (not the reverse), meant the registering parameters would not be the inverse of those that created the simulation.
The complete transformation matrix would, however, be the inverse of the original one (in principle).

Simulations were made more realistic by the use of artificial noise and other distortions. Noise was introduced into the simulation process because the points composing the transformed features were rounded to integers, and also by the addition of random offsets to each point on the dynamic surface. Non-overlapping surface regions were created by using subsets of both the dynamic and static surfaces.

Patient Data Studies

When the true registration transformation is not known, as occurs with patient data, numerous 2D visualization techniques can be used to inspect the results. When the same modalities are registered, the subtraction of identical slice planes provides a good indication of the registration. Other techniques for 2D viewing of multi-modality data are "colorwashing" and "color merging", which can be easily implemented in WiT. With "colorwashing", the greyscale values of one image are modified (colored) by the values of the other image. Perault et al. [71] utilize SPECT to red "wash" greyscale CT. A problem with this is the resulting loss of information [23].

The WiT Igraph for "colorwashing" is shown in Figure 5.11. At the top left, image A is loaded and sent to the ALU Operation (arithmetic logic unit) operator, which can perform the pixel addition, multiplication, subtraction, division, maximum, or minimum operations between images. Image B goes into the ALU Operation operator and the red and blue channel inputs of the rgbMerge operator. An image formed by choosing the maximum value of corresponding pixels in A and B (Output Image_ij = max(A_ij, B_ij)) is output from ALU Operation, and this becomes the green channel in the RGB image. Therefore the output image is an RGB greyscale of image B except where image A's pixels (or voxels) exceed its value, where A will green "wash" B proportionate to A's greater intensity.
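The green "wash" reduces to a per-pixel maximum. A minimal sketch over greyscale images stored as nested lists (illustrative only, not the WiT ALU and rgbMerge operators):

```python
def green_wash(a, b):
    """Per-pixel "colorwash": greyscale B fills the red and blue
    channels, while green takes max(A, B), so pixels where A is
    brighter than B are tinted green in proportion to A's excess."""
    rows, cols = len(b), len(b[0])
    return [[(b[r][c], max(a[r][c], b[r][c]), b[r][c])
             for c in range(cols)] for r in range(rows)]
```

Where A <= B the three channels are equal and the pixel stays grey, which is exactly the behaviour described for the Igraph above.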
This can be performed on all the slices in two registered data sets.

A variation on this is the merging of the different images by putting them on different color (RGB) channels. This results in no relative information loss, but may not be as visually informative. Figure 5.12 contains the WiT Igraph for this procedure. This is a similar Igraph to that of "colorwashing", but all the data from each image is used. The copy clear operator simply creates a zero valued image the size of the input image. Image A is shown on the green channel and image B on blue. Where the data sets contain equal values a shade of cyan results. The choice of channels is somewhat arbitrary, as the red channel could be utilized to replace green or blue. In another method [81], called alternate merging, the two images are interleaved on alternating horizontal lines, allowing comparison. Figures 5.13 and 5.14 represent registered T1 and T2 weighted MRI slices through the head. Figure 5.15 illustrates an alternate merging of the two scans, in which details from each image can be accurately compared. All these techniques were constructed quite easily in the WiT environment.

Figure 5.11: The "Colorwashing" of an Image

Figure 5.12: A WiT Igraph for "Colormerging"

Figure 5.13: A T1 Weighted MRI Scan

Figure 5.14: A T2 Weighted MRI Scan

Figure 5.15: Alternate Scan Line Merging of T1 and T2 Weighted MRI

Chapter 6

Analysis of Surface Matching

I may now turn my attention to what is the most important subject of all, namely, to the correction of the understanding and to the means of making it able to understand things in such a way as is necessary to the attainment of our end.
To bring this about, the natural order we observe demands that I should recapitulate all the modes of perception which I have used thus far for the indubitable affirmation or negation of anything, so that I may choose the best of all, and at the same time begin to know my powers and nature which I wish to perfect.

- B. Spinoza, The Treatise on the Correction of Understanding

The techniques and methods implemented in the previous chapter were studied and their results analyzed. Experimental values for the robustness and accuracy of the methods were obtained by using simulations. Patient scans were used as the basis of the simulations to provide a realistic framework. Actual patient data registrations were also performed as a test of true application performance. This chapter presents the details and results of these analyses.

6.1 Accuracy of Segmentation

With surface-based approaches the accuracy of the segmentation is a major factor contributing to the success or failure of the registration. Segmentation is, however, another extremely difficult problem, and so could not be thoroughly investigated. Simple segmentation techniques that could be automated were used. This was necessary in order to allow a large number of registrations to be performed. The resulting accuracy of the registrations was reduced, but this was inevitable in the time frame available. The segmentation issue was avoided in many simulations by utilizing the same segmentation for both surfaces, and modifying one or both to mimic different conditions.

6.1.1 Types of Segmentation Error

Errors in segmentation may be categorized in two major groups:

1. Minor Error
Minor error will be used to indicate those errors where the correct structure is segmented, but the boundary between it and surrounding regions is slightly off. Therefore there is an approximate mapping from the region of erroneous segmentation to the correct one.
2. Major Error
Major error exists when a region is erroneously classified as another region type, or when the magnitude of any error is very large. This type of segmentation can usually not be mapped to the correct one. When this type of error occurs there will not be a corresponding feature in the other segmented data set.

Although these two classes represent different aspects of the same problem, distinguishing between the two will be useful in future analysis. When comparing two segmentations with minor errors there will usually be corresponding features in each segmentation. Major errors are more of a problem because there is often no similar feature to match between sets. Commonly, minor errors occur when a single threshold is used to perform binary classification in low resolution images. It is difficult to determine the proper threshold, and slight variations of the threshold value will cause a continuous variation in the surface. Major errors are more visible in MRI and CT when an automatic segmentation program has been used to extract finely detailed structures. In this case some regions are either included or excluded erroneously.

Minor error is illustrated in Figure 6.1. This PET example is a variation of the segmentation presented earlier in Figure 5.4, which is shown as the lighter line. The minor error was created by changing the threshold value from 95 to 70. This change in the threshold value increased the size of the segmented brain as more background noise was included.

Figure 6.1: Minor Error in Different Segmentations of a PET Scan

6.1.2 Segmentation and Transformations

Unless the segmentation is highly accurate it will introduce biases in the registering transformation. Although this cannot be helped, it can be understood, and it may be used to help decide which type of transformations to search for. Scaling is commonly used in the registration process to correct for errors in imaging device calibration.
Scaling is probably the transformation most susceptible to error, because minor errors are very prevalent and they affect the perceived scale. A surface may be segmented consistently larger or smaller than the identical surface in the other data set, resulting in a scaling factor that corrects the segmentation error, but one not based on true spatial scale differences. For this reason scaling factors should be corrected elsewhere, and not included in the transformation.

6.2 Theoretical Analysis

The analyses performed depend to a great extent on the segmentations. In many cases this cannot be avoided. When possible, more general tests were performed. Most of the simulation testing was performed using the typical PET data set shown before. A single segmentation was used for both dynamic and static surfaces (along with various levels of noise and truncation) to enable exact error calculation.

6.2.1 Comparison Function

Each surface was represented as a set of points. The comparison function was chosen to be the rms value of the distances to the nearest neighbour. A 1-to-1 mapping of points is not utilized, as errors in segmentation, as well as variations in voxel size, violate the properties required for its accurate use. Given two features with N points in each, computing the distance from every point in one feature to the closest point in the other requires O(N^2) operations (in a naive approach each point (of N) must be compared to all the points (N) in the other set to find the minimum when a 1-to-many mapping is allowed). The segmentations usually contained from around 2000 points in a PET scan to over 50000 in an MRI scan. Therefore a less computationally expensive approximation may be essential. This was initially performed by precomputing the distances for small regions around each static point.

The three-dimensional lookup table for a maximum local distance of dmax requires precomputation of O(N(2dmax + 1)^3), but then provides an O(1) calculation of the distance to the nearest point. The lookup table will then provide exact Euclidean distances within the local regions.
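As a minimal sketch of this comparison function (a brute-force NumPy version for illustration, not the thesis implementation):

```python
import numpy as np

def rms_nearest_distance(dynamic_pts, static_pts):
    """rms over the dynamic points of the Euclidean distance to the
    nearest static point (a 1-to-many mapping).  Brute force: O(N*M)
    distance evaluations for N dynamic and M static points."""
    diff = dynamic_pts[:, None, :] - static_pts[None, :, :]   # (N, M, 3)
    nearest_sq = (diff ** 2).sum(axis=-1).min(axis=1)         # (N,)
    return float(np.sqrt(nearest_sq.mean()))
```

The local lookup-table precomputation described above replaces the inner minimum with an O(1) table read at the cost of the O(N(2dmax + 1)^3) setup.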
This precomputation is only feasible for small distances, up to a dmax of about 10 voxels. These precomputed distances are worthwhile when there are a very large number of points (more than 2000) in each set and when the heuristic requires a great number of iterations (> 50).

This need for more efficient precomputation of larger distances led us to the Chamfer Distance Transform. Chamfering creates a lookup table (of approximate distances) the entire size of the data set (M^3), versus the small distances (at most 10) that were feasible with the brute force method. The algorithm was given previously in Section 5.3.2 and requires O(M^3) operations on an MxMxM data set. However, it calculates exact distances only up to the size of the filter mask used to create the transform. The mask values (di) are variants of the local distances. The distance from a voxel to a feature that is beyond the size of the mask will be approximated by the minimum path of m3 3D-diagonal steps (of length d3), (m2 - m3) 2D-diagonal steps (of length d2) and then (m1 - m2) vertical and horizontal steps (of length d1). If we assign the coordinate differences to the mi so that m3 <= m2 <= m1, then for isotropic voxel spacing the error from the actual Euclidean distance is [10]:

    error = sqrt(m1^2 + m2^2 + m3^2) - m3*d3 - (m2 - m3)*d2 - (m1 - m2)*d1

This expression can be differentiated to find extrema.
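The chamfer construction itself is a pair of raster scans over the volume. A minimal sketch, assuming a boolean feature volume and isotropic mask values d1, d2, d3 for axis, 2D-diagonal and 3D-diagonal steps (the thesis's anisotropic variant instead uses exact anisotropic Euclidean mask values):

```python
import numpy as np
from itertools import product

def chamfer_dt(feature, d1=1.0, d2=1.414, d3=1.732):
    """Two-pass 3D chamfer distance transform of a boolean feature
    volume: forward and backward raster scans, each relaxing every
    voxel against the 13 already-visited neighbours of a 3x3x3 mask."""
    dt = np.where(feature, 0.0, d3 * sum(feature.shape))   # "infinity" elsewhere
    half = [off for off in product((-1, 0, 1), repeat=3) if off < (0, 0, 0)]
    cost = [(d1, d2, d3)[sum(map(abs, off)) - 1] for off in half]
    nz, ny, nx = dt.shape
    for sgn in (1, -1):                                    # forward, then backward
        zs = range(nz) if sgn == 1 else range(nz - 1, -1, -1)
        ys = range(ny) if sgn == 1 else range(ny - 1, -1, -1)
        xs = range(nx) if sgn == 1 else range(nx - 1, -1, -1)
        for z in zs:
            for y in ys:
                for x in xs:
                    for (dz, dy, dx), c in zip(half, cost):
                        zz, yy, xx = z + sgn * dz, y + sgn * dy, x + sgn * dx
                        if 0 <= zz < nz and 0 <= yy < ny and 0 <= xx < nx:
                            dt[z, y, x] = min(dt[z, y, x], dt[zz, yy, xx] + c)
    return dt
```

The pure-Python loops are only for clarity; an M^3 volume needs a compiled implementation in practice.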
In Table 6.7 Borgefors summarizes the accuracies of various choices for the di values when using an (M + 1) dimensioned cube. However, the data is commonly not isotropic and so must be interpolated to use these values appropriately.

    Distance Function    d1    d2       d3       Upper Bound on Error
    Chamfer Euclidean    1     √2       √3       -0.147M
    Chamfer Optimal      1     ≈1.314   ≈1.628   ±0.104M
    Chamfer (3,4,5)      3     4        5        -0.118M

    Table 6.7: Distance Transform Approximations

The other possibility is to use anisotropic Euclidean values for the DT mask. The errors for various anisotropic values were analyzed by performing simulations using a single point feature in the center of a 101x101x101 volume and calculating the corresponding DT. An example of the 3D distance transform based on a single point feature and isotropic inter-voxel distances is provided in Figure 6.2. Distances are mapped to a greyscale in which darker pixels indicate smaller distances. The single point feature is at the center of the volume, and the variation from perfect circles reveals the approximation errors.

The errors resulting from using various anisotropic mask values are summarized in Table 6.8.

Figure 6.2: A 3D Chamfer Euclidean Distance Transform (101x101x101)

In these cases the di values forming the mask are the exact anisotropic Euclidean distances. The maximum and average relative errors are dependent on the ratio of the intra-slice and inter-slice dimensional units, and not on their absolute value. Therefore the ratios 2:2, 3:3, ... result in the same errors as shown for the 1:1 ratio and the entries are not repeated. The maximum error as a fraction of the actual distance was found by comparing all values in the computed DT lookup table to the actual distance to the center point, and similarly the average error reflects all the values in the entire DT.

From this table it can be seen that the use of the exact anisotropic Euclidean distances does not degrade the performance of the chamfer DT to a large extent.
Although the maximum relative error grows large (up to 35%) for highly disproportionate voxel spacing (1:7), the average relative errors remain quite small (around 9%). These values were calculated from a single point feature. It has been shown that the maximum error occurs at relatively large distances [10, 11]; therefore when a large number of point features are distributed throughout the volume the errors will be smaller.

    Dimensional Units          MaxError /    AvgError /
    Intra-slice  Inter-slice   ActualDist.   ActualDist.
    1.00         1.00          0.128093      0.081525
    1.00         2.00          0.199794      0.087848
    1.00         3.00          0.254452      0.094127
    1.00         4.00          0.292891      0.096605
    1.00         5.00          0.320485      0.096638
    1.00         6.00          0.341011      0.095288
    1.00         7.00          0.356817      0.092987
    2.00         3.00          0.165241      0.083803
    2.00         4.00          0.199794      0.087848
    2.00         5.00          0.229514      0.091466
    2.00         6.00          0.254452      0.094127
    2.00         7.00          0.275336      0.095694
    3.00         4.00          0.152918      0.082638
    3.00         5.00          0.177242      0.085087
    3.00         6.00          0.199794      0.087848
    3.00         7.00          0.220171      0.090387
    4.00         5.00          0.146692      0.082184
    4.00         6.00          0.165241      0.083803
    4.00         7.00          0.183068      0.085794
    5.00         6.00          0.142946      0.081973
    5.00         7.00          0.157888      0.083082
    6.00         7.00          0.140448      0.081839

    All values calculated from a single point feature at the center of a 101x101x101 dimension volume.

    Table 6.8: An Error Analysis of the Anisotropic Euclidean Chamfer DT

In an application where real-numbered coordinates result, the distance values can be formed from the anisotropic transform by scaling each point coordinate into DT space and then rounding off to form the lookup table index. This rounding is an approximation that will introduce error, but when the points lie continuously between voxels (when rotated out of the DT planes) it should statistically provide the same results as if the DT, features, or data were interpolated into smaller units.
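The scaling-and-rounding lookup just described amounts to the following sketch; the out-of-table behaviour (returning the transform's maximum value) mirrors the finite-table handling discussed in the next section:

```python
import numpy as np

def dt_lookup(dt, point_mm, voxel_mm):
    """Approximate the distance from a real-valued point (in mm) to the
    static feature: scale the point into the anisotropic DT's index
    space, round to the nearest voxel, and read the precomputed table.
    Points falling outside the table get its maximum value."""
    idx = np.rint(np.asarray(point_mm) / np.asarray(voxel_mm)).astype(int)
    if np.any(idx < 0) or np.any(idx >= np.array(dt.shape)):
        return float(dt.max())
    return float(dt[tuple(idx)])
```

For the 1.0mm intra-slice, 2.0mm inter-slice PET data used later, `voxel_mm` would be (2.0, 1.0, 1.0) in (z, y, x) order.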
With highly disproportionate dimensions the rounding error may have to be reduced by interpolation of some type, and this will be discussed later.

Other existing techniques could be utilized to calculate the distances for the comparison function. One way in which "on the fly" distance calculations could be improved is by the use of an octree partitioning of the points. This would result in small groups of points that would not suffer drastically from the O(N^2) search. It would also not be limited by the finite nature of the precomputed comparison function. Bouts [13] presents an optimized Euclidean distance transform based on a single propagation of a "distance wavefront" through the volume. This is an area for possible future investigation.

6.2.2 Bounds of the Parameter Space

The parameter space is the set of all possible transformation parameters in which the registering transformation may exist. By assuming rigid body motion we have 6 parameters: 3 for rotation and 3 for translation. The parameters for rotation of the dynamic surface about each axis may in theory be completely arbitrary, anywhere in the range of 0 to 360 degrees. In practice this can be reduced to a small range of values by visually inspecting the data sets, unless the registration is to be performed completely automatically. The implementations created are intended to be semi-automatic; they rely on the assumption that the parameter space has been reduced. A similar case can be presented for the translation parameters, which must be contained in [-XSize, XSize], [-YSize, YSize], [-ZSize, ZSize] of the larger data set for each respective dimension, while the average translations needed for registration will usually be much smaller, on the order of 1/20 of these ranges.

6.2.3 The Nature of the Search Space

The search space is the comparison function value over the entire parameter space. It is dependent on the features and is highly complex.
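The six parameters generate transforms of the dynamic point set of the form below (a sketch; the thesis does not fix a rotation-composition order, so the Rz·Ry·Rx convention here is an assumption):

```python
import numpy as np

def rigid_transform(points, rx, ry, rz, tx, ty, tz):
    """Apply the 6-parameter rigid-body motion: rotations (radians)
    about the x, y and z axes, composed as Rz @ Ry @ Rx, followed by
    the translation (tx, ty, tz)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return points @ (Rz @ Ry @ Rx).T + np.array([tx, ty, tz])
```

The search heuristics explore the value of the comparison function over these six parameters within the reduced bounds described above.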
Its six-dimensional nature makes it too large to be completely analyzed, even for simple features. With identical static and dynamic features the space can be shown to have a global minimum at the true registration point, unless there are symmetries in a feature, which will then introduce periodicities into the search space. Although perfect symmetries will almost never exist, approximate ones may produce similar effects.

The epidermis of the head and the brain surface are approximately ellipsoidal, and this results in an approximately periodic search space. Using the PET surface shown previously in Figure 5.4 (with distances of 2.0mm inter-slice and 1.0mm intra-slice) this phenomenon is revealed in the one-dimensional plots of the search space shown in Figures 6.3 - 6.8. The function is calculated using the single surface for both the dynamic and the static (the DT) surfaces. These figures represent the function when all other parameters are zero. Thus when the rotation parameter is zero or 360 degrees, or when the translation parameter is zero, they are exactly aligned. Because the static and dynamic surfaces are the same, corresponding points are known and can be used to determine the actual distances. In all these figures the dashed line represents the actual rms distance between corresponding points and the solid line is the distance transform approximation used for the comparison function. The actual distance lines are sinusoidal for rotation, as the points follow the rotational path up to the furthest point (where the data set is completely flipped about one axis) and then return back to the original position. The translational actual distance lines are linear with the displacement, as expected.

A periodic search space poses a large problem to the gradient search methods, as it produces deep local minima which are difficult to distinguish from the true minimum.
Although the chamfer distance transform overestimates the actual distance for a single point, it can consistently underestimate the true distance for corresponding points when a 1-to-1 mapping of points is not enforced. This can be seen in the 0-45, 135-235, and 315-360 degree ranges of Figure 6.3. In the other ranges the comparison function reveals discontinuities where the function begins rapidly increasing towards peaks. With rotation about the X axis beyond 50 degrees, the large Y dimension of the surface comes into play as a large region of the surface leaves the region of the finite DT. As the points leave the finite region of the lookup table they are assigned the maximum value of the entire distance transform. This shows a major limitation of using a distance transform, or any other finite function. The function is biased towards keeping the surfaces contained within the lookup table, and therefore is only applicable when the dynamic surface is contained within it upon registration.

Figure 6.3: Comparison Function Over X Rotations Only (c(Fs, f(rotx, Fd)))

Figures 6.4 and 6.5 represent plots of the Y and Z rotational aspects. With Y rotation the function suffers slightly from the effect seen in the X rotation plot, but it is significantly less dramatic because Y is the major axis of the ellipsoid, and so the points move less during rotation. The Z rotation plot reveals none of the effects of the finite approximation. The X and Y translational plots in Figures 6.6 and 6.7 reveal the finite nature of the transform after 20mm; however, these large translations would almost never be needed.
Within the Z translation plot of Figure 6.8, the finite effects are apparent before 20mm, and are magnified due to the lower resolution of this dimension (14 x 2.0mm compared to 100 x 1.0mm for X and Y), but again these effects appear beyond the normal working ranges needed.

Figure 6.4: c(Fs, f(roty, Fd))    Figure 6.5: c(Fs, f(rotz, Fd))

Another problem, which is visible in the Z translation plot, is the large inter-slice distance. When the step size (1.0mm in these examples) is below the size of the DT units (2.0mm inter-slice) there are flat regions in the search space. For Z translation this effect will only be large when the dynamic features are parallel with the DT planes (no X or Y rotations), so that all dynamic points in a plane are rounded to the same plane. This can introduce variance in the solution of up to half the inter-slice distance. There are two methods to deal with this, both involving interpolation. The first method is to interpolate the data, or segmentation [41], into cubic or at least less disproportionate distances. The other is to use interpolation on the anisotropic DT [1]. The interpolation of the DT requires added computation on each calculation of the comparison function, while the interpolation of the data requires greater precomputation. Either of these techniques will eliminate the problem when there is no rotation. The DT was initially utilized without either of these modifications in order to test its performance, but the registration of two data sets, both of which have large inter-slice separation and little rotation, will require the use of some form of interpolation.
These forms of interpolation are analyzed in Section 6.5.2.

These examples have shown some of the weaknesses inherent in the distance transform, but do not reveal the local minimum problem that may occur with various features. The local minimum problem is discussed by Jiang et al. [41]. Local minima often appear when major segmentation errors create false matches and when the features contain periodic shapes which can be aligned in multiple positions.

Figure 6.6: c(Fs, f(transx, Fd))    Figure 6.7: c(Fs, f(transy, Fd))

6.3 Simulations: Transformed Real Features

Arbitrary rotations and translations were applied to surface data to create simulations of data requiring registration. These were then varied by using only subsets of the surface and adding random noise, in a manner similar to how Jiang et al. [41] created their simulated data. A single surface was used as the basis for both the static and dynamic surfaces; this allows the exact misalignment of identical points to be calculated. This is the most quantifiable method available for testing the process. In this way a highly controlled and flexible test bed was made, in which the correct result is known to great accuracy.

The surface was formed from a single PET scan of a normal patient. It consists of 14 slices of 100x100 dimensional data. The inter-voxel distances were taken to be 1.0mm intra-slice and 2.0mm inter-slice to simplify and clarify the results. The 1:2 ratio is a good approximation of the actual ratio of 4:7.14 and provided a realistic brain surface when viewed in an orthographic projection. The data and the extracted features were shown in two dimensions in Figures 5.3 and 5.4. The contours, which consist of 2290 points, were used for the following analysis.
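The construction of one simulated trial can be sketched as follows. `make_simulated_pair` is a hypothetical helper: a z-axis rotation stands in for the arbitrary rigid transformation, and the 70%/10% subset split is the one used later in this section for the 60%-overlap tests:

```python
import numpy as np

def make_simulated_pair(surface_pts, rot_z_deg, noise_mm, seed=0):
    """Build (static, dynamic) surfaces for one registration trial:
    rotate a copy of the surface, round to integer voxels (simulated
    resampling round-off), add uniform random offsets, and keep
    different subsets so only 60% of the two surfaces is common."""
    rng = np.random.default_rng(seed)
    th = np.radians(rot_z_deg)
    R = np.array([[np.cos(th), -np.sin(th), 0.0],
                  [np.sin(th),  np.cos(th), 0.0],
                  [0.0, 0.0, 1.0]])
    dynamic = np.rint(surface_pts @ R.T)              # integer round-off noise
    dynamic += rng.uniform(-noise_mm, noise_mm, dynamic.shape)
    n = len(surface_pts)
    dynamic = dynamic[: int(0.7 * n)]                 # dynamic: first 70% of points
    static = surface_pts[int(0.1 * n):]               # static: drop the first 10%
    return static, dynamic                            # shared region: 60%
```

Because both surfaces derive from the same base contours, the true misalignment of corresponding points after any candidate registration can be computed exactly.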
These original contours were used to form the floating point chamfer DT shown as an array of 30 slices in Figure 6.9. The addition of an eight-voxel border in all dimensions was utilized to create a more stable search environment (116x116x30). This is done so that the finite nature of the DT does not bias the search when a transformed surface extends slightly beyond a boundary of the imaging data. In this greyscale representation of the 3D lookup table the darkest regions are closest to the static features. The 14 slices in the middle contain zero-valued representations of the surface in Figure 5.4, but they are complicated as the distances from neighboring slices are propagated throughout the DT formation.

Figure 6.8: Comparison Function Over Z Translation Only (c(Fs, f(transz, Fd)))

6.3.1 Methods

The four different search methods described in Section 5.3.2 were compared in their basic forms and with modifications. Each method was first compared with identical step lengths, a constant 0.5 fraction of the dynamic contour points, and no thresholding. The parameters were individually tuned to provide appropriate responses for each heuristic. All methods were based on five hierarchical levels of gross step length adaptation, to keep as much as possible constant across techniques.

Figure 6.9: A Floating Point Anisotropic Chamfer Distance Transform

Initial testing was performed without random noise, and both surfaces were exactly the same except for the round-off after the simulated rigid body transformation, which averages a quarter voxel in each dimension for each point. The limit of the registration accuracy would be expected to be the error for round-off, which is 0.612mm rms. The averages are compiled from 729 tests, which consist of rotations from -40 to +40 degrees with 10 degree increments for all axes.
Any simulated translations would be removed by the initial alignment of the centres of mass, but the search still included the translation parameters. The searches for all of the 729 transformations began with initial rotation values of zero, and the centre of mass alignment provided the starting values for translation. The results are analyzed in Table 6.9. It shows the average (d) of all the misalignment values (di) and the standard deviation (σd) of these values. This misalignment was calculated as the rms distance between all corresponding points on both surfaces after registration. Because the distribution was not normal, the percentage of all the tests in which the distance (di) was below some stated value is also provided.

    Method   d          σd         %d      %d      %d      %d
             (rms mm)   (rms mm)   < 1.0   < 2.0   < 5.0   > 10.0
    A        0.73       1.15       99.4    99.7    99.9    0.1
    B        0.90       0.82       86.3    97.0    99.2    0.1
    C        0.94       1.42       89.2    96.0    99.2    0.6
    D        0.71       0.19       97.3    99.5    100.0   0.0

    Averages computed from 729 trials.

    Table 6.9: Comparison of Basic Methods Without Noise

The results of this testing show that with very little noise in the dynamic set, each method provided good results. Methods A and D, which continue until they get stuck, provide better results. This was expected with the "error-free" data, but it may lead to the finding of more false minima when features have errors. The difference between the d values of method A and method D is not significant enough to make any general conclusions, but method D was much more consistent, probably due to its constant step size value. These statements are also applicable to methods B and C, with method B being the more consistent for similar reasons. The percentages reveal that the standard deviations are not very important in regards to accuracy, as A has the most solutions below 1.0mm and 2.0mm yet the second highest standard deviation.
Thus the standard deviation should only be used as a measure of the consistency of the approach.

6.3.2 Thresholds

For the analysis of different thresholds, method B was utilized. It provides natural stages for refining the threshold after each hierarchical level. The more simplistic nature of method B, as compared to the dynamic step sizes of method A and the random offsets of simulated annealing, provides a more stable means for the independent judgment of the thresholds.

The types of segmentation error will influence the results obtained by using thresholds in the search process. Major errors can be best accommodated by the thresholding process. With a small threshold, outliers and large errors can be ignored. This allows closely matching features to be used optimally for fine tuning. This has been examined thoroughly elsewhere by Jiang et al. [41]. However, a problem may occur at the transition between the minor and major errors. If minor errors dominate, the threshold value must remain larger than the magnitude of these errors, otherwise the thresholding will be throwing away relevant information. In other words, the threshold should remain above the mean error in the segmentation. Although this cannot usually be calculated, it can often be estimated. This places a lower bound on a suitable threshold value.

The effect of a threshold below the mean segmentation error is shown in Figure 6.10, illustrating a Z rotation plot of the search space. The static surface is the example segmentation (from Figure 5.4) and the dynamic set is the example of minor error presented in Figure 6.1.

Figure 6.10: The Effect of Thresholding with Minor Error (DT Thresholds of 7, 5 and 4mm)

For this plot the three threshold values of 7, 5, and 4 reveal the effects.
As the threshold decreases below the magnitude of the minor error (between 5 and 4) the comparison function becomes less accurate, to the extreme where the true registration point becomes close to a maximum.

For a general assessment of threshold values, only random noise was added to each point in the dynamic surface. This provided minor error, but due to its random nature, thresholding would not eliminate too much data. On each of the 5 levels of the hierarchy various thresholds were compared. Testing was performed with a dynamic surface created by adding uniformly distributed random noise of ±2.0mm to the static surface, which was also affected by round-off errors after the simulated rigid body transformation. The results are analyzed in Table 6.10, with the 729 simulated transformations identical to those in Table 6.9. The resulting actual misalignment in the registered surfaces was calculated by transforming the noise-free surface by the simulated transformation, then using the registering transformation to align them, and finally computing the rms distance between corresponding points.

    Thresholds      d          σd         %d      %d      %d      %d
                    (rms mm)   (rms mm)   < 1.0   < 2.0   < 5.0   > 10.0
    8 6 5 3 2       0.90       1.13       88.8    97.5    98.9    0.55
    constant 7      0.90       0.82       86.3    97.0    99.2    0.14
    8 7 6 5 4       0.94       1.42       89.2    96.0    99.2    0.55
    max 12 8 3 2    1.01       1.46       88.5    94.9    97.7    0.67
    max 10 4 2 2    1.09       2.04       88.9    95.5    96.8    1.51
    none            1.48       2.08       56.8    88.6    96.2    1.23

    Averages computed from 729 trials.

    Table 6.10: General Comparison of Threshold Values With Noise

These results indicate that the use of thresholding can allow the search to more precisely determine the correct registering transformation. The constant threshold (7) provided close to the lowest misalignment error but provided fewer solutions within 1.0mm than all other threshold options.
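The thresholded comparison function can be sketched as follows (an illustrative form: distances above the current level's threshold are simply dropped from the rms):

```python
import numpy as np

def thresholded_rms(distances, threshold):
    """rms of the nearest-neighbour distances after discarding any
    distance above the threshold, so that outliers and major
    segmentation errors cannot dominate the comparison function."""
    d = np.asarray(distances, dtype=float)
    kept = d[d <= threshold]
    if kept.size == 0:
        return float(threshold)     # nothing under threshold: no evidence
    return float(np.sqrt(np.mean(kept ** 2)))
```

Lowering the threshold below the mean segmentation error makes this function discard valid correspondences, which is the failure illustrated in Figure 6.10.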
These results are feature dependent and only indicate that multi-level thresholding can increase the accuracy.

6.3.3 Hierarchies of Scale and Thresholding

An analysis of thresholding and the use of multiple scales was performed by running the different algorithms with and without the use of each. The data used was again the standard PET data surface used throughout this section. The results for each method are provided in Table 6.11. These tests were run with uniform random noise of ±3.0mm in all dimensions, in addition to the round-off error mentioned previously, on the whole surface. Only 60% of the two matching surfaces was identical. This was done by taking the first 70% of the points (taken top to bottom and left to right from the data set) from the base surface for the dynamic set and excluding the first 10% of the surface for the static set. This data was again formed from 729 trials with the same transformations as used previously.

    Method   Multiple   d (mm)               σd (mm)
             Scales     Thresholds  None     Thresholds  None
    A        Yes        1.78        2.14     1.14        1.21
             No         1.72        2.49     0.54        1.49
    B        Yes        2.05        2.43     1.51        1.13
             No         1.97        3.00     0.75        2.14
    C        Yes        2.37        2.63     1.43        1.19
             No         2.24        3.16     0.80        2.02
    D        Yes        1.81        2.21     1.11        0.83
             No         1.75        2.59     0.59        1.70

    Averages computed from 729 trials.
    Scales: Yes = {0.1, 0.2, 0.4, 0.8, 1.0}  No = 0.5
    Thresholds: Yes = {7, 6, 5, 3, 2}  No = none

    Table 6.11: Comparison of Thresholding and Hierarchies

The accuracies are further compared in Figures 6.11, 6.12, 6.13, and 6.14, showing the percentage of trials that converge to within specified ranges of the optimal solution. The solid line represents the use of thresholding and multiple scales, the dashed only thresholding, the dotted only multiple scales, and the dotted-and-dashed neither thresholding nor multiple scales.

The histograms for method B utilizing thresholding in conjunction with multiple scales are shown in Figures 6.15 and 6.16.
The non-normal distribution of actual errors is revealed in the histogram of di values (Figure 6.15). The histogram in Figure 6.16 shows the minimum values of the comparison function and reveals a somewhat random distribution of final values, indicating that the differences between a good and a poor registration in the comparison function are not very distinct. Therefore it would be difficult to determine from the comparison function alone whether the final solution is very accurate.

Figure 6.11: Accuracy of Method A    Figure 6.12: Accuracy of Method B

The results of this table and the figures indicate strongly that the use of thresholding increases the accuracy of the found solution. With every method the use of thresholds reduced the average distance error in the solutions. The use of thresholding reduced the standard deviation when multiple scales were not used, but increased it for methods B, C, and D when they were used. This may be due to the fact that the changes between levels, in both scale and threshold, destabilize the search. The results are dependent on the single set of surfaces, but similar results were observed with various levels of random noise. It must be pointed out that random noise is different from the minor error most commonly found in segmentations, in that even small thresholds will not exclude accurate data. Therefore in real applications care must be taken to set the smallest threshold above the mean segmentation error. Major errors are usually handled appropriately by thresholding [41].

The analysis of using multiple scales does not provide a clear judgment. The features did not contain many frequency levels, which may be why there was no increase in accuracy. Further analysis with other features would be required to make a decision on this question.
The use of the scales does provide much faster searches at the initial levels (due to fewer points). This is very useful if a large number of searches will be carried out. If most of the searches are terminated early, there will be significant time savings from the use of multiple scales.

Figure 6.13: Accuracy of Method C    Figure 6.14: Accuracy of Method D

6.3.4 Distribution of Starting Points

The search methods are all initiated at specific starting points. The human operator may provide the starting point, or some automatic strategy may be used. The proximity of the initial transformation estimate to the optimal solution will affect the search's success.

For this reason, an analysis of various starting positions was carried out with method B. The results are provided in Figure 6.17. For each of the data points five hundred searches were performed with rotational starting positions placed randomly within the stated range (0-X). This was performed with the same two surfaces as used previously, having 60% overlapping regions and ±2.0mm added random noise.

The percentage of these starts that converged to within actual point correspondence of 1.0 and 2.0mm rms is charted. This reveals the increased chance of being trapped in local minima the further away from the true registration point the search begins. These effects will be amplified with more distorted features.

Figure 6.15: Histogram of Actual Error Testing Method B (d = 2.05, σd = 1.51)

6.3.5 Multiple Searches

Multiple starting points can be incorporated in many ways. The simplest is the repetition of complete searches started at different positions. From the results provided in the previous subsection, this has a high probability of increasing the accuracy of the result.
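This simplest scheme, repeating the complete search from a grid of initial rotations and keeping the lowest final comparison-function value, can be sketched generically (assuming a `search` callable that runs one complete registration from an initial rotation guess):

```python
from itertools import product

def multi_start_search(search, angles=(-20.0, 0.0, 20.0)):
    """Run `search` (a callable mapping an initial (rx, ry, rz) guess
    to a (cost, transform) pair) from every combination of starting
    rotations -- 3**3 = 27 starts here -- and keep the best result."""
    best_cost, best_transform = float("inf"), None
    for start in product(angles, repeat=3):
        cost, transform = search(start)
        if cost < best_cost:
            best_cost, best_transform = cost, transform
    return best_cost, best_transform
```

Terminating the worst starts at early hierarchical levels, as suggested below, would reduce the 27-fold cost.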
This is because by using more starting points you increase the odds of one being closer to the true solution, so there is less chance of becoming trapped in a local minimum.

A sample test of this was performed for 27 searches using Method B for each of the 729 simulated registrations. The searches were started with initial rotations of -20, 0, and 20 degrees for each axis, and with no variation in translational starting values. The surfaces had ±2.0mm random noise in each direction and 60% overlapping regions. Results are given in Table 6.12.

Number of Searches   d (rms mm)   σd (rms mm)   %d < 1.0   %d < 2.0
 1                   2.05         1.51           4.12      64.75
27                   1.66         0.52           5.90      78.33
Averages computed from 729 trials.

Table 6.12: Analysis of Multiple Searches

Figure 6.16: Histogram of the Comparison Function Testing Method B

The table indicates that the use of multiple searches is an easy way to increase accuracy. The price is the multiplication of the search time. This could be reduced by terminating some of the worst searches at early hierarchical levels. In this way enough searches can be started throughout the search space that no user-specified starting position or estimate is required. The computation will be wasted if the searches converge to the same basins, but this is difficult to know beforehand as the space is feature dependent.

6.4 Simulations: Transforming Real Data

In this section we examine simulations that transformed an entire data set and then resampled it. This was done to create new data sets to be registered. The segmentation phase was then performed on each data set, as would be the case with real data.

Using the registration transformation, the dynamic data is then resampled along slice planes identical to those of the static data. This allowed visual verification techniques to be utilized. The WiT Igraph for this process is shown in Figure 6.18.

For these simulations method D was used in conjunction with multiple thresholding and scale levels.
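Such a coarse-to-fine schedule of thresholds and scales can be sketched as follows. The level values are the ones used in these simulations; the search itself is a stand-in, since any local search with the assumed signature refine(points, threshold, estimate) could be plugged in.

```python
import numpy as np

# Level schedule for the simulations: scales are the fraction of dynamic
# surface points used at each level, thresholds are in mm.
SCALES = (0.1, 0.2, 0.4, 0.8, 1.0)
THRESHOLDS = (7.0, 6.0, 5.0, 4.0, 2.0)

def hierarchical_search(points, refine, estimate):
    """Run the search level by level, coarse to fine, feeding each level's
    result in as the next level's starting estimate (illustrative sketch)."""
    for scale, threshold in zip(SCALES, THRESHOLDS):
        step = max(1, int(round(1.0 / scale)))  # scale 0.1 -> every 10th point
        estimate = refine(points[::step], threshold, estimate)
    return estimate
```

The subsampling is what makes early levels cheap, so poor searches can be discarded after the first levels at little cost.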
It was chosen because of its superior performance in many of the previous studies. The threshold values were the ones used earlier: 7, 6, 5, 4, 2, and the scales were 0.1, 0.2, 0.4, 0.8, 1.0 for the respective levels. The results of some of these simulations are summarized in Table 6.13.

Figure 6.17: Search Sensitivity to Initial Parameters
(percentage of successful searches versus starting rotation range, degrees)

Due to the nature of the simulations, any large rotations about the x and y axes result in a clipping of the data which composes the surface used for registration. For this reason the rotations were kept small for these variables. The results of these tests show that the method performed quite well considering the circumstances. The clipping of the surface just mentioned creates major segmentation errors, as can be seen in Figure 6.20. An accuracy whereby the surfaces are off by an average of one voxel per point results in an average error of 2.45mm, and for an average of two voxels this becomes an error of 4.90mm. These values provide a context for judging the results, although they do not provide a clear-cut evaluation of whether the method's performance was acceptable.

To provide a better picture of these simulations, the data representing the first of the simulated trials and its segmentation are provided. The static data is the PET scan shown previously in Figure 5.3 and its segmentation in Figure 5.4. The simulated data and its segmentation are shown in Figures 6.19 and 6.20.

Figure 6.18: A WiT Igraph for the Registration of Simulated Data

The results of registration are shown in Figure 6.21, in which the simulated data has been resampled along slice planes identical to the original using the registering transformation. An image in which the registered and static data sets are subtracted to reveal differences is provided in Figure 6.22.
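The simulation step of this section, rigidly transforming a volume and resampling it, can be sketched as below. This is a simplified stand-in for the thesis procedure: only a z-axis rotation plus translation is shown, nearest-neighbour sampling is used, voxels are treated as isotropic, and the function name is an assumption.

```python
import numpy as np

def rigid_resample(volume, angle_deg, shift):
    """Create a 'dynamic' data set from a 'static' one by rotating about
    the z-axis and translating, with inverse-mapped nearest-neighbour
    sampling.  Out-of-volume samples are filled with zero, which produces
    the clipping of the surface discussed in the text."""
    vol = np.asarray(volume, dtype=float)
    shift = np.asarray(shift, dtype=float)
    out = np.zeros_like(vol)
    centre = (np.array(vol.shape) - 1) / 2.0       # rotate about the centre
    th = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(th), -np.sin(th), 0.0],
                    [np.sin(th),  np.cos(th), 0.0],
                    [0.0,         0.0,        1.0]])
    inv = rot.T                                     # inverse of a pure rotation
    for idx in np.ndindex(vol.shape):
        src = inv @ (np.array(idx) - centre - shift) + centre  # inverse mapping
        src = np.rint(src).astype(int)
        if all(0 <= s < n for s, n in zip(src, vol.shape)):
            out[idx] = vol[tuple(src)]
    return out
```

Inverse mapping (looping over output voxels and sampling the input) avoids the holes that forward mapping would leave in the resampled volume.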
The subtracted image is a signed image, so the bright and dark regions represent the most error; note also the relative differences apparent in a circle outside the brain. This is created by the variations in the inherent background noise alone. The image values have also been scaled to amplify the differences for visual clarity, and the highly visible differences are due to the resampling and clipping of the dynamic data set during this simulation.

Translation (mm)       Rotation (°)      d          σd         Comparison Metric
  x     y     z        x    y    z      (rms mm)   (rms mm)   (rms mm)
 -4    -8    -1.7      8    0    15     2.96       0.54       1.28
  5    -9    -2.3      3   -6    10     2.18       0.29       0.91
  3    12     3.8     -8    2    -6     2.58       0.20       0.87
 -7.4 -15     3.1     -2    9   -10     3.80       0.76       0.89
 -2.3   4.6  -1.7      1   -3     5     1.85       0.64       0.78
Dynamic scan created by resampling the static scan.

Table 6.13: Analysis of Simulations Using Transformed Real Data

6.5 Patient Data Studies

The methods were tried on a number of patient data sets requiring registration. In these cases the exact transformation is not known, so the results had to be checked visually. The results of these studies can only be verified to be within approximately 3 degrees for rotation and 2 pixels for translation, as revealed visually [73].

6.5.1 Registering to Norms

The registration of an individual patient's data to a "normal" patient can provide an improved juxtaposition when comparing the two scans. This can help to reveal abnormal variations and improve the visualization of relative positions. For these reasons the registration of PET data of different patients was performed.

The data registration was performed on numerous data sets, and the results of these tests were promising. The variation between patients precluded accurately quantifying the error. The patients' brain sizes, as well as various levels of radiopharmaceutical uptake, provided both minor and major errors.
When the segmentations accurately delineated the brain surface, it was possible to register the data into a common position.

For these tests method D was utilized. The spatial scales 0.1, 0.2, 0.4, 0.8, 1.0 were used with thresholds from a maximum of 7mm down to a minimum of 4mm. The larger minimum value was used to compensate for the larger anatomical differences present between people. A step size of 8° was used for all rotations, and 8mm for all translations. The use of an 8-voxel border was also incorporated.

Figure 6.19: A Simulated Scan    Figure 6.20: Segmented (Dynamic) Surface

The results of two cases are provided in Figures 6.23 & 6.24 and 6.25 & 6.26. These two data sets were registered to the normal patient used throughout the testing process (see Figures 5.3 and 5.4). For the first data set the optimal registering transformation required was -2, 0.5, 3.5 for the x, y, and z translations respectively, and no rotations. The second data set required -2.5, -7.5, -4 for translation, and -16 degrees rotation about the x-axis. In comparing the normal data to the registered ones, the ventricles and mid-plane can be seen to be in close alignment, despite the anatomical variations.

6.5.2 MRI-MRI Registrations

The registration of two MRI scans was performed. The two scans were already registered: the T1 and T2 weightings of a single MRI scanning session. Because of this the resulting transformation should be the identity (0 translation and 0 rotation), at least if an accurate segmentation of each scan had been provided.

Figure 6.21: Result of Registration    Figure 6.22: Difference of Static and Registered Sets

The first scan and its segmentation are shown in Figures 6.27 and 6.28, and the second scan and its segmentation in Figures 6.29 and 6.30. The scans are axial views through the top of the head. The 20 slices have inter-voxel distances of roughly 1mm in-plane and 8mm inter-slice.
This illustrates the problem with largely disproportionate spacing of voxels discussed in Section 6.2.3 on the nature of the search space. It is revealed in the resulting registrations, in which various solutions to the problem were compared. They all used the same search configurations of method D as the registrations in Section 6.5.1; the only variation is that a minimum threshold value of 2.0mm was used. The results are summarized in Table 6.14.

Figure 6.23: PET Scan "X"    Figure 6.24: "X" Registered to Normal

The interpolation of the data required added time for calculation as well as segmentation. Feature interpolation is possible but was not available. The interpolation of the static surface doubles the size of the DT, and so is computationally expensive. Similarly, the interpolated dynamic surface increased search time, although it would be possible to use reduced fractions of points to eliminate this added time. The interpolation of the DT itself for only the z dimension requires little added computation, and no increase in pre-computation. For this reason, as well as the fact that it provided the most accurate registration, it is probably the best solution.
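The z-only interpolation of the DT can be sketched as a lookup that blends the two neighbouring slices. This is an illustrative sketch, not the thesis code: the function name is an assumption, in-plane coordinates are simply rounded, and only the fractional z position is interpolated linearly.

```python
import numpy as np

def dt_lookup_zinterp(dt, x, y, z):
    """Look up a distance-transform value at a fractional z position by
    linearly interpolating between the two neighbouring slices.  Only z
    is interpolated, mirroring the cheap z-only DT interpolation that
    needs no extra pre-computation."""
    xi, yi = int(round(x)), int(round(y))
    z0 = int(np.floor(z))
    z1 = min(z0 + 1, dt.shape[2] - 1)  # clamp at the last slice
    t = z - z0
    return (1.0 - t) * dt[xi, yi, z0] + t * dt[xi, yi, z1]
```

Because the lookup is done at search time, the DT itself stays at its original size, unlike interpolating the static surface before building the DT.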
It can be utilized for increased accuracy even when the data is isotropic.

DT Technique   Inter-slice Distance           Transformation
               Static (mm)   Dynamic (mm)     Translation (mm)       Rotation (°)
                                              x     y     z          x     y     z
normal         8.0           8.0              0.5   0.0   2.75       0     0     0
normal         4.0           8.0             -0.5   0.5   0.0       -0.5   0    -0.5
normal         8.0           4.0              0.0   1.0  -2.0        0     0     0
normal         4.0           4.0              0.0   1.0  -2.0        0     0     0
interpolated   8.0           8.0              0.0   1.0   0.0        0    -0     0
Inter-slice distances of 4.0mm were created by interpolating the data and then segmenting.

Table 6.14: Analysis of Interpolation Techniques

Figure 6.25: PET Scan "Y"    Figure 6.26: "Y" Registered to Normal
Figure 6.27: T1 Weighted MRI Data    Figure 6.28: T1 Weighted MRI Segmentation
Figure 6.29: T2 Weighted MRI Data    Figure 6.30: T2 Weighted MRI Segmentation

Chapter 7: Conclusions and Future Work

Thinking reed. - It is not from space that I must seek my dignity, but from the ordering of my thought. I would gain nothing by possessing lands: by space, the universe encompasses and swallows me up like a point; by thought, I comprehend it.
- B. Pascal, Pensées, Appendix to Chapter XXIII

The analyzed implementations provided a good method for the registration of multimodality medical images. Surface matching allows data sets in which no highly discernible corresponding points exist to be matched. Tolerance of small errors, through the averaging of a large number of points, can allow segmentation without the expert knowledge of a radiologist. These results were experimental, and a large amount of work remains in extending and optimizing the implemented procedures.

7.1 Experimental Conclusions

The segmented surface features can be compared efficiently and simply with the use of an anisotropic chamfer distance transform. This provides accurate results as long as the transformed feature remains entirely within the discrete distance transform.
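The chamfer distance transform and the comparison value it supports can be sketched in two dimensions as follows. This is an illustrative reconstruction, not the thesis code: the thesis uses an anisotropic 3-D version, and the function names are assumptions. The classic two-pass 3-4 chamfer approximation is shown.

```python
import numpy as np

def chamfer_dt_2d(feature_mask):
    """Two-pass 3-4 chamfer distance transform (2-D sketch of the idea).
    Distances are in units of 1/3 pixel: divide by 3 for an approximate
    Euclidean distance."""
    big = 10 ** 6
    h, w = feature_mask.shape
    d = np.where(feature_mask, 0, big).astype(np.int64)
    for y in range(h):                       # forward pass (top-left sweep)
        for x in range(w):
            if y > 0:
                d[y, x] = min(d[y, x], d[y - 1, x] + 3)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y - 1, x - 1] + 4)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y - 1, x + 1] + 4)
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x - 1] + 3)
    for y in range(h - 1, -1, -1):           # backward pass (bottom-right sweep)
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y, x] = min(d[y, x], d[y + 1, x] + 3)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y + 1, x - 1] + 4)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y + 1, x + 1] + 4)
            if x < w - 1:
                d[y, x] = min(d[y, x], d[y, x + 1] + 3)
    return d

def rms_fit(dt, points):
    """Comparison value: rms of DT lookups at the (transformed) dynamic
    surface points.  Zero only when every point lands on the static surface."""
    vals = np.array([dt[p] for p in points], dtype=float) / 3.0
    return float(np.sqrt(np.mean(vals ** 2)))
```

Pre-computing the DT once for the static surface is what makes each evaluation of the comparison function a set of cheap lookups rather than nearest-point searches.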
With this comparison function, and the implemented gradient descent methods, the registering transformation can be calculated.

These results were checked for accuracy through numerous experiments using both simulated data and actual patient data. The simulation experiments used numerically based verification, whereas various visual tools were used for the patient data verification. The fully automatic procedure usually succeeded, with errors below the visual threshold.

The use of multiple starting points was not examined thoroughly enough to determine precise guidelines for their use. The feature-dependent nature of the search space necessitates the use of some higher-level techniques for a more accurate understanding. Sparse approximations of the search space can be calculated, and if the space is well behaved, a case-specific distribution of starting points can be determined.

Analyses of multiple spatial scales and thresholds were not rigorous enough to provide all-encompassing generalizations. Abstractions of this type are very case dependent and should be used with caution. Multiple thresholds resulted in more accurate registrations in the simulations, and this should also occur with real data. The effects of using multiple scales were not as obvious, but they do provide an efficient way to manage a large number of searches. The initial stages of a hierarchical search can be performed efficiently on the lower resolution data with little or no degradation in accuracy. It may be possible to analyze the selected features to provide appropriate calibration of the threshold and scale levels.

7.2 Extensions and Future Development

Although the search methods provided good results, there exist many areas for improvement and other work. These are possible routes that future work might take:

1. Numerous other techniques could possibly be used to incorporate knowledge of the problem into the search heuristic.

2.
It would be possible to modify the implementation to provide a more efficient method in terms of computer time and memory requirements. Trading accuracy for faster performance would require justification, and should be studied in further detail.

3. A more quantitative analysis of the discussed gradient search methods and modifications requires the establishment of a performance benchmark. This could possibly be provided by the use of a numerical analysis minimization procedure such as the FORTRAN FSQP [101] routine. This mathematical library subroutine solves constrained non-linear systems, and would provide a good comparison for the optimization routines discussed here.

4. A rigorous comparison of surface matching to other current techniques could be performed. This should provide guidelines for determining which method or methods are suitable for individual cases, and their chance of success. The use of different methods would also provide a very good check on the accuracy, which is still extremely difficult to measure in actual cases.

7.3 Applications of the Implemented Techniques

Future work will lie in the practical application of these methods in a medical research environment. This will provide further testing with real data. More diverse applications will also provide insights into the optimization and modifications needed to promote these techniques in a clinical setting.

The tools and methods used in these implementations can be utilized for further research. Various research groups are interested in registration for aiding and improving their work. The simplicity of the techniques and the use of the WiT environment should allow easy application development.

One application is the registration of SPECT and CT to help in the quantitative analysis of SPECT data. This would facilitate the work of a research group at Vancouver General Hospital.
The added information from the registered data could aid in forming more accurate reconstructions and in segmenting various regions.

Another application is the registration of PET scans of different patients to a normal patient to improve the analysis and visualization of variations. Research is being performed on averaging normal patients to further develop criteria for detecting abnormal variations. This could benefit from the registration of different normal patients into one "standard" position to reduce the variance in the average data set.

After future progress with the research applications, the registration may be utilized in a clinical setting. The registration of CT, MRI, SPECT, or PET could aid in a variety of ways. British Columbia's Children's Hospital is one site where the use of registered data could provide help in patient diagnosis and data analysis.

7.4 Summary

The registration techniques developed here provide a toolkit for registering multimodality data, but the segmentation tools required for this type of application must still be developed further. The limitations of surface matching were analyzed, as well as the performance of various heuristic modifications. This resulted in a specification of criteria for setting the various parameters associated with the registration process, and a more thorough understanding of their effects. The registration problem is still far from being solved. This thesis provides a simple and robust method for registration that can be expanded for use in many different areas.

Bibliography

[1] Programme AIM. First versions of SW tools for multimodality image registration: WP 1.2.1 - software tool for registration by stereotactic fiducial markers, WP 1.2.2 - software tool for landmark based registration, WP 1.2.3 - basic SW tool for registration of crisp surfaces. Technical report, Commission of the European Communities DG XIII, 1992. Project A2003, COVIRA, Deliverable D2/1.2T.

[2] Programme AIM.
Specification reports for multimodality image registration. Technical report, Commission of the European Communities DG XIII, June 1992. Project A2003, COVIRA, Deliverable D1/1.2.

[3] N.C. Andreasen, editor. Brain Imaging: Applications in Psychiatry. American Psychiatric Press, Inc., Washington, DC, 1989.

[4] N.C. Andreasen, G. Cohen, G. Harris, T. Cizadlo, J. Parkkinen, K. Rezai, and V.W. Swayze. Image processing for the study of brain structure and function: Problems and programs. Journal of Neuropsychiatry, 4(2):125-133, 1992.

[5] T. Arden and J. Poon. WiT User's Guide. Logical Vision Ltd., Suite 108, 3700 Gilmore Way, Burnaby, B.C., Canada, 1993.

[6] D. Baraff. Rigid body simulation. Lecture Notes for SIGGRAPH '92 Course - An Introduction to Physically Based Modeling, pages H1-H30, July 1992.

[7] G.T. Bartoo and W.A. Hanson. Multi-modality image registration using centroid mapping. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 11(2):550-551, 1989.

[8] R.H.T. Bates, K.L. Garden, and T.M. Peters. Overview of computerized tomography with emphasis on future developments. Proceedings of the IEEE, 71(3):356-372, March 1983.

[9] L.M. Bidaut. Accurate registration of various medical imaging modalities: an application to PET and MRI for anatomical localization and kinetic modelling. IEEE Melecon, pages 1233-1237, 1991.

[10] G. Borgefors. Distance transformations in arbitrary dimensions. Computer Vision, Graphics, and Image Processing, 27:321-345, 1984.

[11] G. Borgefors. Distance transformations in digital images. Computer Vision, Graphics, and Image Processing, 34:344-371, 1986.

[12] G. Borgefors. Hierarchical chamfer matching: A parametric edge matching algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(6):849-865, November 1988.

[13] E. Bouts. A fast, error free, squared, euclidean distance transform.
In VIP '93 International Conference on Volume Image Processing, pages 47-49, Utrecht, The Netherlands, June 1993.

[14] L.G. Brown. A survey of image registration techniques. ACM Computing Surveys, 24(4):325-376, December 1992.

[15] G.T.Y. Chen, C.A. Pelizzari, and D.N. Levin. Image correlation in oncology. In V.T. DeVita, S. Hellman, and S.A. Rosenberg, editors, Important Advances in Oncology 1990, pages 131-141. J.B. Lippincott Co., Philadelphia, U.S.A., 1990.

[16] K. Chinzei, T. Dohi, T. Horiuchi, Y. Ohta, M. Suzuki, Y. Yamauchi, D. Hashimoto, and M. Tsuzuki. Quantitative integration of multimodality medical images. In SPIE Proceedings - Visualization in Biomedical Computing 1992, volume 1808, pages 187-195, Chapel Hill, North Carolina, USA, 1992.

[17] A.V. Cideciayan, S.G. Jacobson, C.M. Kemp, R.W. Knighton, and J.H. Nagel. Registration of high resolution images of the retina. In SPIE - Medical Imaging VI: Image Processing, volume 1652, pages 310-322, 1992.

[18] A. Collignon, D. Vandermeulen, P. Suetens, and G. Marchal. Registration of 3D multimodality medical images using surfaces and point landmarks. In VIP '93 International Conference on Volume Image Processing, pages 27-30, Utrecht, The Netherlands, June 1993.

[19] R. Dann, J. Hoford, S. Kovacic, M. Reivich, and R. Bajcsy. Evaluation of an elastic matching system for anatomic (CT/MR) and functional PET cerebral images. Journal of Computer Assisted Tomography, 13(4):603-611, July/Aug 1989.

[20] H. Ehricke, G. Daiber, R. Sonntag, S. Wolfgang, M. Lochner, L. Schad, and W. Lorenz. Interactive 3D-graphics workstations in stereotaxy: Clinical requirements, algorithms and solutions. In SPIE Proceedings - Visualization in Biomedical Computing 1992, volume 1808, pages 548-558, Chapel Hill, North Carolina, USA, 1992.

[21] P.J. Ell. Nuclear medicine. Postgraduate Medical Journal, 68:82-105, 1992.

[22] T.T. Elvins. A survey of algorithms for volume visualization. Computer Graphics, 26(3):194-201, August 1992.

[23] N.T.S. Evans.
Combining imaging techniques. Clinical Physics and Physiological Measurement, 11(A):97-102, 1990.

[24] T.L. Faber and E.M. Stokely. Orientation of 3-D structures in medical images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(5):626-633, September 1988.

[25] J.D. Foley, A. van Dam, S.K. Feiner, and J.F. Hughes. Computer Graphics Principles And Practice. Addison-Wesley Publishing Company, Inc., New York, 1990.

[26] N.D. Gershon. Image understanding and registration of positron emission tomography (PET) images and magnetic resonance (MR) images. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 13(3):1321-1322, 1991.

[27] N.D. Gershon. Registration and visualization of three-dimensional magnetic resonance (MR) and positron emission tomography (PET) images. Annals of Biomedical Engineering, 19(5):611, 1991.

[28] N.D. Gershon. Three-dimensional registration of magnetic resonance (MR) and positron emission tomography (PET) images. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 13(1):217-218, 1991.

[29] N.D. Gershon, J.D. Cappelletti, S.C. Hinds, and M.E. Glenn. Image understanding, visualization, and registration of magnetic resonance (MR) and positron emission tomography (PET) images. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 12(3):217-218, 1990.

[30] D.C. Giancoli. General Physics. Prentice-Hall, Englewood Cliffs, NJ, 1984.

[31] S. Hampshire. The Age of Reason. Penguin Books Canada Ltd., Markham, Ontario, 1956.

[32] C.J. Henri, A. Cukiert, D.L. Collins, A. Olivier, and T.M. Peters. Towards frameless stereotaxy: Anatomical-vascular correlation and registration. In SPIE Proceedings - Visualization in Biomedical Computing 1992, volume 1808, pages 214-224, Chapel Hill, North Carolina, USA, 1992.

[33] L.S. Hibbard, T.L. Arnicar-Sulze, B.J. Dovey-Hartman, and R.B. Page.
Computed alignment of dissimilar images for three-dimensional reconstructions. Journal of Neuroscience Methods, 41(2):133-152, 1992.

[34] D.L.G. Hill, S.E.M. Green, J.E. Crossman, D.J. Hawkes, G.P. Robinson, C.F. Ruff, T.C.S. Cox, A.J. Strong, and M.J. Gleeson. Visualisation of multi-modal images for the planning of skull base surgery. In SPIE Proceedings - Visualization in Biomedical Computing 1992, volume 1808, pages 564-573, Chapel Hill, North Carolina, USA, 1992.

[35] W.S. Hinshaw and A.H. Lent. An introduction to NMR imaging: from the bloch equation to the imaging equation. Proceedings of the IEEE, 71(3):338-350, March 1983.

[36] K.H. Hohne, M. Bomans, M. Riemer, R. Schubert, U. Tiede, and W. Lierse. A volume-based anatomical atlas. IEEE Computer Graphics & Applications, pages 72-78, July 1992.

[37] K.H. Hohne and W.H. Hanson. Interactive 3D segmentation of MRI and CT volumes using morphological operations. Journal of Computer Assisted Tomography, 16(2):285-294, 1992.

[38] B.L. Holman, R.E. Zimmerman, K.A. Johnson, P.A. Carvalho, R.B. Schwartz, J.S. Loeffler, E. Alexander, C.A. Pelizzari, and G.T.Y. Chen. Computer-assisted superimposition of magnetic resonance and high-resolution technetium-99m-HMPAO and thallium-201 SPECT images of the brain. Journal of Nuclear Medicine, 32(8):1478-1484, 1991.

[39] B.K.P. Horn. Robot Vision. The MIT Press, Cambridge, Massachusetts, 1986.

[40] R.J. Jaszczak. Tomographic radiopharmaceutical imaging. Proceedings of the IEEE, 76(9):1079-94, September 1988.

[41] H. Jiang and R. Robb. Image registration of multimodality 3D medical images by chamfer matching. In SPIE - Biomedical Image Processing and Three-Dimensional Microscopy, volume 1660-35, pages 356-366, San Jose, CA, USA, 1992.

[42] H. Jiang, R.A. Robb, and K.S. Holton. A new approach to 3-D registration of multimodality medical images by surface matching.
In SPIE Proceedings - Visualization in Biomedical Computing 1992, volume 1808, pages 196-213, Chapel Hill, North Carolina, USA, 1992.

[43] B. Johnston. The segmentation of medical data. Master's thesis, The University of British Columbia, Department of Computer Science, 1993.

[44] R.D. Jones and J.R. MacFall. Computers in magnetic resonance imaging. Computers in Physics, 2(5):25-30, Sept-Oct 1988.

[45] T. Jones. Positron emission tomography. Clinical Physics and Physiological Measurement, 11(A):27-36, 1990.

[46] I. Kapouleas. Error analysis of 3D registration of brain MR and PET images using the interhemispheric fissure. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 13(1):227-228, 1991.

[47] I. Kapouleas, A. Alavi, W.M. Alves, R.E. Gur, and D.W. Weiss. Registration of three-dimensional MR and PET images of the human brain without markers. Radiology, 181(3):731-739, 1991.

[48] M.L. Kessler, S. Pitluck, P. Petti, and J.R. Castro. Integration of multimodality imaging data for radiotherapy treatment planning. International Journal of Radiation Oncology, Biology, Physics, 21(6):1653-67, Nov 1991.

[49] S. Kirkpatrick, C.D. Gelatt Jr., and M.P. Vecchi. Optimization by simulated annealing. Science, 220(4598):671-680, May 1983.

[50] S. Kovacic, J.C. Gee, W.S.L. Ching, M. Reivich, and R. Bajcsy. Three-dimensional registration of PET and CT images. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 11(2):548-549, 1989.

[51] E.L. Kramer and M.E. Noz. CT-SPECT fusion for analysis of radiolabeled antibodies: Applications in gastrointestinal and lung carcinoma. Nuclear Medicine and Biology, 18(1):27-42, 1991.

[52] L. Kreel. Medical imaging. Postgraduate Medical Journal, 67:334-36, 1991.

[53] J.L. Lancaster and G.D. Fullerton. Physics and medicine: Imaging the body. Computers in Physics, 2(5):16-22, Sept-Oct 1988.

[54] S.U. Lee and S.Y. Chung.
A comparative performance study of several global thresholding techniques for segmentation. Computer Vision, Graphics, and Image Processing, 52:171-190, 1990.

[55] D.N. Levin, C.A. Pelizzari, G.T.Y. Chen, C. Chen, and M.D. Cooper. Retrospective geometric correlation of MR, CT, and PET images. Radiology, 169:817-823, 1988.

[56] M. Levoy. A hybrid ray tracer for rendering polygon and volume data. IEEE Computer Graphics & Applications, pages 33-40, March 1990.

[57] M. Levoy. Volume rendering using the fourier projection-slice theorem. In Proceedings Graphics Interface '92, pages 61-69, Vancouver, BC, CAN, May 1992.

[58] Y. Li. Orientation-Based Representations of Shape and Attitude Determination. PhD thesis, The University of British Columbia, Department of Computer Science, April 1993.

[59] J.J. Little. Recovering Shape and Determining Attitude from Extended Gaussian Images. PhD thesis, The University of British Columbia, Department of Computer Science, April 1985.

[60] D.T. Long, M.A. King, and B.C. Penney. Comparative evaluation of image segmentation methods for volume quantitation in SPECT. Medical Physics, 19(2):483-489, March-April 1992.

[61] G.Q. Maguire Jr., M.E. Noz, E.M. Lee, and J.H. Schimpf. Correlation methods for tomographic images using two- and three-dimensional techniques. In S.L. Bacharach, editor, Information Processing in Medical Imaging, pages 266-279. Martinus Nijhoff, Dordrecht, The Netherlands, January 1986.

[62] G.Q. Maguire Jr., M.E. Noz, H. Rusinek, J. Jaeger, E.L. Kramer, J.J. Sanger, and G. Smith. Graphics applied to medical image registration. IEEE Computer Graphics & Applications, 11(2):20-8, March 1991.

[63] G. Malandain and J. Rocchisani. Matching of 3D medical images with a potential based method. Rapports de Recherche - Unite de recherche inria-sophia antipolis, Programme 4 - Robotique, Image et Vision(1890):1-36, March 1993.

[64] O. Mealha, A.S. Pereira, and B.S. Santos. Data structures for multimodality imaging: concepts and implementation.
In SPIE - Biomedical Image Processing and Three-Dimensional Microscopy, volume 1660-37, pages 367-374, San Jose, CA, USA, 1992.

[65] J.M. Ortega and W.C. Rheinboldt. Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York, New York, 1970.

[66] D.W. Paglieroni. A unified distance transform algorithm and architecture. Machine Vision and Applications, 5:47-55, 1992.

[67] J.W. Peifer, E.V. Garcia, C.D. Cooke, J.L. Klein, R. Folks, and N.F. Ezquerra. 3D registration and visualization of reconstructed coronary arterial trees on myocardial perfusion distributions. In SPIE Proceedings - Visualization in Biomedical Computing 1992, volume 1808, pages 225-234, Chapel Hill, North Carolina, USA, 1992.

[68] C.A. Pelizzari, G.T.Y. Chen, D.R. Spelbring, R.R. Weichselbaum, and C. Chen. Accurate three-dimensional registration of CT, PET, and/or MR images of the brain. Journal of Computer Assisted Tomography, 13(1):20-26, January/February 1989.

[69] C.A. Pelizzari, A.C. Evans, P. Neelin, C.T. Chen, and S. Marrett. Comparison of two methods for 3D registration of PET and MRI images. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 13(1):221-223, 1991.

[70] C.A. Pellizzari and G.T.Y. Chen. Registration of multiple diagnostic imaging scans using surface fitting. In I.A.D. Bruinvas et al., editor, The Use of Computers in Radiation Therapy, pages 0-0. Elsevier Science Publishers B.V., North-Holland, 1987.

[71] C. Perault, A. Loboguerrero, J. Liehn, and F. Batteux. Automatic superposition of CT and SPET immunoscintigraphic images in the pelvis. In SPIE Proceedings - Visualization in Biomedical Computing 1992, volume 1808, pages 235-240, Chapel Hill, North Carolina, USA, 1992.

[72] A.M. Peters. Recent advances and future projections in clinical radionuclide imaging. The British Journal of Radiology, 63(750):411-429, June 1990.

[73] U. Pietrzyk, K. Herholz, and W. Heiss.
Three-dimensional alignment of functional and morphological tomograms. Journal of Computer Assisted Tomography, 14(1):51-59, 1990.

[74] M.J.D. Powell. An efficient method for finding the minimum of a function of several variables without calculating derivatives. The Computer Journal, 7:155-162, 1964.

[75] O. Ratib, L. Bidaut, H.R. Schelbert, and M.E. Phelps. A new technique for elastic registration of tomographic images. In SPIE: Medical Imaging II, volume 914, pages 452-455, 1988.

[76] S.J. Riederer. Recent advances in magnetic resonance imaging. Proceedings of the IEEE, 76(9):1095-1105, September 1988.

[77] N. Saeed and T.S. Durrani. A new MRI rotation algorithm for the registration of temporal images. IEEE International Conference on Acoustics, Speech, and Signal Processing, 3:1496-1499, May 1989.

[78] L.R. Schad, R. Boesecke, W. Schlegel, et al. Three dimensional image correlation of CT, MR, and PET studies in radiotherapy treatment planning of brain tumors. Journal of Computer Assisted Tomography, 11:948-54, 1987.

[79] C. Schiers, U. Tiede, and K.H. Hohne. Interactive 3D registration of image volumes from different sources. In Proceedings of the International Symposium CAR '89 - Computer Assisted Radiology, pages 667-670, Berlin (West), Federal Republic of Germany, 1989.

[80] W.J. Schroeder, J.A. Zarge, and W.E. Lorensen. Decimation of triangle meshes. Technical report, Corporate Research and Development, General Electric Company & ConSolve Inc., Schenectady, NY & Lexington, MA, 1992.

[81] P.G. Spetsieris, V. Dhawan, S. Takikawa, D. Margouleff, and D. Eidelberg. Imaging cerebral function. IEEE Computer Graphics & Applications, pages 15-26, January 1993.

[82] M.R. Stytz, G. Frieder, and O. Frieder. Three-dimensional medical imaging: Algorithms and computer systems. ACM Computing Surveys, 23(4):421-499, December 1991.

[83] T.S. Sumanaweera, G.H. Glover, T.O. Binford, and J.R. Adler. MR susceptibility distortion quantification and correction for stereotaxy.
In SPIE - Biomedical Image Processing and Three-Dimensional Microscopy, volume 1660-04, pages 30-43, San Jose, CA, USA, 1992.

[84] U. Tiede, K.H. Hoehne, M. Bomans, A. Pommert, M. Riemer, and G. Wiebecke. Investigation of medical 3D-rendering algorithms. IEEE Computer Graphics & Applications, pages 41-53, March 1990.

[85] G.R. Timmens, P.A. van den Elsen, F.J.R. Appelman, and M.A. Viergever. Landmark based registration of images: A comparison of algorithms. In VIP '93 International Conference on Volume Image Processing, pages 145-148, Utrecht, The Netherlands, June 1993.

[86] K.D. Toennies, G.T. Herman, and J.K. Udupa. Surface registration for the segmentation of implanted bone grafts. In Proceedings of the International Symposium CAR '89 - Computer Assisted Radiology, pages 381-385, Berlin (West), Federal Republic of Germany, 1989.

[87] G. Turk. TR006-92: Re-tiling of polygonal surfaces. Technical report, University of North Carolina at Chapel Hill, Department of Computer Science, 1992.

[88] P.A. van den Elsen, J.B.A. Maintz, E.J.D. Pol, and M.A. Viergever. Image fusion using geometrical features. In SPIE Proceedings - Visualization in Biomedical Computing 1992, volume 1808, pages 172-186, Chapel Hill, North Carolina, USA, 1992.

[89] P.A. van den Elsen, E.J.D. Pol, and M.A. Viergever. Medical image matching - a review with classification. IEEE Eng. Med. Biol., 0(0):000-000, 1992. (in press).

[90] M. van Herk and H. Kooy. Three-dimensional CT-CT, CT-MR, and CT-SPECT correlation using chamfer matching. In VIP '93 International Conference on Volume Image Processing, pages 25-26, Utrecht, The Netherlands, June 1993.

[91] M.W. Vannier. Computers in computer axial tomography. Computers in Physics, 2(5):39-43, Sept-Oct 1988.

[92] M.W. Vannier and D.E. Gayou. Automated registration of multimodality images. Radiology, 169(3):860-861, 1988.

[93] J.W. Wallis and T.R. Miller.
Three-dimensional display in nuclear medicine and radiology. Journal of Nuclear Medicine, 32(3):534-546, 1991.

[94] J. Wilhelms and A. Van Gelder. Octrees for faster isosurface generation. ACM Transactions on Graphics, 11(3):201-227, July 1992.

[95] G. Wolberg. Digital Image Warping. IEEE Computer Society Press, Los Alamitos, CA, 1990.

[96] R.P. Woods, S.R. Cherry, and J.C. Mazziotta. Rapid automated algorithm for aligning and reslicing PET images. Journal of Computer Assisted Tomography, 16(4):620-633, July/August 1992.

[97] R.P. Woods, J.C. Mazziotta, and S.R. Cherry. MRI-PET registration with automated algorithm. Journal of Computer Assisted Tomography, 17(4):536-546, July/August 1993.

[98] B. Wowra, L. Schad, W. Schlegel, and S. Kunze. Scopes of computer application in stereotactic neurosurgery. In Proceedings of the International Symposium CAR '89 - Computer Assisted Radiology, pages 297-302, Berlin (West), Federal Republic of Germany, 1989.

[99] G. Wyvill, C. McPheeters, and B. Wyvill. Data structure for soft objects. The Visual Computer, 2:227-234, 1986.

[100] T.S. Yoo, U. Neumann, H. Fuchs, S.M. Pizer, T. Cullip, J. Rhoades, and R. Whitaker. Direct visualization of volume data. IEEE Computer Graphics & Applications, pages 63-71, July 1992.

[101] J.L. Zhou and A.L. Tits. User's Guide for FSQP Version 3.2: A FORTRAN Code for Solving Constrained Nonlinear (Minimax) Optimization Problems, Generating Iterates Satisfying All Inequality and Linear Constraints. Electrical Engineering Department and Institute for Systems Research, University of Maryland, College Park, MD, USA, March 1993.

Glossary

anatomical data - Data which represents physical, or structural, quantities. These are often characteristics of tissue or bone.
CT and MRI provide this type of data.

artifact - An artificial or erroneous region of the data that is often created as a result of under-sampling.

attenuation - The reduction in the number of photons passing through an object.

axial - Planes perpendicular to the long axis of the body (spine).

congenital - Any state which has existed from birth.

coronal - Frontal planes passing through the long axis of the body.

corpus callosum - A section of the brain uniting the cerebral hemispheres.

edge-detection - This term is used to indicate the various methods for the isolation of gradients. An exemplar case is the zero-crossings in a convolution with the Laplacian of a Gaussian.

electromagnetic eddies - EM flow induced by a changing magnetic flux acting on a conductor, or by a moving conductor in a static magnetic field.

fiducial markers - These markers are externally placed objects which show up in the scans and can be placed in designated positions relative to the object being imaged. In this way they establish a common reference frame.

functional data - Data which represents metabolic or physiological quantities. SPECT and PET provide data of this type. The functionality that is revealed is dependent on the metabolic attributes of the radiopharmaceutical that is used.

Fourier transform - A transformation that maps data from a spatial representation into a frequency representation.

localization - The correlation of a region to its specific anatomic position.

lymphoma - A tumour of lymphoid tissue.

magnetic moment - The net magnetic dipole of an atom represented as a vector.

non-invasive - Not involving penetration of the skin by surgery or hypodermic needle.

perfusion - Blood flow through an organ or tissue.

phantom - A scanned object (used in simulations) for which the exact composition is known.

point spread function - If an infinitely precise point was scanned, the imaged point would be blurred or spread out to a finite area.
The function that describes this spreading is called the point spread function [39].

reconstruction - The process of transforming the raw scanning device information into the 3D voxel representation.

region of interest - The general terminology for any region of a medical data set which has been delineated for specific examination.

rms - see root-mean-square.

ROI - see region of interest.

root-mean-square - The root-mean-square (rms) value of a set xi, i = 0...n-1 is defined as: rms(x) = sqrt((1/n) * sum_{i=0..n-1} xi^2).

sagittal - Parallel to the plane which bisects a bilateral animal into right and left halves.

scintigraphy - A 2D imaging technique that utilizes an internal radiation source.

shimming - The adjustment of the magnetic field to achieve uniformity over the scanning volume; poor shimming leaves field inconsistencies.

structural data - see anatomical data.

toxoplasmosis - An infection caused by a parasitic microorganism (of the Toxoplasma genus) that affects the central nervous system.

ventricles - Fluid-filled cavities in the brain that are connected to the central canal of the spinal cord.

voxel - A voxel is a single unit of the sample space which represents a discrete 3D volume.
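To make the root-mean-square definition concrete, here is a minimal Python sketch; the function name rms and the use of a plain list of samples are illustrative only, as the thesis itself provides no code:

```python
import math

def rms(x):
    """Root-mean-square of the values x_0 ... x_{n-1}:
    the square root of the mean of the squared values."""
    n = len(x)
    return math.sqrt(sum(v * v for v in x) / n)

# For example, rms([3, 4]) = sqrt((9 + 16) / 2) = sqrt(12.5)
```

In registration work, rms is typically applied to a set of residual distances (e.g., point-to-surface errors) to summarize the quality of a fit as a single number.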
