Deformable Motion Correction and Spatial Image Analysis in Positron Emission Tomography

by

Ivan S. Klyuzhin

B.Sc./M.Sc., Ural State University, 2006

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate and Postdoctoral Studies (Physics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

December 2016

© Ivan S. Klyuzhin 2016

Abstract

Positron emission tomography (PET) is a molecular imaging modality that enables quantitative in-vivo assessment of the physiological function of tissues. Subject motion during imaging degrades the quantitative accuracy of the data. In small animal imaging, motion is minimized by the use of anesthesia, which interferes with the normal physiology of the brain. This can be circumvented by imaging awake rodents; however, in this case correction for non-cyclic motion with rigid and deformable components is required.

In the first part of the thesis, the problem of motion correction in PET imaging of unrestrained awake rodents is addressed. A novel method of iterative image reconstruction is developed that incorporates correction for non-cyclic deformable motion. Point clouds were used to represent the imaged objects in the image space, and motion was accounted for by using time-dependent point coordinates. The quantitative accuracy and noise characteristics of the proposed method were quantified and compared to traditional methods by reconstructing projection data from digital and physical phantoms. A digital phantom of a freely moving mouse was constructed, and the efficacy of motion correction was tested by reconstructing the simulated coincidence data from the phantom.

In the second part of the thesis, novel approaches to PET image analysis were explored. In brain PET, analysis based on tracer kinetic modeling (KM) may not always be possible, due to the complexity of the scanning protocols and the inability to find a suitable reference region. Therefore, the ability of KM-independent shape and texture metrics to convey useful information on the disease state was investigated, based on an ongoing Parkinson's disease study with radiotracers that probe the dopaminergic system. The pattern of the radiotracer distribution in the striatum was characterized by computing the metrics from multiple regions of interest defined using PET and MRI images. Regression analysis showed a significant correlation between the metrics and clinical disease measures (p<0.01). The effect of the region of interest definition and texture computation parameters on the correlation was investigated. The results demonstrate that there is clinically relevant information in the tracer distribution pattern that can be captured using shape and texture descriptors.

Preface

Chapters 3 and 4 represent original unpublished material. A paper describing the results of Chapter 3 is currently in review. I was responsible for the development of the presented algorithms and methods, their validation and testing, and the majority of the manuscript composition. V. Sossi was the supervisory author involved throughout the project.

Chapters 6, 7 and 8 use image data from an ongoing imaging study of Parkinson's disease at UBC's Centre for Brain Health and Pacific Parkinson's Research Centre. None of the text of the thesis is taken directly from previously published results of the study.

A version of Chapter 6 has been published [I. S. Klyuzhin, M. Gonzalez, E. Shahinfard, N. Vafai, and V. Sossi, "Exploring the use of shape and texture descriptors of positron emission tomography tracer distribution in imaging studies of neurodegenerative disease", J. Cereb. Blood Flow Metab., vol. 36, no. 6, pp. 1122–34, June 2016].
I was responsible for the selection of the investigated image metrics, the development of the analysis methodology, image pre-processing and segmentation, statistical analysis, and most of the manuscript composition. M. Gonzalez, E. Shahinfard, and N. Vafai were responsible for computing the parametric images, for MRI/PET image registration, and for critical manuscript review. Scanning procedures were performed by the staff members of the UBC PET imaging group. V. Sossi was the supervisory author, involved throughout the project in the concept formation and manuscript preparation.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Acknowledgments
Dedication

1 Introduction
  1.1 Fundamentals of PET
    1.1.1 PET Imaging Principles
    1.1.2 PET Isotopes, Radiotracers and Applications
    1.1.3 Static vs. Dynamic Imaging
  1.2 Physics and Technology of PET
    1.2.1 Positron Emission and Annihilation
    1.2.2 Photon Interaction in Matter
    1.2.3 Scintillation Crystals
    1.2.4 Scintillation Light Detectors
    1.2.5 Coincidence Detection
    1.2.6 2D and 3D Acquisition
    1.2.7 Coincidence Data Representation
  1.3 Image Quantification Factors
    1.3.1 Positron Range
    1.3.2 Photon Non-collinearity
    1.3.3 Photon Attenuation
    1.3.4 Scattered Events
    1.3.5 Random Coincidences
    1.3.6 Inter-crystal Scatter
    1.3.7 Detector Efficiency
    1.3.8 Parallax Effect
    1.3.9 Dead-time
    1.3.10 Radioactive Decay
    1.3.11 Motion
  1.4 PET Image Reconstruction
    1.4.1 Analytic Reconstruction
    1.4.2 Iterative Reconstruction
    1.4.3 Quantitative Corrections in Image Reconstruction
  1.5 Tracer Kinetic Modeling
    1.5.1 One-compartmental Model
    1.5.2 Multi-compartmental Models
    1.5.3 Reference Tissue Methods
    1.5.4 Logan Method of Parameter Estimation
    1.5.5 Parametric Images
  1.6 Contributions of This Thesis
2 PET Image Reconstruction with Motion Correction
  2.1 Overview of Motion Tracking and Compensation in PET
  2.2 Compensation for Head Motion in Brain PET Imaging
    2.2.1 Data-driven Rigid Motion Correction
    2.2.2 External Tracking of Rigid Motion
    2.2.3 Rigid Motion Correction Using Motion Data
  2.3 Deformable Respiratory and Cardiac Motion Correction in Torso Imaging
    2.3.1 Deformable Motion Tracking and Gating Techniques
    2.3.2 Deformable Motion Correction of Gated Coincidence Data
  2.4 Awake Animal Imaging Techniques
  2.5 Study Objectives
3 PET Image Reconstruction with Motion Correction using Unorganized Point Clouds
  3.1 Introduction
  3.2 Methods
    3.2.1 Image Reconstruction Using Unorganized Point Clouds
    3.2.2 Phantom Data
    3.2.3 Image Reconstruction from Sinogram Data
    3.2.4 List-mode 3D Image Reconstruction
    3.2.5 Measured Image Quality Metrics
  3.3 Results
    3.3.1 Characterization of Images Reconstructed from Noise-free Projections
    3.3.2 Image Quality Comparison between VBF, RecBF and RBF Reconstruction
    3.3.3 One-pass List-mode VBF Reconstruction with Deformation Correction
  3.4 Discussion
4 Development and Use of a Digital Mouse Phantom for Motion Correction Validation
  4.1 Introduction
  4.2 Materials and Methods
    4.2.1 Method Overview
    4.2.2 Optical Imaging System
    4.2.3 Live Mouse Imaging
    4.2.4 Digital Phantom Generation, Rigging and Animation
    4.2.5 Phantom Voxelization, Emission Simulation and Reconstruction
  4.3 Results
    4.3.1 Depth Camera Evaluation
    4.3.2 Live Mouse Imaging
    4.3.3 Analysis of Simulated Motion
    4.3.4 Voxelization and Reconstruction
  4.4 Discussion
    4.4.1 Rodent Motion Tracking and Phantom Construction
    4.4.2 Strategies for Practical Unrestrained Rodent Imaging
5 Spatial Image Analysis in Brain PET Imaging
  5.1 Brain PET Imaging in Parkinson's Disease Studies
  5.2 Previous Methods of Spatial Image Analysis
  5.3 Aims and Structure of the Study
  5.4 Explored Image Metrics
    5.4.1 Value Metrics
    5.4.2 Shape Metrics
    5.4.3 Moment Invariants
    5.4.4 Haralick Features
6 Analysis of Localized Tracer Distribution Using Shape Descriptors
  6.1 Introduction
  6.2 Methods
    6.2.1 Data Acquisition and Pre-processing
    6.2.2 Single-modality ROIs
    6.2.3 Mixed PET-MRI ROIs
    6.2.4 Image Metrics
    6.2.5 Metric Evaluation
  6.3 Results
    6.3.1 Metric Values and Variability
    6.3.2 Correlation Between Image and Clinical Metrics
    6.3.3 Metric Combinations
    6.3.4 Metric Correlation in the RAC-MRI ROI Space
  6.4 Conclusions
    6.4.1 Relative Metric Performance
    6.4.2 The Use of Mixed ROIs in Image Analysis
7 Analysis of Regions with Specific Tracer Uptake Using Texture Descriptors
  7.1 Introduction
  7.2 Methods
    7.2.1 Clinical and Image Data
    7.2.2 Evaluated Metrics
    7.2.3 Investigated Brain Structures and ROIs
    7.2.4 GLCM Computation
    7.2.5 Methodology of Correlation and Discrimination Analysis
  7.3 Results
    7.3.1 Correlation Analysis Between HF and DD
    7.3.2 Analysis of Discrimination Between Control and PD Subjects
    7.3.3 Effect of GLCM Direction on Measured Correlation Values
    7.3.4 Effect of GLCM Distance on Measured Correlation Values
  7.4 Discussion
8 Analysis of the Metric Behavior with Disease Progression
  8.1 Introduction
  8.2 Development of a Model for Tracer Binding Loss
    8.2.1 Measurement of AR Profiles in the Putamen
    8.2.2 Analytical Model Fitting
    8.2.3 Procedure to Generate Synthetic AR Images
    8.2.4 ROI Definition in Synthetic AR Images
  8.3 Model Validation
    8.3.1 Comparison of Measured and Simulated Image Histograms
    8.3.2 Comparison of Measured and Simulated GLCMs
  8.4 Comparison of Measured and Model-predicted Metric Values
  8.5 Model-based Analysis of the Metric Behavior with Disease Progression
  8.6 Discussion
    8.6.1 Utility of the Proposed Model
    8.6.2 Information Captured by Texture Metrics
    8.6.3 Importance of the ROI Definition
    8.6.4 Data Variability and Noise
9 Conclusions and Future Work
Bibliography
Appendices
A Mathematical Definition of Haralick Features

List of Tables

1.1 Common positron-emitting isotopes used in PET. Data are taken from [1].
1.2 Examples of tracers used in various applications of PET imaging. Additional discussion can be found in Section 5.1.
1.3 Most common scintillator materials used in PET and their physical characteristics. Energy resolution is measured at 662 keV.
3.1 Image metrics obtained from the reconstructed images of the physical NEMA phantom after 40 MLEM iterations.
6.1 Maximum values of R² and ρ (given in parentheses) between image metrics and clinical metrics obtained in the DTBZ-MRI ROI space (using the less affected side of the putamen). All subjects were included in the analysis. The R²(α) values were obtained by fitting the image metrics with the two-term linear functions of UPDRS and DD. A two-term exponential function was used with BPND. Absent α_max indicates that no trend in the correlation strength was observed. ** p<0.01; * p<0.05; no glyph for p>0.05. † Value obtained with a three-term exponential fit (BPND only).
6.2 Maximum values of R² and ρ (given in parentheses) between image metrics and clinical metrics obtained in the RAC-MRI ROI space (using the less affected side of the putamen). All subjects were included in the analysis. The R²(α) values were obtained by fitting the image metrics with the two-term linear functions of UPDRS and DD. A two-term exponential function was used with BPND. Absent α_max indicates that no trend in the correlation strength was observed. ** p<0.01; * p<0.05; no glyph for p>0.05.
7.1 SI measured between the control and PD subject groups.
7.2 Values of ρ measured using different GLCM directions in PUT.
7.3 Values of ρ measured using different GLCM directions in SBB.

List of Figures

1.1 Diagram of the imaging process in PET, from radioisotope production to image analysis. The tracer 18F-fluorodeoxyglucose is used as an example.
1.2 Diagrams showing the difference between static and dynamic PET imaging. In static imaging, coincidence data from the entire scan is used to reconstruct a single image. In dynamic imaging, coincidence data are split into a sequence of frames, and each frame is reconstructed individually.
1.3 Decay-corrected DTBZ images of the brain obtained from a single dynamic PET scan. Note that despite decay correction, data become progressively noisier with time. Data acquired in the early frames are noisy because the frame durations are short and the tracer had not yet accumulated in the tissues. Frame number is shown in the left corner, frame duration is shown in the right corner.
1.4 A schematic illustration of the detector block design. APDs or SiPMs can be used instead of the PMTs.
1.5 Diagram of the process of coincidence detection in PET.
1.6 Diagrams of PET cameras set up to work in 2D acquisition mode and 3D acquisition mode. In 2D mode, metal septa block oblique gamma photons. In 3D mode, septa are removed.
1.7 A. Parametrization of an LOR in terms of radial offset r, azimuthal angle φ, copolar angle θ, and axial position z. B. Two-dimensional sinograms consist of a series of radial projections taken at different angles φ. The value of g(r, φ) represents integrated intensity (Eq. 1.18).
1.8 A. Attenuation of gamma photons that travel along line segments [a, b] and [a, c] is determined by the distribution of the linear attenuation coefficient µ(x) in the medium. B. If one or both of the coincidence photons become scattered, a scattered coincidence event may be detected. C. Scattering or absorption of one of the coincidence photons results in the detection of a single event. D. A random coincidence may be recorded if two single events are detected within the coincidence window.
1.9 Gamma photons that interact in the edge crystals of a detector block have a greater chance of escaping the block. Bars above the crystals demonstrate a possible distribution of gamma counts detected by the respective crystals, with a uniform impinging gamma beam.
1.10 Schematic illustration of the parallax effect. On the left, the impinging gamma photon is likely to be detected in a single crystal. On the right, the photon may be detected in any of the three crystals that are shown in blue color.
1.11 Frequency responses of several filters commonly used in analytic image reconstruction. High frequencies are suppressed in the Butterworth, cosine and Hamming filters.
1.12 A. Single transaxial planes of µ-maps of the small animal NEMA phantom shown on the left. The µ-maps were reconstructed using three different methods, indicated in the bottom right corner. B. Images of the small animal NEMA phantom reconstructed with and without attenuation correction.
1.13 A. Block diagram of a one-compartmental model (blood is not considered as a compartment). Tracer is delivered from the site of injection by the blood flow (left block). From the blood, tracer enters the tissue compartment by diffusion or active transport (right block). Arrows indicate the possible directions of tracer flux. B. Block diagram of a three-compartmental model, with free, specific tracer binding, and non-specific tracer binding compartments.
1.14 A. TACs of 11C-DTBZ obtained from a single voxel and from an ROI (size 7×7×7 voxels) defined around the same voxel. The graphs on the left were obtained from the target region (striatum), and the graphs on the right were obtained from the reference region (occipital cortex). B. Examples of parametric BPND and k2 images of a Parkinson's disease subject computed using an RTM (occipital cortex).
1.15 Examples of activity and parametric BPND images of 11C-DTBZ; a – image averaged over 5 frames with the combined duration of 30 minutes; b – parametric image showing speckle noise; c – parametric image computed using a greater degree of regularization when fitting the TACs; d – parametric image computed from temporally-smoothed TACs.
3.1 A. Imaged object and background are represented in the image space by two unorganized point clouds; Ω denotes the space occupied by the object. B. The image function is defined using VBF inside Ω, and using RecBF outside Ω. The SM coefficients a_in and b_ik are equal to the intersection lengths of the LOR i with the basis functions. C. Schematic representation of the NEMA-NU4 phantom; images corresponding to sections 1, 2 and 3 are used to generate noise-free projections. D. Digital phantom of a deformable (bending) bar used for the validation of deformable motion correction, shown in the reference configuration (0 degrees) and two deformed configurations.
3.2 Main steps of the algorithm to compute the intersection length between the LOR and implicitly-defined Voronoi cells.
3.3 A. Point clouds used for VBF image reconstruction from sinogram data, and images that visualize the corresponding map of the α values (a logarithm is used to linearize the scale). The graph shows the distribution of the α values in the cloud with sequential LR iterations. B. Point cloud (with defined boundary) used for the reconstruction of the simulated list-mode data with motion correction; only 12.5% of the actual number of points is shown. Point color intensity is proportional to the local compression. The graph shows the distribution of the α values in the phantom at different time frames (deformations).
3.4 A. Images of section 2 of the digital NEMA phantom reconstructed using VBF (40 MLEM iterations) and voxelized using grids of different sizes. The insets contain zoomed-in images of the edge of the phantom (note the different color scale). The dashed line indicates where image profiles were measured. B. Profiles through the reconstructed images; dashed lines plot the standard deviation of the profile values.
3.5 A. Joint histogram of the reconstructed point activity values λ_j (after 40 MLEM iterations) and the corresponding expansion coefficients α_j. The data were taken from the points at the center of the digital NEMA phantom (section 2). B. Standard deviation of the reconstructed point activity values (percent of the mean) and post-voxelization pixel values (taken from the same ROI), plotted against the MLEM iteration number.
3.6 A. A single plane from section 2 of the physical NEMA phantom reconstructed using RecBF, RBF and VBF (voxelized using a 128×128 grid) with 40 MLEM iterations. Images on the right visualize the point clouds that were used in VBF reconstruction. Color indicates the local value of α. B. Difference between the RecBF and VBF images, with the profile location indicated by the dashed line. C. Contrast recovery and noise in the VBF, RBF and RecBF images plotted as functions of the MLEM iteration number.
3.7 A. Reconstructed image of the static bar phantom deformed by 180 degrees (the average of 8 axial planes is shown), 6×10^5 events per list-mode subset, and the point cloud used for the reconstruction (12.5% of the points are shown). B. Reconstructed images of the phantom voxelized with compensation for compression/expansion in the deformed and reference configurations. C. Single transaxial planes (original and smoothed) in the reconstructed images, with profiles indicated by the dashed lines.
3.8 A. Image of the bending bar phantom reconstructed without motion correction, and iterations of the image reconstructed with motion correction (the average of 8 axial planes is shown). Each list-mode subset contained 900,000 events. B. The measured contrast recovery coefficients and standard deviation of voxel values in the uniform region plotted against the list-mode OSEM iteration number (with motion correction). The contrast recovery plots represent the average contrast recovery of 16 line sources (section 1 of the bar phantom).
4.1 Main steps of the method to construct the mouse phantom.
4.2 A. The setup for optical imaging and the 3D-printed mouse phantom. The top chamber cover (not shown) had a thickness of 1.5 mm. B. Activity (FDG), X-ray CT and label image components of the Digimouse atlas. C. Visualization of the point cloud and surface mesh that correspond to the reference (undeformed) pose of the phantom.
4.3 A. The point cloud data flow. The (x, y, z) coordinates of the point cloud in reference configuration were imported to Blender, where an animation rig was set up. After the animation procedure, the new time-dependent coordinates were exported from Blender as 1500 discrete coordinate sets, one per frame. B. Diagram of the employed animation rig, in hierarchical order. The points that were used to measure the motion parameters (including the angle θ between the head and the trunk) are shown in the right panel.
4.4 A. Depth images of the 3D-printed mouse phantom acquired using the TOF and SL cameras with the chamber cover removed. The profiles show the depth values along the dashed lines. B. Examples of color images and recovered 3D surfaces from the live mouse imaging experiment. Points on the head, neck and trunk indicated by the markers were manually identified (placed) in the acquired frames.
4.5 A. Motion parameters of the observed motion. B. Motion parameters of the simulated motion.
4.6 A. Renderings of the phantom surface inside the virtual chamber, and maximum intensity projections of the corresponding activity and attenuation images for 3 representative motion frames. B. Map of the expansion coefficient α (single z-plane), and plots of θ and local values of α against the frame number.
4.7 A. Ground truth image of the phantom in reference configuration, and images of the stationary phantom reconstructed using blob and Voronoi basis functions. Top and bottom images represent single planes in the x–y and x–z dimensions, respectively. The location of the x–z plane is indicated by the dashed line. B. Image of the moving phantom reconstructed without (single x–y plane is shown) and with motion correction.
5.1 Single transverse slices and 3D visualization of the MRI and BPND images of a healthy control subject (left column), a PD subject in the year of diagnosis (middle column), and a PD subject 10 years after diagnosis (right column). The 3D visualizations show the BPND distributions of DTBZ (red colormap) and RAC (yellow colormap) on the left side of the striatum, with MRI-defined outlines of the striatal shape.
5.2 Diagram that illustrates the computation of the GLCM for a striatal ROI defined over the DTBZ image of a PD subject. g is the stepping vector that connects voxels with gray values 3 and 4. AP – anteroposterior direction, ML – mediolateral direction.
6.1 Flowchart of the algorithm employed to generate mixed PET-MRI ROIs. The main processing steps are shown using the transaxial slices through the representative PET/MRI volume images.
6.2 The surface renderings of ROI_MIX(α) for one control subject and one PD subject (UPDRS 9.0, DD 6, moderate severity) in the DTBZ-MRI ROI space.
6.3 Contours of ROI_MIX(α) overlaid on the transaxial slices of DTBZ BPND images, for two representative PD subjects and three values of α. Arrows point out areas of misalignment between ROI_MIX(α = 1) and regions of high activity concentration.
6.4 Graphs of VOL, BPND, CMP, and J1 in the DTBZ-MRI ROI space for all subjects, evaluated using mixed ROIs of the putamen (less affected side). Higher DD generally corresponded to more significant metric variability with α. The three subjects with DD of zero correspond to control subjects.
6.5 A. Mean bootstrapped values of R² with standard deviation (error bars) and 95% confidence intervals (filled regions), plotted against α for BPND (left), RVD (middle) and CMP (right). The correlation with DD (blue) and UPDRS (green) was evaluated using putamen mixed ROIs. B. Representative scatter plots of log(BPND), RVD and CMP against DD and UPDRS. Non-bootstrapped values of R² are shown for the cases where the control subjects were included in (control+PD) and excluded from (PD) the analysis.
6.6 Mean bootstrapped values of R² with standard deviation (error bars) and 95% confidence intervals (filled regions), plotted against α for J1 (top) and J2 (bottom). The representative scatter plots of metric values against DD (blue) and UPDRS (green) are shown for MRI-based putamen ROIs. Non-bootstrapped values of R² are shown for the cases where the control subjects were included in (control+PD) and excluded from (PD) the analysis.
6.7 A. The shape of ROI_MIX(α) for one of the PD subjects (UPDRS 9.0, DD 6, moderate severity) in the RAC-MRI ROI space. B. Mean bootstrapped values of R² plotted against α in the RAC-MRI ROI space, with standard deviation (error bars) and 95% confidence intervals (filled regions). Left – DTBZ BPND computed in putamen; Middle – DTBZ J1 computed in putamen; Right – DTBZ J1 computed in caudate.
6.8 3D renderings of single-modality PET and MRI ROIs for subjects CL01 (left) and CL02 (right). The region of ROI mismatch due to misregistration is indicated by the arrow.
7.1 The MRI-based ROIs of the caudate (CAU), putamen (PUT), and striatum (STR), and the corresponding BB ROIs (CBB, PBB, SBB) for a control subject.
7.2 Examples of the PET-defined and MRI-defined directions used in the GLCM computation for one of the PD subjects. The color of the scatter points represents the relative voxel value in the PUT and CAU ROIs.
7.3 Image metrics computed in PUT and PBB plotted against DD. Solid horizontal lines represent control subjects, and dots represent PD subjects. The GLCMs were computed in the anteroposterior direction using a GLCM distance equal to 3 voxels.
7.4 The absolute values of ρ measured between the image metrics and DD.
7.5 Box plots of the ρ value distributions obtained by rotating PUT ROIs by a random angle. Each box plot represents 50 independent data realizations.
7.6 Visualization of average GLCMs computed from the DTBZ images of PD subjects. The top row corresponds to the MRI-based ROIs, the bottom row corresponds to the BB ROIs.
7.7 The PD subject-averaged GLCMs computed in PUT along different directions. AP – anteroposterior, ML – mediolateral, IS – inferosuperior. The used GLCM distance was 3 voxels.
7.8 Plots of ρ against GLCM distance for HF metrics computed in PUT and PBB. Direction-averaged GLCMs were used.
7.9 Direction-averaged GLCMs computed in PUT using GLCM distances equal to 1 and 5 voxels. The shown GLCMs were computed by averaging the GLCMs of all PD subjects.
8.1 Examples of AR profiles measured in the putamens of control and PD subjects. The profiles are sorted according to DD. Zero corresponds to the anterior side of the putamen, and the background AR is equal to 1.
8.2 A. Scatter plot of the AR profile data combined from 37 PD subjects. B. Surface given by Eq. 8.2 with respect to the data. C. Scatter plot and marginalized histogram of the residuals, with an overlaid normal distribution of equivalent variance. D. Plots of ARm for different values of DDs; the middle row approximately corresponds to the clinical DD.
8.3 Visualization of the average putamen ROI surface from different directions.
8.4 Temporal sequence of synthetic images that model the dopaminergic function loss, as revealed by imaging with DTBZ, in the less affected putamen. The images were generated using the model given by Eq. 8.2. Top – images without added noise, middle – images with simulated Poisson noise, bottom – noisy images smoothed using a Gaussian filter.
8.5 A. Box plots of the AR values for all control and PD subjects, obtained with PUT and PBB ROIs. The box plots for PD subjects are arranged according to the DD. B. Histograms of the ARs for a representative set of control and PD subjects. The histograms for PD subjects are arranged according to the DD.
8.6 A. Histograms of ARm obtained from the synthetic images using PUTm and PBBm ROIs. B. GLCMs obtained from the synthetic images using PUT and PBB ROIs and used to compute the HF metrics.
8.7 Images of the average GLCMs computed from the images of control subjects, PD subjects with DD 0–6 years, and PD subjects with DD 7–13 years. The top row corresponds to PUT, the bottom row to PBB.
8.8 The simulated plots of the image metrics versus DDm in PUTm (blue graphs, x axis is DDm), in comparison to the corresponding experimental scatter plots in PUT (x axis is DD). Horizontal lines represent control subjects. Dashed vertical lines mark the range of DDm that corresponds to the clinical DD. HF metrics that had graphs similar to the ones shown are omitted for clarity.
8.9 The simulated plots of the image metrics versus DDm in PBBm (blue graphs, x axis is DDm), in comparison to the corresponding experimental scatter plots in PBB (x axis is DD). Horizontal lines represent control subjects. Dashed vertical lines mark the range of DDm that corresponds to the clinical DD. HF metrics that had graphs similar to the ones shown are omitted for clarity.
8.10 Simulated graphs of J1 and J2 that were computed from the ARm images normalized using Eq. 8.6, and the corresponding measured scatter plots. A. PUT and PUTm ROIs. B. PBB and PBBm ROIs. The unimodal trend suggested by the data is indicated by the dashed line.
List of Abbreviations

3DRP – 3D reprojection
APD – avalanche photodiode
AR – activity ratio
BB – bounding box
BP – binding potential
CT – computed tomography
DD – disease duration
DV – distribution volume
DVR – distribution volume ratio
ECG – electrocardiography
FBP – filtered back-projection
FORE – Fourier rebinning
FOV – field of view
FWHM – full width at half-maximum
GLCM – gray level co-occurrence matrix
HF – Haralick features
HRRT – high-resolution research tomograph
KM – kinetic modeling
LOR – line of response
LR – Lloyd relaxation
MI – moment invariant
ML – maximum likelihood
MLEM – maximum likelihood expectation maximization
MLEM-OP – ordinary Poisson maximum likelihood expectation maximization
MR – magnetic resonance
MRI – magnetic resonance imaging
NECR – noise-equivalent count rate
NEMA – National Electrical Manufacturers Association
OSEM – ordered subset expectation maximization
PD – Parkinson's disease
PET – positron emission tomography
PMT – photomultiplier tube
PSF – point-spread function
RBF – radial basis functions
RecBF – rectangular basis functions
RMS – root mean square
ROI – region of interest
RTM – reference tissue model
SI – separability index
SiPM – silicon photomultiplier
SL – structured light
SM – system matrix
SNR – signal-to-noise ratio
SPECT – single-photon emission computed tomography
SSRB – single-slice rebinning
SUV – standard uptake value
TAC – time-activity curve
TOF – time-of-flight
UPDRS – Unified Parkinson's Disease Rating Scale
VBF – Voronoi basis functions

Analyzed brain regions:
PUT – putamen
PBB – putamen bounding box
CAU – caudate
CBB – caudate bounding box
STR – striatum
SBB – striatum bounding box

Image metrics:
BPND – non-displaceable binding potential
STD – standard deviation of activity
IOD – index of dispersion of activity values
VOL – region volume
SAR – region surface area
RVD – relative volume difference
VOE – volumetric overlap error
RCM – relative centre of mass distance
ECM – region eccentricity
CMP – region compactness
EXT – region extent
MBR – mean breadth

Haralick features:
ACRL – autocorrelation
CTR – contrast
CRL – correlation
CLP – cluster prominence
CLS – cluster shade
DIS – dissimilarity
ENR – energy
ENT – entropy
HOM – homogeneity
INF1 – information measure 1
INF2 – information measure 2
NHOM – normalized homogeneity
MPR – maximum probability
SAVG – sum average
SENT – sum entropy

Acknowledgments

First and foremost, I would like to acknowledge my supervisor, Professor Vesna Sossi, for her guidance and continual support over the years. Without her, this work and thesis would be impossible, literally and figuratively. I will benefit from what she has taught me for the rest of my life. I would also like to acknowledge the members of my committee, Professors Piotr Kozlowski, Robert Rohling, and Joerg Rottler, for their valuable time and the critical insights that they provided.

I would like to thank the UBC PET imaging group for providing valuable help at different stages of research: Nasim Vafai, Elham Shahinfard, Katie Dinelle, Siobhan McCormick, Carolyn English, Nicole Heffernan, Jess McKenzie and Stephan Blinder. I also thank Dr. Arman Rahmim, who in our many conversations provided very thoughtful and valuable insights on the methods and results of data analysis.

My undergraduate supervisor, Professor Felix Blyakhman, played a pivotal role in my education and life prior to the doctoral studies. I have infinite gratitude for his patronage, and for all the help and support that I received from him over the years. It was Professor Blyakhman who made it possible for me to pursue academic research and doctoral studies abroad. His influence on my career is truly profound. I am also very thankful to my co-supervisor at the University of Washington, Professor Gerald Pollack, who played an essential role in my transition to graduate studies. Jerry inspired me to think big, set ambitious goals, and tackle difficult issues.

I would like to acknowledge my friends, including fellow graduate students, who contributed significantly to keeping me sane and focused over the course of this work: Anna Schildt, Jessie Fu, Greg Stortz, Andrew Robertson, Marjorie Gonzalez, Nima Hazar, Evgeniya and Guerman Rolzing, Yuliya Talmazan, Visnja Milidragovic, Margarita and Scott Lewis, Dave Danchilla, Michael Goertzen, Seth Goetzke, and Michael Ortiz. I would like to particularly thank Dr. Maryam Sadeghi for her words of encouragement.

I would like to express my sincere gratitude to Creighton Watley for providing much needed help at a critical time.
I amalso very thankful to my co-supervisor at the University of Washington, ProfessorGerald Pollack, who played an essential role in my transition to graduate studies.Jerry inspired me to think big, set ambitious goals, and tackle dicult issues.I would like to acknowledge my friends, including fellow graduate students, whocontributed significantly to keeping me sane and focused over the course of this work:Anna Schildt, Jessie Fu, Greg Stortz, Andrew Robertson, Marjorie Gonzalez, NimaHazar, Evgeniya and Guerman Rolzing, Yuliya Talmazan, Visnja Milidragovic, Mar-garita and Scott Lewis, Dave Danchilla, Michael Goertzen, Seth Goetzke, andMichael Ortiz. I would like to particularly thank Dr. Maryam Sadeghi for herwords of encouragement.I would like to express my sincere gratitude to Creighton Watley for providingmuch needed help at a critical time, during my candidacy exam as well as afterwardsxxvAcknowledgmentswhen moving forward was particularly dicult, Creighton was always there, readyto o↵er his support. I am truly honored and grateful to call him a friend, and hiscontribution to the completion of my graduate work will never be forgotten.I thank my dear friends Abhinai and Kendra Srivastava for being generous, kind,and patient hosts over many years. They are the most amazing folk who helped tokeep my feet on the road of progress, and without them there is no knowing whereI might have been swept o↵ to. I would also like to thank Abhinai Srivastava andMukul Dhankhar for generously providing depth-sensing cameras that were used inthis work.Finally, I wish to express my endless gratitude to my sister Marina Davydenko,who was there for the rest of the family while I was away, and who’s propensity andtalent for writing continually inspired me to keep working on this thesis.xxviDedicationThis thesis is dedicated to my father, Sergey Y. Klyuzhin, and to my late mother,Irina Y. Klyuzhina.xxviiChapter 1Introduction1.1 Fundamentals of PET1.1.1 PET Imaging PrinciplesPositron emission tomography (PET) is a molecular imaging technique that enablesin-vivo assessments of biological functions in tissues, such as blood flow, glucosemetabolism, and receptor density. PET is based on the use of biochemically-activemolecular compounds labeled with radioactive isotopes — called radiotracers (ortracers) — that are synthesized in a laboratory and introduced to the organism underinvestigation. Radiation emitted by the isotopes and detected by PET scanners isused to reconstruct the images of tracer distribution in the organism. Applicationsof PET include diagnosis and staging of cancer, assessment of cardiac diseases,investigation of various aspects of neurological disorders, and monitoring therapeuticresponses. Thus, PET has proven to be a valuable tool in medical research as wellas clinical practice.The principle of PET is illustrated in Fig. 1.1. During tracer production, apositron-emitting unstable isotope is produced most often with a cyclotron and at-tached to an organic molecule with known metabolic pathway or molecular target.The tracer is delivered into the organism, usually be means of intravenous injec-tion. Following injection, the tracer is distributed by the bloodstream. With time,the tracer molecules accumulate in organs and tissues of interest according to theirbiochemical properties. The rate and amount of tracer uptake by the tissues are de-termined by the physiological function under investigation. In tissues, radionuclidein the radiotracer decays to a stable state according to the equationp+ ! 
\[ p^{+} \rightarrow n + \beta^{+} + \nu \tag{1.1} \]

where \( p^{+} \) is a proton, \( n \) is a neutron, \( \nu \) is an electron neutrino, and \( \beta^{+} \) is a positron that enables the imaging process that follows. After traveling a short distance (1–10 mm) in the tissues, the emitted positrons annihilate with electrons in the surrounding medium. Each annihilation reaction produces two gamma particles with energy 511 keV that propagate from the point of annihilation through the tissues in nearly opposite directions. The gamma particles are detected by a PET scanner that incorporates detector crystals arranged in rings and coincidence electronics. Whenever two opposite detectors register two gamma particles at nearly the same time, the event is recorded as a coincidence — which indicates that the gamma particles could originate from the same electron-positron annihilation. From the coincidence data obtained during the scan, an image of the radiotracer distribution in the body can be reconstructed. The physiological parameters of interest are then measured in the process of image analysis.

Tracer compounds prepared for injection contain at least three types of molecules: radioactive labeled molecules, unlabeled molecules (without an attached radionuclide), and decayed molecules. The substitution of a non-radioactive atom in a compound with a radioactive isotope may lead to alterations in the chemical properties. In quantitative PET imaging studies, it is generally required that labeled and unlabeled molecules, and preferably decayed molecules, follow the same metabolic path in tissues. Otherwise, accurate measurements of tracer concentration in tissues cannot be obtained. Since the decayed tracer molecules cannot be detected, their metabolic pathway is usually determined via other methods (e.g. tissue assays) during the validation stage of the tracer development cycle. If labeled and unlabeled tracer molecules follow the same chain of reactions in tissues, the total tracer concentration \( T(t) \) can be obtained using the equation:

\[ T(t) = \frac{1}{\eta}\, A(t)\, e^{\lambda t} \tag{1.2} \]

where \( \eta \) is the fraction of labeled molecules in the injected compound at \( t = 0 \), \( A(t) \) is the measured activity concentration at time \( t \), and \( \lambda \) is the decay rate of the isotope.

A fundamental requirement in PET is that the total amount of injected tracer (labeled and unlabeled) should not perturb the system under investigation. This is known as the tracer principle, or tracer assumption — only trace amounts of the imaging agent are administered (hence the name); the act of measurement should not affect the system that is being measured.
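As a small illustration of Eq. 1.2, the sketch below decay-corrects a measured activity concentration and scales it by the labeled fraction to estimate the total tracer concentration. This is a minimal sketch, not code from the thesis; the function name and the example values (an 18F tracer measured 30 minutes post-injection, with 10% of molecules labeled at t = 0) are illustrative assumptions, with the 18F half-life taken from Table 1.1.

```python
import math

def total_tracer_concentration(activity_conc, t_min, half_life_min, eta):
    """Eq. 1.2: T(t) = (1/eta) * A(t) * exp(lambda * t).

    activity_conc  -- measured activity concentration A(t), e.g. in kBq/mL
    t_min          -- time since injection (minutes)
    half_life_min  -- isotope half-life (minutes), e.g. 109.7 for 18F
    eta            -- fraction of labeled molecules in the compound at t = 0
    """
    decay_rate = math.log(2.0) / half_life_min  # lambda, in 1/min
    return activity_conc * math.exp(decay_rate * t_min) / eta

# Illustrative numbers: A(30 min) = 5.0 kBq/mL for an 18F-labeled tracer
# with eta = 0.1 (10% of injected molecules labeled at t = 0).
print(total_tracer_concentration(5.0, 30.0, 109.7, 0.1))  # ~60.4
```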
Figure 1.1: Diagram of the imaging process in PET, from radioisotope production to image analysis. The tracer 18F-fluorodeoxyglucose is used as an example.

1.1.2 PET Isotopes, Radiotracers and Applications

A variety of positron-emitting isotopes can be synthesized for use in PET imaging. The choice of isotope for a particular tracer is made according to the following criteria:
1. the intended chemical structure and composition of the labeled tracer molecule;
2. the toxicity of the isotope's decay products;
3. the length of the decay chain, i.e. whether the isotope decays to a stable state after positron emission;
4. the isotope half-life and the emitted positron energy.

Production of isotopes for PET takes place prior to the synthesis of tracer molecules, since the nuclear reactions necessary to produce the isotopes destroy molecular compounds. The isotopes are typically produced in cyclotrons by bombarding stable nuclei with protons. For example, synthesis of 18F is usually done via a (p,n) reaction caused by proton bombardment of 18O-enriched water. The process produces free 18F ions dissolved in water. Due to the relatively short half-lives of the isotopes used in PET, tracers with attached isotopes are synthesized in dedicated chemistry stations.

Table 1.1: Common positron-emitting isotopes used in PET. Data are taken from [1].

| Isotope | Half-life (min) | Max β+ energy (MeV) | β+ range in water (mm) |
|---------|-----------------|---------------------|------------------------|
| 18F     | 109.7           | 0.635               | 2.4                    |
| 11C     | 20.39           | 0.97                | 4.1                    |
| 13N     | 9.97            | 1.19                | 5.4                    |
| 15O     | 2.04            | 1.72                | 8.2                    |

There exist several positron emitters that a) are isotopes of elements found in naturally occurring organic molecules, and b) have decay properties suitable for imaging. These isotopes, listed in Table 1.1, have been particularly useful in PET. Among them, 18F and 11C are used most widely. They undergo the following decay reactions:

\[ {}^{18}\mathrm{F} \rightarrow {}^{18}\mathrm{O} + \beta^{+} + \nu \tag{1.3} \]

\[ {}^{11}\mathrm{C} \rightarrow {}^{11}\mathrm{B} + \beta^{+} + \nu \tag{1.4} \]

Isotopes used in PET (including those listed in Table 1.1) have different half-lives and emit positrons with a continuous spectrum of energies. The isotope half-life determines the procedural logistics and places limits on possible clinical applications. The half-lives of the isotopes listed in Table 1.1 are relatively short compared to other radionuclides. This dictates that these isotopes must be produced in a cyclotron located near the imaging site, and that the synthesis and quality control of tracers must be performed rapidly. These considerations substantially increase the operational costs of PET imaging. Additionally, with short-lived isotopes imaging must be performed shortly after the tracer injection, in order to allow the tracer to distribute in tissues before a significant reduction of activity occurs. For example, it is impractical to use 15O to label molecules that require tens of minutes to equilibrate in the tissues. A positive aspect of using isotopes with short half-lives is the reduced exposure to radiation.
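To make these logistics concrete: the decay rate relates to the half-life as \( \lambda = \ln 2 / T_{1/2} \), so the fraction of activity remaining after a time \( t \) is

\[ \frac{A(t)}{A(0)} = e^{-\lambda t} = 2^{-t/T_{1/2}}. \]

A worked comparison using the half-lives in Table 1.1 (the one-hour delay is an illustrative choice, not a figure from the text): one hour after production, a 11C label retains \( 2^{-60/20.39} \approx 13\% \) of its activity, while an 18F label retains \( 2^{-60/109.7} \approx 68\% \). This is why 11C tracers demand an on-site cyclotron and rapid synthesis, whereas 18F tracers tolerate longer synthesis and transport times.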
Compared to other imaging modalities, PET has relatively high sensitivity and specificity. The sensitivity comes from the fact that very low tracer concentrations can be detected by a PET camera. Frequently quoted detection limits of radiolabeled molecules are on the order of 10^-11 to 10^-12 mol per liter. This makes it possible to use PET to investigate endogenous molecular compounds that have natural concentrations in the picomolar range, for example neurotransmitters and their corresponding receptors. In the development of new pharmaceuticals, the high sensitivity of PET enables the use of relatively low molecular concentrations of labeled drug molecules, minimizing possible (but yet unknown) side effects that the drug may cause.

The high specificity is due to the fact that radiotracers have well-defined molecular and metabolic targets. Higher-order molecular compounds in living organisms typically have highly specific functions. By attaching radioactive isotopes to synthetic functional equivalents (or antagonists) of naturally occurring endogenous compounds, tracers with inherited high specificity can be produced. This way, a wide range of biochemical probes can be produced that characterize specific tissue functions in normal and pathologic states.

Table 1.2 provides a very brief list of commonly used radiotracers, along with their targets and biomedical applications. Tracers probing different metabolic pathways and receptors are used across various disciplines (oncology, cardiology, neuroscience).

Table 1.2: Examples of tracers used in various applications of PET imaging. Additional discussion can be found in Section 5.1.

| Radiotracer | Target | Application |
|-------------|--------|-------------|
| [11C]-RAC | D2/D3 receptors | Dopamine receptor imaging, extracellular dopamine assessment |
| [11C]-DTBZ | VMAT2 membrane protein | Dopaminergic terminal imaging, dopamine production |
| [11C]-PiB | Amyloid beta peptide | Imaging of beta-amyloid plaques |
| [11C]-DASB | Serotonin transporter protein | Imaging of the serotonergic system |
| [11C]-PMP | Acetylcholinesterase (AChE) | Assessment of AChE activity |
| [11C]-MP | Dopamine transporter protein | Assessment of dopamine reuptake in synapses |
| [18F]-FDG | Glycolytic pathway | Assessment of glucose metabolism in tissues |
| [18F]-FDOPA | Dopaminergic pathway | Assessment of dopamine synthesis and storage |

We shall consider [18F]fluorodeoxyglucose (FDG) in detail, because it is the most commonly used tracer in clinical PET imaging; its prominence is primarily due to its applications in oncology. It is well known that tumor cells have substantially higher glucose metabolism than healthy cells. Therefore, a tracer with a metabolic path similar to glucose, such as FDG, will have higher uptake in tumors, allowing them to be differentiated from healthy tissues. In patients suffering from Alzheimer's disease, imaging of the brain with FDG can be used to assess the degree of neuronal functional atrophy.

Although FDG is used to measure glucose metabolism, its metabolic fate is different from that of glucose. In mammalian cells, glucose is metabolized to pyruvate, which is further metabolized in the tricarboxylic cycle into water and carbon dioxide. Similarly to glucose, FDG is phosphorylated into FDG-6-P in the first metabolic step (this enables the use of FDG as an imaging agent). However, the reactions that follow (de-phosphorylation or further metabolism) are different from the glycolytic cycle, and occur on a much slower time scale. Effectively, FDG-6-P becomes metabolically trapped in the intracellular space for several hours. Thus, a PET image of [18F]FDG actually reflects the distribution of FDG-6-P rather than that of glucose. Nevertheless, [18F]FDG is commonly used as a marker of glucose metabolism: a scale factor, called the lump constant, can be applied to convert the rate of [18F]FDG metabolism to that of glucose. In mammals, the lump constant is approximately equal to 0.46 [2].
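The direction of this conversion is worth spelling out. As the lumped constant is commonly defined (the ratio of FDG uptake to glucose uptake under the same conditions), the glucose metabolic rate is obtained by dividing the measured FDG rate by it:

\[ MR_{\mathrm{glu}} = \frac{MR_{\mathrm{FDG}}}{LC}, \qquad LC \approx 0.46. \]

For example, a measured FDG metabolic rate of 0.10 µmol/min/g (an invented illustrative value) would correspond to a glucose metabolic rate of roughly 0.22 µmol/min/g. Note that this form of the conversion is the conventional definition and is an assumption here; the text above states only that a scale factor is applied.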
There is significant effort in PET research to develop new radiotracers for the quantification of various aspects of tissue function. Of particular interest is the development of radiotracers that may be classified as disease biomarkers. Such tracers possess high specificity and sensitivity towards a particular aspect of normal or abnormal tissue function, and can be used to quantify a subject's response to therapy [3]. An example of such a biomarker is PiB, which binds to β-amyloid plaques in neuronal tissue. This property makes the tracer useful in studies of Alzheimer's disease. Tracking of disease progression and therapeutic response with imaging biomarkers may be used to devise a treatment plan tailored individually to each patient. There is also substantial interest in the development of theranostic compounds that can be used for simultaneous therapy and diagnostics [4].

During the tracer development cycle, tracers must pass thorough clinical and pre-clinical trials that determine their suitability for medical or research use. Often newly developed radiotracers fail to become widely adopted imaging probes [5], due to synthesis complexity, toxicity, or long-term retention in the organism. Characterization of a tracer's kinetics and metabolic pathways is an important step in the tracer validation process. Tracer kinetics are investigated using dynamic PET imaging protocols.

1.1.3 Static vs. Dynamic Imaging

Two protocols of coincidence data acquisition in PET are used: static and dynamic. In static imaging, the entire coincidence data acquired during the scan are reconstructed into a single image (Fig. 1.2), which represents the average distribution of the radiotracer over the period of data acquisition. The detailed time course of the activity distribution in the tissue in this case remains unknown. If the tracer distribution changes drastically during the scan, the resulting images are of limited diagnostic value. Thus, static imaging is best suited for tracers that tend to accumulate in tissues (e.g. FDG), or for experimental protocols in which tracers reach a steady state. To avoid the phase of rapid tracer re-distribution, a time interval is usually allowed between the tracer injection and the beginning of the scan. For example, in static imaging with FDG, this interval is usually between 30 and 60 minutes.

In raw format, data acquired by a PET scanner represent the number of coincidence counts measured by different detector pairs. Count images reconstructed directly from these data do not preserve absolute activity quantification. To maximize the quantitative accuracy of static imaging, a number of corrections must be applied to the coincidence data prior to image reconstruction. These corrections (attenuation, scatter, randoms, dead-time, efficiency, decay) are discussed in Section 1.4.3; a generic sketch of how such corrections combine is given below. With all necessary corrections applied, voxel values in the reconstructed images represent absolute radioactivity concentration, which can be related to tracer concentration.

Assuming that the tracer assumption holds true, and that the tracer is imaged at steady state, the following types of information can be obtained from static quantitative images:
• the average concentration of activity (tracer) in any part of the image volume over the duration of the scan;
• the location of regions in the body with preferential tracer uptake;
• the spatial distribution of the tracer within regions of interest, such as specific organs or tumors.

Static PET imaging with FDG is routinely used in oncology for tumor detection. Methods of static image analysis are discussed in Chapter 5.
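As a rough orientation to how the corrections listed above enter the data, one common arrangement subtracts the additive contaminations (randoms, scatter) and divides out the multiplicative losses (attenuation, detector efficiency, dead-time, decay). The sketch below is a generic illustration under that assumption, not the specific scheme used in this work (which Section 1.4.3 details); all names are hypothetical per-line-of-response quantities.

```python
def corrected_trues(prompts, randoms, scatter, att_factor, efficiency,
                    live_fraction, decay_factor):
    """Generic sketch of static-scan corrections (cf. Section 1.4.3).

    prompts, randoms, scatter -- counts assigned to one line of response
    att_factor    -- probability that both photons survive attenuation (<= 1)
    efficiency    -- relative detector-pair efficiency (normalization)
    live_fraction -- fraction of time the detectors were live (dead-time)
    decay_factor  -- remaining activity fraction averaged over the frame
    """
    trues = prompts - randoms - scatter                       # additive terms
    losses = att_factor * efficiency * live_fraction * decay_factor
    return trues / losses                                     # multiplicative terms

# Illustrative single-LOR numbers (all invented): 1000 prompts, of which 150
# are randoms and 100 scatter; 25% photon-pair survival, 95% relative
# efficiency, 98% live time, 90% average remaining activity.
print(corrected_trues(1000, 150, 100, 0.25, 0.95, 0.98, 0.90))  # ~3580
```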
In other words, the tracer distribution soon after the injection may be very different from that at the end of the scan. By measuring the time course of radiotracer concentration by means of dynamic PET imaging, valuable information regarding the physiological properties under study can be gained.

In dynamic PET imaging, scans are typically initiated immediately after the tracer injection, and the acquisition of coincidence data is divided into a sequence of relatively short time frames (on the order of minutes) (Fig. 1.2). Depending on the tracer and study protocol, the total duration of dynamic scans may be between 5 and 180 minutes. The frames are reconstructed into a time-series of quantitatively accurate activity concentration images, with all necessary corrections applied to the coincidence data in each frame. The timing of the frames, called the frame definition, must take into account the following aspects:

1. the change in tracer distribution is more rapid at the beginning of the scan (when the tracer gets delivered to tissues) than near the end;
2. due to isotope decay, progressively fewer counts are detected by the scanner with time.

With these considerations in mind, frame durations are typically chosen to be longer towards the end of dynamic sequences. For example, the dynamic DTBZ image sequence shown in Fig. 1.3 used the frame definition (4×60 s, 3×120 s, 8×300 s, 1×600 s); a short bookkeeping sketch for this frame definition is given at the end of this subsection. Note that dynamic images are typically noisier than static images, due to the limited number of coincidence events acquired in each frame.

If absolute quantification of the PET data is maintained throughout a dynamic scan, voxel values from the reconstructed frames describe the change in tracer concentration over time in a particular location. In dynamic imaging protocols, the time course of tracer concentration in the blood is usually measured during the scans, since it drives the tracer distribution in the tissue. From the time courses of tracer concentrations in the tissue and in the blood, a variety of physiological parameters can be estimated using kinetic modeling (KM), which is described in detail in Section 1.5.

Figure 1.2: Diagrams showing the difference between static and dynamic PET imaging. In static imaging, coincidence data from the entire scan are used to reconstruct a single image. In dynamic imaging, coincidence data are split into a sequence of frames, and each frame is reconstructed individually.

Examples of physiological parameters that can be estimated from dynamic PET imaging combined with KM include:

• rates of glycolytic metabolism;
• levels of tissue perfusion by the blood;
• association and dissociation rates between radioligands and receptors;
• tracer distribution volumes and binding potentials.

Measurement of these and other biological parameters from dynamic images can be beneficial in many medical studies where quantitative and process-specific information is required to characterize disease signatures and/or the effects of therapies.
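As a concrete illustration of frame-definition bookkeeping, the following minimal Python sketch (an illustration, not code from the original study) expands the DTBZ frame definition quoted above into per-frame start and end times:

```python
# Expand a PET frame definition into (start, end) times in seconds.
# The definition matches the DTBZ example: 4x60 s, 3x120 s, 8x300 s, 1x600 s.
frame_definition = [(4, 60), (3, 120), (8, 300), (1, 600)]

def frame_times(definition):
    """Return a list of (start, end) tuples, one per frame."""
    frames, t = [], 0.0
    for count, duration in definition:
        for _ in range(count):
            frames.append((t, t + duration))
            t += duration
    return frames

for i, (t0, t1) in enumerate(frame_times(frame_definition), start=1):
    print(f"frame {i:2d}: {t0:6.0f} - {t1:6.0f} s")
# Total scan length: 4*60 + 3*120 + 8*300 + 1*600 = 3600 s (60 min).
```

Exactly this kind of table is needed when list-mode data are re-framed, as discussed in Section 1.2.7.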
1.2 Physics and Technology of PET

1.2.1 Positron Emission and Annihilation

The decay of a proton-rich radionuclide X via positron emission is described by the equation

    ^{A}_{Z}X \rightarrow\ ^{A}_{Z-1}Y + \beta^{+} + \nu_{e}    (1.5)

where Y is the daughter nucleus, Z is the number of protons, β+ is a positron, and νe is an electron neutrino. The amount of energy released during the decay is determined by the isotope properties. This energy is shared between the positron and the neutrino in a random proportion. Neutrinos leave the system rapidly without significant interaction with matter.
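As a concrete instance of Eq. 1.5, the workhorse PET isotope 18F decays to 18O (the numbers below are standard nuclear data, quoted for illustration rather than taken from this thesis):

```latex
^{18}_{9}\mathrm{F} \;\rightarrow\; ^{18}_{8}\mathrm{O} + \beta^{+} + \nu_{e}
```

The decay energy available to the leptons is about 634 keV; since it is shared randomly with the neutrino, the emitted positrons have a continuous energy spectrum from zero up to approximately 634 keV.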
Figure 1.3: Decay-corrected DTBZ images of the brain obtained from a single dynamic PET scan. Note that despite decay correction, the data become progressively noisier with time. Data acquired in the early frames are noisy because the frame durations are short and the tracer had not yet accumulated in the tissues. The frame number is shown in the left corner, the frame duration in the right corner.

Positrons, on the other hand, undergo multiple Coulomb interactions in tissue matter that randomly change their trajectories. With each Coulomb interaction, the trajectory deviations may be significant, since positrons have the same rest mass as electrons. These interactions gradually reduce the positron energy until it reaches thermal energy levels (this process occurs on the time scale of nanoseconds). At this point, there is an increasingly high likelihood that the positron will annihilate with an electron. The average distance that positrons travel prior to the annihilation is called the positron range (mentioned previously in Section 1.1.2 and reported in Table 1.1). It is often quantified as the FWHM (or the entire probability density function) of the distances between the points of positron emission and annihilation. This should not be confused with the total path length that positrons traverse prior to the annihilation, which may be much longer than the positron range. A more thorough discussion of the positron range and its effect on PET images can be found in [1, 6].

In the process of positron-electron annihilation, the mass of the particles is converted to the energy of emitted gamma photons, either via direct annihilation, or via production of a short-lived orbiting positron-electron pair called positronium [7]. In 99.5% of cases, positron-electron annihilation produces two gamma rays with energy 511 keV that are emitted in almost opposite directions (back-to-back) [8]:

    \beta^{+} + e^{-} \rightarrow \gamma + \gamma    (1.6)

The other 0.5% of cases correspond to the production of three gamma photons (when the particle spins in positronium are parallel); this type of reaction is ignored in PET due to its low rate of occurrence. When two photons are produced, their directions are not perfectly collinear, since the parent particles have non-zero momentum. The distribution of angles between the emitted gamma rays is approximately Gaussian, with the mean and FWHM equal to 180 and 0.4 degrees, respectively [9].

1.2.2 Photon Interaction in Matter

After positron annihilation, the resulting gamma photons may interact with the surrounding matter. Photons undergo two types of interactions: photoelectric absorption and Compton scattering. In photoelectric absorption, a gamma photon is absorbed by an atom, and the energy is transferred to one of the atom's core electrons, which leaves the atom. The probability of photoelectric absorption increases with the number of protons in the nucleus, and decreases with photon energy. For 511 keV photons in tissues, the rate of photoelectric absorption is relatively low: less than 1% of photons undergo this interaction [7].

Compton scattering is the main mechanism of photon-matter interaction in PET, wherein a gamma photon interacts with a free or loosely bound outer-shell (valence) electron of an atom. During the interaction, the photon is deflected (scattered), and part of its energy is transferred to the electron. The energy of the scattered photon can be determined using the equation

    E' = \frac{E}{1 + (E/m_0 c^2)(1 - \cos\theta)}    (1.7)

where E is the energy of the photon before the interaction, m_0 is the rest mass of the electron, and θ is the scattering angle. The probability of scattering through a given angle depends on the energy of the incident photon (511 keV for the first interaction) and is given by the Klein-Nishina equation. The effects of photoelectric absorption and Compton scatter on PET data are further discussed in Section 1.3.
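To make Eq. 1.7 concrete, the following short Python sketch (an illustration, not part of the thesis) evaluates the scattered photon energy for a 511 keV annihilation photon:

```python
import math

M0C2 = 511.0  # electron rest energy, keV

def scattered_energy(E, theta):
    """Photon energy after Compton scattering through angle theta (Eq. 1.7).

    E is the incident photon energy in keV; theta is in radians.
    """
    return E / (1.0 + (E / M0C2) * (1.0 - math.cos(theta)))

for deg in (0, 30, 90, 180):
    E_out = scattered_energy(511.0, math.radians(deg))
    print(f"theta = {deg:3d} deg -> E' = {E_out:5.1f} keV")
# 180 deg back-scatter gives 511/3 = 170.3 keV, the lowest possible energy;
# the loss grows with scattering angle, which is what makes energy-based
# rejection of scattered events possible (Section 1.3.4).
```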
1.2.3 Scintillation Crystals

PET scanners consist of multiple gamma detectors designed to capture annihilation photons that pass through the body. Gamma detectors typically consist of scintillation crystals coupled to photo-sensitive elements, such as PMTs. Photons that enter the scintillation crystals undergo a series of interactions with the atoms of the scintillator material. The atoms become excited due to these interactions, and transition back to the ground state by emitting scintillation photons that are detected (and amplified) by the photo-sensitive elements. The amount of generated light is proportional to the energy deposited by the gamma particle in the scintillator crystal.

Over the past decades, several scintillator materials have been developed and used in PET. Scintillators are characterized by the following properties:

1. Stopping power: the thickness of scintillation material that is required to absorb a given amount of radiation. This parameter determines the thickness of the scintillation crystals in the detectors, and the sensitivity of a scanner.
2. Conversion efficiency: the fraction of the energy deposited by a gamma particle that is converted to detectable light.
3. Linearity: the degree of proportionality between the deposited energy and the light output.
4. Energy resolution: how well the material is able to discriminate between different gamma energies. This property determines the ability of a scanner to discriminate between scattered and non-scattered gamma photons.
5. Decay time: the time required for the excited electrons to return to the ground state. This property determines how fast the light pulses decay in the scintillator material. With shorter decay times, more gamma particles can be detected per unit of time.

The scintillators most commonly used in PET are listed in Table 1.3, together with their most relevant physical characteristics. There is no single material that can be considered the best in terms of all performance aspects. Instead, there exists a trade-off between different desirable characteristics, such as high stopping power, high light yield, and short decay time. The use of bismuth germanate (BGO) was common in the previous generation of PET scanners due to its high light yield and stopping power. The majority of new PET scanners use LSO (cerium-doped lutetium oxyorthosilicate) and LYSO (cerium-doped lutetium-yttrium oxyorthosilicate) due to their excellent timing resolution (short decay time), high light yield, and acceptable stopping power [10]. The combination of high stopping power and light output in particular has made it possible to reduce the size of PET detector elements and thereby substantially increase the imaging resolution, without loss of sensitivity. A short decay time enables the use of shorter coincidence windows, leading to a greater dynamic range of acceptable count rates and a lower impact of random coincidences. The downside of LSO is that it is radioactive (due to the presence of the naturally occurring isotope 176Lu, which decays by β−-emission followed by a cascade of gamma emissions). This results in low levels of background counts. A review of scintillator materials in PET can be found in [11].

                               NaI:Tl   BaF2   BGO    GSO    LSO    LYSO
Light yield (photons/MeV)      41000    1400   9000   8000   26000  26000
Attenuation length (mm)        28.8     23     10.5   14.3   11.6   20
Decay time (ns)                230      0.8    300    60     40     50
Energy resolution (%)          12       9.5    23     7.6    11.4   11.4
µ (cm^-1) at 511 keV           0.35     0.45   0.96   0.70   0.86   0.87
Peak emission wavelength (nm)  410      220    480    430    420    420

Table 1.3: The most common scintillator materials used in PET and their physical characteristics (BGO: Bi4Ge3O12; GSO: Gd2SiO5:Ce; LSO: Lu2SiO5:Ce). Energy resolution is measured at 662 keV.

1.2.4 Scintillation Light Detectors

Scintillation crystals are coupled to photo-sensitive detectors that capture and measure the energy of the scintillation light. In dedicated PET scanners, high-voltage PMTs are most often used as the light detectors. The advantage of using PMTs in PET is that they provide linear signal amplification, a good signal to noise ratio (SNR) and high gain: amplification on the order of 10^5 to 10^7 is achieved at the detector level before the signal is passed down the processing pipeline (e.g. energy integrators and coincidence electronics). The disadvantage, however, is that PMTs are relatively large. In early PET scanner designs, each scintillation crystal was coupled to a separate PMT. The bulkiness of PMTs imposed limitations on the minimum crystal size and the scanner resolutions that could be achieved. There was also the consideration of the high cost associated with using hundreds of PMTs in a scanner.

The issues of size and cost were partially addressed by the crystal block-sharing design introduced by Casey and Nutt [12]. The design is schematically illustrated in Fig. 1.4. A monolithic crystal block is split into an 8-by-8 grid by partial saw-cuts. The grid elements are used as individual detector crystals with a common base, which is coupled to a 2-by-2 grid of PMTs. In this arrangement, light produced by a gamma photon in one of the grid crystals propagates throughout the entire block, and is detected by all four PMTs. The amount of light detected by each PMT depends on the geometric arrangement and the distance between the interaction crystal and the PMT. Using the so-called Anger-type logic, it is possible to identify the interaction crystal from the fraction of light detected by each PMT. For the arrangement shown in Fig. 1.4, the interaction crystal coordinates x and y can be computed using the equations

    x = \frac{(B + D) - (A + C)}{A + B + C + D}    (1.8)

    y = \frac{(A + B) - (C + D)}{A + B + C + D}    (1.9)

where A, B, C, and D are the (integrated) light signals measured by the respective PMTs. The block-sharing design is used in the majority of commercially available PET scanners, since it significantly reduces the cost and size of PET detectors.

Figure 1.4: A schematic illustration of the detector block design. APDs or SiPMs can be used instead of the PMTs.
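A minimal Python sketch of the Anger-type crystal identification of Eqs. 1.8-1.9 (the PMT layout is an assumption for illustration; actual blocks map the ratios to discrete crystal indices via a measured look-up table):

```python
def anger_position(A, B, C, D):
    """Relative interaction coordinates from four PMT signals (Eqs. 1.8-1.9).

    Assumed layout: A and B are the top-row PMTs, C and D the bottom row,
    with B and D on the right. Returns (x, y), each in [-1, 1].
    """
    total = A + B + C + D
    x = ((B + D) - (A + C)) / total
    y = ((A + B) - (C + D)) / total
    return x, y

# An event near the top-right corner of the block: B collects the most light.
print(anger_position(A=120.0, B=520.0, C=60.0, D=220.0))  # -> (0.61, 0.39)
```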
In an APD, a high voltage is applied across a layer of photo-sensitive semiconductor. When a scintillation photon hits the semiconductor, an electron-hole pair is produced. The electron and the hole are accelerated in opposite directions by the electric field, and enough energy is imparted to produce secondary electrons and holes. This process repeats several times and produces an avalanche of electrons and holes that amplifies the signal by several orders of magnitude. Compared to PMTs, APDs have a higher quantum efficiency [14] and are much smaller in size: a single APD is typically only a few millimeters thick [15]. However, their gain is one or two orders of magnitude lower than the gain of PMTs.

Silicon photomultipliers (SiPMs) represent the next iteration of the APD technology and are currently in active development. The gain of SiPMs is comparable to that of PMTs. SiPMs consist of thousands of APD micro-cells that are a few micrometers in size. The cells operate in Geiger mode, and become activated by incoming scintillation photons. The output signal from an SiPM is proportional to the number of activated cells. In principle, SiPMs are able to resolve single-photon interactions (subject to background noise). A single SiPM-based photodetector array consists of several SiPM pixels, ranging in size from 1 to 4 mm (each containing thousands of micro-cells). In addition to their small size and high gain, SiPMs offer good light-pulse timing resolution. Studies have demonstrated that sub-nanosecond (240 ps) timing resolution can be achieved with SiPMs coupled to LYSO crystals [16].

1.2.5 Coincidence Detection

The goal of coincidence detection is to identify those gamma photons that are likely to originate from the same annihilation event. A schematic representation of the coincidence detection process is shown in Fig. 1.5.
1.10is replaced with volume integral inside a tube that connects the detectors. Fromcoincidence counts measured along multiple LORs that pass through the subject atdi↵erent angles and locations, an image of activity distribution can be reconstructed161.2. Physics and Technology of PETFigure 1.5: Diagram of the process of coincidence detection in PET.using one of image reconstruction algorithms.When a detected coincidence event corresponds to a real positron-electron an-nihilation, it is called a true coincidence event. On the other hand, random eventsmay be recorded when two unrelated gamma photons happen to be detected at thesame time. This phenomenon is discussed in detail in Section 1.3.5. In addition,scattered events may be recorded when one or both of the annihilation photonshave been Compton-scattered prior to the detection (Section 1.3.4). The combinedtrue, random and scattered coincidences constitute a set of prompt coincidences. Acommonly used metric of scanner performance in PET is the noise-equivalent countrate (NECR), defined asNECR =T 2T +R+ S(1.11)where T is the true coincidence rate, R is the random coincidence rate, and S is thescattered coincidence rate. Ideally, NECR should be equal to T and increase linearlywith activity. In practice, NECR is always lower than T , and the discrepancy growswith higher activity.Even in state-of-the-art scanners, the majority of positron-electron annihilationevents that occur in the scanner’s FOV are not detected due to gamma attenuation,limited detector eciency, finite solid angle of the PET camera, and other factorsthat are considered in greater detail in Section 1.3. The sensitivity of a typical PETscanner operating in 3D mode (discussed in the next section) is on the order of 3 to10 percent.171.2. Physics and Technology of PET1.2.6 2D and 3D AcquisitionIn early PET tomographs, scans were performed with metal septa installed betweenthe detector rings (Fig. 1.6). The purpose of the septa was to absorb coincidencephotons that traveled along oblique LORs relative to the tomograph’s axis (LORsthat connected di↵erent detector rings). However, the septa did not e↵ectively blockannihilation photon pairs that traveled between and interacted in adjacent rings.Thus, for a scanner with N rings, the coincidence LORs were assigned to N directplanes that were co-planar with the rings, and N  1 cross planes that representedthe space in-between the rings. Coincidence data obtained in this manner wereessentially a combination of 2N  1 separable 2D datasets, and each set could bereconstructed using standard 2D tomographic image reconstruction techniques.The desire to increase the number of detected coincidences per scan resulted inthe development of 3D acquisition mode, in which the septa are removed. In 3Dmode, photons can freely travel and interact in di↵erent rings. Thus, for a givenamount of injected activity and scan duration, a much greater number of coinci-dence events can be acquired compared to 2D. 
1.2.6 2D and 3D Acquisition

In early PET tomographs, scans were performed with metal septa installed between the detector rings (Fig. 1.6). The purpose of the septa was to absorb coincidence photons that traveled along LORs oblique relative to the tomograph's axis (LORs that connected different detector rings). However, the septa did not effectively block annihilation photon pairs that traveled between, and interacted in, adjacent rings. Thus, for a scanner with N rings, the coincidence LORs were assigned to N direct planes that were co-planar with the rings, and N − 1 cross planes that represented the space in-between the rings. Coincidence data obtained in this manner were essentially a combination of 2N − 1 separable 2D datasets, and each set could be reconstructed using standard 2D tomographic image reconstruction techniques.

The desire to increase the number of detected coincidences per scan resulted in the development of the 3D acquisition mode, in which the septa are removed. In 3D mode, photons can freely travel and interact in different rings. Thus, for a given amount of injected activity and scan duration, a much greater number of coincidence events can be acquired compared to 2D. However, 3D acquisition prompts the consideration of the following complications:

• Not all possible directions are represented in the 3D coincidence data, leading to problems in image reconstruction (elaborated in Section 1.4.1).
• The detection and counting of oblique coincidences increases the fraction of random and scattered coincidences, typically by a factor of ~3.
• Image reconstruction from 3D coincidence data is computationally demanding and requires more sophisticated algorithms compared to 2D.

By acquiring data in 3D mode, 5 to 7 times greater sensitivity can be achieved compared to 2D [17], and images with substantially higher SNRs can be obtained. Therefore, the majority of newly developed PET scanners operate in 3D mode.

Figure 1.6: Diagrams of PET cameras set up to work in 2D acquisition mode and 3D acquisition mode. In 2D mode, metal septa block oblique gamma photons. In 3D mode, the septa are removed.

1.2.7 Coincidence Data Representation

The coincidence data are typically collected and stored in either list-mode format (more common for 3D acquisitions) or histogram format (2D and 3D acquisitions). Although the histogram format was historically the first method of data storage, the list-mode format, which gained popularity relatively recently, represents a more straightforward and flexible approach.

In list-mode acquisition, coincidence events are recorded individually into a file in the order in which they are detected, forming a list of coincidences interspersed with time stamps and other information (e.g. gating information, as elaborated in Section 2.3.1). Each coincidence entry in the list contains at least the coordinates of the LOR (or histogram bin) along which the event was detected. Depending on the scanner, a coincidence entry may also include the energies of the detected gamma photons, as well as the time delay between the interactions; this information is used in time-of-flight (TOF) PET imaging. Besides prompt coincidences, delayed coincidence events or single event rates may be reported, which makes it possible to estimate the fraction of random coincidences along different LORs.

The coordinates of coincidence LORs may be represented in different ways. For example, an LOR can be defined by the addresses of the interaction crystals, which include the ring number, the block number, and the crystal number. Alternatively, an LOR in a cylindrical coordinate system can be represented using four parameters [r, φ, z, θ] (as illustrated in Fig. 1.7A), where

• r is the radial offset,
• φ ∈ [0, π] is the azimuthal angular coordinate,
• z is the axial position (counted from the origin to the average position of the interaction rings),
• θ is the copolar angle between the LOR and the transaxial planes.

This representation is particularly suitable for histogramming coincidence events into projections along different directions, as discussed next. The time resolution of list-mode data is typically on the order of milliseconds, and thus, if needed, the data can be accurately split into frames, a feature that is particularly useful in dynamic studies.
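Under one possible convention (sign and range conventions differ between scanners; this is an illustrative assumption, not the definition used by any particular system), the [r, φ, z, θ] parameters can be computed from the positions of the two interaction crystals as follows:

```python
import math

def lor_parameters(alpha1, z1, alpha2, z2, R):
    """Convert a detector pair to (r, phi, z, theta) LOR coordinates.

    alpha1, alpha2 -- transaxial angular positions of the crystals (rad)
    z1, z2         -- their axial positions
    R              -- detector ring radius
    """
    r = R * math.cos((alpha2 - alpha1) / 2.0)       # radial offset
    phi = ((alpha1 + alpha2) / 2.0) % math.pi       # azimuthal angle
    z = (z1 + z2) / 2.0                             # axial position
    chord = 2.0 * math.sqrt(max(R**2 - r**2, 0.0))  # transaxial chord length
    theta = math.atan2(z2 - z1, chord)              # copolar angle
    return r, phi, z, theta

# Two nearly opposing crystals, one ring (4 mm) apart axially:
print(lor_parameters(0.0, 0.0, math.pi, 4.0, R=300.0))
# -> r ~ 0 (the LOR passes near the center), phi = pi/2, z = 2.0,
#    and a small copolar angle of about 0.007 rad.
```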
In the histogram format, coincidence events are histogrammed according to their coordinates. Histograms of coincidence data that are organized using the parameters r, φ, z, and θ are termed sinograms. Each sinogram bin corresponds to a unique sub-range of these parameters, such that a single bin represents several distinct LORs/detector pairs. For example, coincidences recorded along LORs with similar values of θ may be added to the same bin [18]. Histogramming of a list-mode file proceeds by considering the coordinates of each event in the list, and incrementing by one the value of the corresponding bin. Since the timing information of individual events is discarded, this approach considerably reduces the storage requirements at the expense of spatial and temporal resolution (this may not be the case if the number of sinogram bins is greater than the number of acquired events).

Sinograms represent the projections of the radioactivity distribution along different directions, as described by Eq. 1.10. Thus, the sinogram space (or coordinates) is typically referred to as the "projection space". This fact can be utilized in analytic methods of PET image reconstruction, wherein the necessary projection data can be taken directly from sinograms. A 3D sinogram can be interpreted as a stack of 2D sinograms that correspond to different fixed values of θ and z. 2D sinograms are traditionally visualized as images in which rows correspond to bins with different r and columns correspond to different values of φ. As shown in Fig. 1.7B, a cylindrical source (or a point source) appears in a 2D sinogram in the shape of a sinusoidal curve, explaining the origin of the term.

Prior to the wide adoption of the list-mode format, most PET scanners performed on-the-fly histogramming of coincidence events. In other words, sinograms were computed automatically by specialized hardware from the detected coincidence events, and the exact information about the detector pair location and coincidence time was discarded. In dynamic imaging protocols, this implied that frame definitions had to be made prior to the scans. The scanner would then output separate 2D or 3D sinograms for each frame. However, the optimal frame definition (in terms of image quality or KM applicability) may not be known prior to the scan. In this respect, list-mode acquisition provides a clear advantage, since it allows the data to be re-framed and re-histogrammed as many times as necessary. In addition, list-mode acquisition can be leveraged for more accurate motion correction, as discussed in Section 2.2.3.

The use of sinograms in early tomographs was driven partially by storage considerations. In modern PET scanners, list-mode files may include tens or hundreds of millions of events, and file sizes may be on the order of several gigabytes. The same data in sinogram format may only require tens of megabytes of memory, since the storage requirement is determined not by the number of coincidence events (~10^8), but by the number of bins (~10^6). These figures strongly depend on the scanner and on the data acquisition and processing software. While the compression of coincidence data represented a considerable advantage in the past, modern consumer-grade computer systems are capable of storing terabytes of data at a relatively low cost, and thus list-mode acquisition and storage have become commonplace.
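A minimal numpy sketch of the histogramming operation described above (the bin indices are assumed to be pre-computed per event; real list-mode formats are scanner-specific):

```python
import numpy as np

n_r, n_phi = 128, 96
sinogram = np.zeros((n_r, n_phi), dtype=np.int64)

# Stand-in for a decoded list-mode stream of one million events:
rng = np.random.default_rng(0)
ev_r = rng.integers(0, n_r, size=1_000_000)
ev_phi = rng.integers(0, n_phi, size=1_000_000)

# Increment the bin of every event (np.add.at handles repeated indices).
np.add.at(sinogram, (ev_r, ev_phi), 1)

print(sinogram.sum())  # 1000000: each event landed in exactly one bin
```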
1.3 Image Quantification Factors

1.3.1 Positron Range

As discussed in Section 1.2.1, after emission positrons travel some distance in tissues prior to annihilating with electrons. Since positrons travel in random directions, a spatial uncertainty is introduced into the coincidence data that is proportional to the positron range: source nuclei are always displaced away from the LORs along which the coincidence events are recorded. Therefore, a positron-emitting point source will always appear blurred in images, regardless of the scanner's intrinsic resolution. The degree of blurring varies depending on the energy range of the emitted positrons. For example, a greater amount of blurring is expected with 15O-based tracers than with 18F-based tracers. Positron range introduces a fundamental limit on the resolution that is achievable in PET.

1.3.2 Photon Non-collinearity

The effect of gamma photon non-collinearity, discussed in Section 1.2.1, is similar to that of positron range: a degree of spatial uncertainty is introduced into the coincidence data and reconstructed images. From simple geometric considerations, it is clear that the degree of uncertainty increases with the distance that the photons travel away from the point of annihilation. Thus, the effect of non-collinearity is expected to be greater in larger PET cameras. It can indeed be shown that the FWHM of the non-collinearity-induced spatial blurring is given by the equation [19]

    FWHM = 0.0022 D    (1.12)

where D is the detector ring diameter.

Figure 1.7: A. Parametrization of an LOR in terms of the radial offset r, azimuthal angle φ, copolar angle θ, and axial position z. B. Two-dimensional sinograms consist of a series of radial projections taken at different angles φ. The value of g(r, φ) represents the integrated intensity (Eq. 1.18).

Similarly to positron range, photon non-collinearity imposes a limit on the resolution that can be achieved in PET. Positron range and photon non-collinearity can be incorporated into statistical PET system models (Section 1.4.2). This makes it possible to partially compensate the reconstructed images for these phenomena; however, their effect on the images cannot be completely eliminated.
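Eq. 1.12 is easy to evaluate; the ring diameters below are hypothetical, chosen only to contrast a small-animal scanner with a whole-body one:

```python
def noncollinearity_fwhm_mm(ring_diameter_mm):
    """Blurring FWHM due to photon non-collinearity (Eq. 1.12)."""
    return 0.0022 * ring_diameter_mm

for D in (150.0, 800.0):  # assumed ring diameters, mm
    print(f"D = {D:5.0f} mm -> blur = {noncollinearity_fwhm_mm(D):4.2f} mm FWHM")
# -> 0.33 mm and 1.76 mm: non-collinearity matters mainly for large rings.
```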
1.3.3 Photon Attenuation

As discussed in Section 1.2.2, gamma photons may experience two types of interaction with matter prior to being detected: photoelectric absorption and Compton scattering. If either type of interaction occurs with one or both of the annihilation photons, a coincidence event that should have been recorded along a particular LOR ends up being missed. The phenomenon in which one or both annihilation photons are absorbed or deflected from their original trajectory via Compton scattering is called attenuation. Although attenuation is the basis of imaging modalities such as X-ray imaging and CT, in PET it reduces the number of acquired coincidence counts and is detrimental to the image quality.

The loss of coincidence counts that occurs due to attenuation along a particular LOR can be estimated analytically. The total probability of any interaction that a photon might experience per unit of traveled distance is quantified by the linear attenuation coefficient (µ), which depends on the material and the energy of the photon. Consider a photon that travels through matter along a line segment [a, b], as depicted in Fig. 1.8A. The probability p_ab that the photon does not interact with matter while traversing the segment is given by the equation

    p_{ab} = e^{-\int_a^b \mu(x)\,dx}    (1.13)

where the integral is taken along the segment, and µ(x) is the linear attenuation coefficient at location x. For two annihilation photons traveling from the annihilation origin b along segments [b, a] and [b, c], the combined probability of non-interaction is equal to the product of the individual probabilities p_ba and p_bc:

    p = p_{ba} p_{bc} = e^{-\left(\int_b^a \mu(x)\,dx + \int_b^c \mu(x)\,dx\right)} = e^{-\int_a^c \mu(x)\,dx}    (1.14)

This equation demonstrates that the amount of attenuation along an LOR is independent of the origin of annihilation. In other words, photons emitted anywhere along the LOR are attenuated by the same amount.

A large fraction of coincidences (60-90%) may be lost due to attenuation, particularly in 3D acquisition mode, where oblique LORs pass through a lot of tissue. Approximately 7 cm of tissue can attenuate half of the emitted 511 keV photons. Therefore, attenuation correction (discussed in Section 1.4.3) is required in quantitative PET imaging. Although quantification accuracy may be recovered through corrections, the loss of coincidence counts results in increased noise. For example, if two objects are imaged that have the same activity but different attenuation, the image of the object with greater attenuation will be noisier.
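The survival probabilities of Eqs. 1.13-1.14 can be evaluated directly. In the sketch below, the attenuation coefficient of water at 511 keV is an assumed textbook value of roughly 0.096 cm^-1 (which reproduces the ~7 cm half-thickness quoted above):

```python
import math

MU_WATER_511 = 0.096  # assumed linear attenuation coefficient, 1/cm

def pair_survival(path_cm, mu=MU_WATER_511):
    """Probability that both annihilation photons traverse the given
    total tissue path along the LOR without interacting (Eq. 1.14)."""
    return math.exp(-mu * path_cm)

for path in (10.0, 20.0, 40.0):
    print(f"{path:4.0f} cm water-equivalent -> "
          f"{100 * pair_survival(path):4.1f}% of photon pairs survive")
# ~38%, ~15% and ~2%, independent of where on the LOR the annihilation
# occurred, consistent with the 60-90% losses quoted above.
```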
1.3.4 Scattered Events

If one or both of the annihilation photons are scattered, the pair may still be detected as a coincidence along one of the scanner's LORs that does not pass through the annihilation origin (Fig. 1.8B). Coincidence events of this type are termed scattered coincidences. Attenuation and scattered coincidences are related phenomena: some of the attenuated photons may be detected as scattered coincidences. The presence of scattered coincidences in the data reduces contrast and introduces bias in the reconstructed images. Some LORs that pass through low-activity regions (or do not pass through the imaged object) may actually contain more scattered coincidences than true coincidences. The total contribution of scattered events to the detected counts is quantified in terms of the scatter fraction SF, defined as:

    SF = \frac{S}{T + S}    (1.15)

where S is the number of scattered coincidences, and T is the number of true coincidences. Depending on the scanner geometry, object size and accepted energy window, scatter fractions on the order of 10-20% have been measured in 2D PET, and on the order of 30-50% in 3D PET [17]. Therefore, scatter correction is required in most quantitative PET imaging scenarios.

Figure 1.8: A. Attenuation of gamma photons that travel along line segments [a, b] and [a, c] is determined by the distribution of the linear attenuation coefficient µ(x) in the medium. B. If one or both of the coincidence photons become scattered, a scattered coincidence event may be detected. C. Scattering or absorption of one of the coincidence photons results in the detection of a single event. D. A random coincidence may be recorded if two single events are detected within the coincidence window.

In the process of Compton scattering, photons lose energy according to Eq. 1.7. Scattered events can thus be filtered out based on the measured interaction energy in the detector crystals. However, detectors have a finite energy resolution (on the order of 10-20%), and using a narrow window of accepted energies may result in many true coincidence events being rejected. Therefore, additional methods to account for scatter are typically used, as discussed in Section 1.4.3.

1.3.5 Random Coincidences

The most frequent type of event detected in PET scanners is a single event, in which only one photon is detected (Fig. 1.8C). This may occur if the other annihilation photon a) is directed or scattered outside of the PET camera, b) passes through the detectors without being detected, or c) is rejected due to insufficient detection energy. Each detected single photon has a chance of being detected simultaneously with another single photon (i.e. within the coincidence window), in which case the scanner registers a coincidence (Fig. 1.8D). This category of coincidences, which do not correspond to actual annihilation events, are called random (or accidental) coincidences. The presence of random coincidences in PET data introduces bias into the reconstructed images, in the form of a nearly uniform background activity.

For a pair of detectors with single event rates S_i and S_j, the expected rate of random events R_ij detected along the corresponding LOR is given by the equation:

    R_{ij} = \tau S_i S_j    (1.16)

where τ is the duration of the coincidence time window. From this equation it follows that, since the singles rates are proportional to activity, the rate of random coincidences is proportional to the square of the injected activity, unlike the rate of scattered coincidences, which scales linearly. In a typical scan, the single event rate may be on the order of millions of events per second. The fraction of random events can be reduced either by lowering the amount of activity, or by using a shorter coincidence window.
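Eq. 1.16 in numbers (the window duration and singles rates below are hypothetical, chosen only for illustration):

```python
def randoms_rate(tau_s, S_i, S_j):
    """Expected random-coincidence rate along one LOR (Eq. 1.16).

    tau_s is the coincidence window duration in seconds; S_i and S_j are
    the single-event rates of the two detectors in counts/s.
    """
    return tau_s * S_i * S_j

tau = 10e-9                           # assumed 10 ns coincidence window
print(randoms_rate(tau, 5e3, 5e3))    # 0.25 randoms/s for this pair
print(randoms_rate(tau, 10e3, 10e3))  # 1.0: doubling the activity doubles
                                      # both singles rates and quadruples
                                      # the randoms rate
```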
1.3.6 Inter-crystal Scatter

Gamma photons are detected from the light that they generate in the scintillation crystals. Electrons in the crystals are excited through interactions with the gamma particles, and transitions back to the ground state are accompanied by the emission of visible light quanta. Thus, the location of an impinging gamma photon can be identified. However, there is a chance that, after entering the first crystal, the photon is scattered into a neighboring crystal, where it deposits the remaining energy. As a result, the detected coincidence event is mispositioned, i.e. assigned to an incorrect LOR. This phenomenon is referred to as inter-crystal scatter. In detector blocks, counts near the center of a block become "blurred" to the degree to which inter-crystal scatter occurs. In reconstructed images, this results in a loss of contrast and spatial resolution. Near the edges of a crystal block, gamma photons may be scattered outside of the block without being detected. Therefore, fewer counts are typically detected in the edge crystals of a detector block, compared to the central crystals.

Figure 1.9: Gamma photons that interact in the edge crystals of a detector block have a greater chance of escaping the block. Bars above the crystals demonstrate a possible distribution of gamma counts detected by the respective crystals, for a uniform incident gamma beam.

1.3.7 Detector Efficiency

In a PET scanner, the sensitivity or efficiency of gamma photon detection varies between different detector blocks, as well as between individual detector elements. Typical differences in sensitivity are on the order of 10%. The variations in sensitivity may originate from:

• different photon incidence angles;
• crystal and light guide imperfections;
• variations in the performance of the PMTs, APDs or SiPMs.

In reconstructed images, sensitivity variations may manifest as artefacts or increased noise unless they are taken into account by detector normalization.

Figure 1.10: Schematic illustration of the parallax effect. On the left, the impinging gamma photon is likely to be detected in a single crystal. On the right, the photon may be detected in any of the three crystals shown in blue.

1.3.8 Parallax Effect

Consider the two cases of interaction between a photon and a detector block shown in Fig. 1.10. In the first case, the photon hits the detector at zero angle relative to the surface normal. Ignoring inter-crystal scatter, the photon will be detected in the crystal upon which it impacts. In the second case, the photon hits the detector at a 45-degree angle. There is a chance that the photon penetrates the first crystal without interaction, and instead gets detected in one of the neighboring crystals. This results in a loss of precision of the spatial event positioning. This example demonstrates that events recorded along LORs with higher angles of incidence have lower spatial resolution, a phenomenon called the "parallax effect". In reconstructed images, the parallax effect manifests as a radial reduction of resolution away from the image center.

1.3.9 Dead-time

High count rates may affect the ability of a tomograph to accurately process and record coincidence events. In scintillation crystals, rapid successive interactions with different gamma photons cause pulse pile-up (scintillation pulse inseparability). Crystal identification and energy discrimination processes require time. Coincidence electronics also have a maximum rate of event processing. The combined period of time during which coincidence events are not detected is termed dead-time. There may be a processing bottleneck caused by one of the components in the processing chain (usually at the crystal level). Since radioactive decay is a random process, there is always a chance that annihilation events are missed due to dead-time, even at low count rates. At high count rates, dead-time may cause significant bias in quantification. The amount of administered activity must be chosen with this consideration in mind.

1.3.10 Radioactive Decay

In quantitative PET imaging, particularly in dynamic PET, the goal is to obtain images of activity concentration that can be related to tracer concentration. While the total number of tracer molecules is expected to remain the same during the scan, the number of positron emissions and the number of acquired coincidences per second decrease with time, according to the law of radioactive decay. In order to account for the changing ratio between the activity and the tracer concentration, decay correction must be applied to the acquired count data. Decay correction does not compensate for the loss of image quality that occurs due to the reduction of coincidence counts. Therefore, one must take decay into account when designing tracers and imaging protocols.
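A sketch of a frame-wise decay correction factor (one common convention, assumed here rather than taken from this thesis: counts are corrected back to the activity at t = 0 using the frame-averaged decay):

```python
import math

def decay_correction_factor(t_start_s, duration_s, half_life_s):
    """Multiplicative factor that corrects the counts of a frame
    [t_start, t_start + duration] back to the activity at t = 0,
    assuming radioactive decay is the only change during the frame."""
    lam = math.log(2.0) / half_life_s
    decayed = math.exp(-lam * t_start_s) * (1.0 - math.exp(-lam * duration_s))
    return lam * duration_s / decayed

# A 300 s frame starting 30 min into an 18F scan (half-life ~109.8 min):
print(decay_correction_factor(1800.0, 300.0, 109.8 * 60.0))  # ~1.23
```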
1.3.11 Motion

Image quality in PET is largely determined by the number of coincidence counts used to reconstruct an image. On the other hand, the amount of activity that can be safely administered to a patient is limited. In fact, there is an ongoing effort to increase the sensitivity of PET scanners, so that the administered dose can be further reduced. Scans are therefore performed over extended time periods (30-90 minutes) in order to acquire a sufficient number of counts, and in certain cases to allow the tracer molecules to equilibrate/accumulate in the target tissues. Substantial patient motion may occur during the scans, leading to the following effects:

• loss of effective image resolution and contrast due to motion blur;
• mislocalization and over-estimation of regions with high tracer uptake, e.g. metastatic tumors or receptor-rich brain structures;
• a mismatch between the attenuation map and the emission data, which may lead to erroneous estimates of activity concentration;
• in dynamic imaging, motion between frames may lead to incorrect estimates of biology-related tissue parameters.

Chapter 2 provides an overview of the effects that rigid and non-rigid motion may have in PET imaging.

1.4 PET Image Reconstruction

The objective of PET imaging is to obtain the distribution of radioactivity (tracer) concentration in the body. The radioactivity distribution is parameterized as a 3D volume image represented by an array of voxels. The value of each voxel corresponds to the number of β+ decays that occur in the region occupied by the voxel, per unit time. Voxel activity values are estimated using image reconstruction algorithms.

Two types of reconstruction algorithms are used in PET: analytic and iterative. Analytic methods reconstruct images by applying an inverse transform to the projection (sinogram) data. Iterative methods, on the other hand, go through a sequence of progressively more accurate image estimates. The estimates are made using a statistical model of the coincidence data acquisition process. Analytic and iterative reconstruction algorithms can be applied to data acquired in 2D and 3D modes. In addition, several iterative algorithms have been developed that work directly with list-mode coincidence data, without the need to histogram the coincidences into projections. A variety of image reconstruction algorithms of both types have been developed, not only for use in PET but also in single-photon emission CT (SPECT) and CT. Here, only the most widely used algorithms are discussed, namely the filtered back-projection (FBP) algorithm, and the maximum-likelihood expectation maximization (MLEM) algorithm and its variants. The consideration of the sinogram and list-mode MLEM algorithms is particularly useful here, since they are directly related to some of the most common motion correction methods in PET, as discussed in Chapter 2.

1.4.1 Analytic Reconstruction

Analytic reconstruction in PET is based on the mathematics of CT, where a line-integral projection model is assumed. Given a two-dimensional distribution of radioactivity concentration f(x, y), the amount of radiation g_ξ detected along a line (an LOR) ξ is modeled as being equal to the following line integral:

    g_\xi = \int_\xi f(x, y)\,dl    (1.17)

where l is the coordinate along the line. In order for this model to be applicable to histogrammed PET coincidence data, gamma attenuation, detector normalization, and random and scattered coincidences must be taken into account. In addition, one must account for the so-called arc effect: due to the curvature of the detector ring, LORs near the edge of the FOV are spaced closer together than LORs near the center. To make the projection bins evenly spaced in the radial direction, arc correction is performed by re-sampling the sinogram counts.
With the necessary corrections applied, g_ξ approximately represents the number of true coincidence counts in the sinogram bin ξ. For any point on the bin projection line, it holds true that x cos φ + y sin φ = r, where r is the radial offset of the line, and φ is the azimuthal angle (similar notation to that used in Section 1.2.7). Therefore, Eq. 1.17 can be written in the form

    g(r, \varphi) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x, y)\,\delta(x\cos\varphi + y\sin\varphi - r)\,dx\,dy    (1.18)

where δ is the Dirac delta function, f(x, y) is the radioactivity distribution, φ is the angle of the projection line, and r is the radial offset of the projection line. This projective transformation is called the Radon transform, or the X-ray transform. Therefore, in the line-integral projection model of PET, the sinogram data represent the Radon transform of the unknown radioactivity distribution. The goal of analytic reconstruction is to compute the unknown image f(x, y) by applying the inverse Radon transform to the measured projection data g(r, φ).

The standard method to compute f(x, y) is to perform FBP of the sinogram data. FBP can be derived using the similarity between the Radon and Fourier transforms. Let f̂(u, v) denote the two-dimensional Fourier transform of the function f(x, y), where u and v are Cartesian coordinates in the frequency space. Using the identity f(x, y) = FT^{-1}(f̂(u, v)), we can write:

    f(x, y) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} \hat{f}(u, v)\, e^{2\pi i(xu + yv)}\,du\,dv    (1.19)

The function f̂(u, v) can be considered in polar coordinates, with the polar substitution

    u = r\cos\varphi    (1.20)

    v = r\sin\varphi    (1.21)

With this substitution, Eq. 1.19 becomes

    f(x, y) = \int_0^{\pi}\int_{-\infty}^{+\infty} \hat{f}(r, \varphi)\, e^{2\pi i r(x\cos\varphi + y\sin\varphi)}\, |r|\,dr\,d\varphi    (1.22)

where f̂(r, φ) is the Fourier transform of the function f(x, y) considered in polar coordinates. The central slice theorem (also known as the Fourier slice theorem) establishes the equality between the Fourier transform of g(r, φ) and f̂(r, φ), for a given angle φ:

    \hat{f}(r, \varphi) = \hat{g}(r, \varphi)    (1.23)

where ĝ(r, φ) is the one-dimensional Fourier transform of g(r, φ) with respect to r. Note that on the right side of the equation, the variable r represents the frequency axis (this variable reassignment is justified by the theorem). Using the central slice theorem in Eq. 1.22 yields:

    f(x, y) = \int_0^{\pi}\int_{-\infty}^{+\infty} \hat{g}(r, \varphi)\, e^{2\pi i r(x\cos\varphi + y\sin\varphi)}\, |r|\,dr\,d\varphi    (1.24)

The inner integral in this equation represents the inverse Fourier transform of the product ĝ(r, φ)|r| with respect to r:

    g'(x\cos\varphi + y\sin\varphi, \varphi) = \int_{-\infty}^{+\infty} \hat{g}(r, \varphi)\, e^{2\pi i r(x\cos\varphi + y\sin\varphi)}\, |r|\,dr    (1.25)

Note that the function g' is obtained by taking the Fourier transform of g (for a given projection angle φ), multiplying it by |r| in the frequency domain, and taking the inverse Fourier transform. Mathematically, this represents a one-dimensional filtering operation performed in frequency space. Using the function g', Eq. 1.24 can be re-written in the form:

    f(x, y) = \int_0^{\pi} g'(x\cos\varphi + y\sin\varphi, \varphi)\,d\varphi    (1.26)

An analytical solution for f(x, y) can thus be obtained using the FBP operation described by Eq. 1.26:

1. projection data for a given angle φ are Fourier-transformed;
2. multiplied by the ramp filter |r| in the frequency domain (the filtering step);
3. Fourier-transformed back to the spatial domain; and
4. back-projected into the image space (voxel or pixel grid) by successively adding the contributions from all projection angles φ.

In 2D reconstruction, the sinograms of the direct and cross planes are reconstructed individually, and the images are stacked together to form a volume image.
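The four steps above translate almost directly into code. The following toy numpy sketch (an illustration only, with nearest-neighbour back-projection and none of the corrections discussed earlier) reconstructs a 2D image from a parallel-beam sinogram:

```python
import numpy as np

def ramp_filter(sinogram):
    """Steps 1-3: FFT each projection, multiply by |r|, inverse FFT."""
    freqs = np.fft.fftfreq(sinogram.shape[0])  # cycles per sample
    return np.real(np.fft.ifft(
        np.fft.fft(sinogram, axis=0) * np.abs(freqs)[:, None], axis=0))

def fbp(sinogram, phis, n_pix):
    """Step 4: back-project the filtered projections onto a pixel grid.

    sinogram -- array of shape (n_r, n_phi), radial bin size = pixel size
    phis     -- projection angles in radians, one per sinogram column
    """
    n_r = sinogram.shape[0]
    filtered = ramp_filter(sinogram)
    xs = np.arange(n_pix) - (n_pix - 1) / 2.0  # centred pixel coordinates
    X, Y = np.meshgrid(xs, xs)
    image = np.zeros((n_pix, n_pix))
    for k, phi in enumerate(phis):
        # r-coordinate of every pixel for this angle, shifted to bin index:
        r = X * np.cos(phi) + Y * np.sin(phi) + (n_r - 1) / 2.0
        idx = np.clip(np.round(r).astype(int), 0, n_r - 1)
        image += filtered[idx, k]              # accumulate contributions
    return image * np.pi / len(phis)
```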
FBP represents the gold standard of analytic image reconstruction in PET.

Taking the Fourier transform of g for a given φ, multiplying it by |r| in the frequency space, and taking the inverse Fourier transform represents filtering the projection data with a high-pass ramp filter. Forward-projecting an image and then back-projecting it without the ramp filter applied corresponds to a smoothing operation using the radially symmetric and shift-invariant kernel h(r) = 1/r. In other words, forward- and back-projecting the image produces the original image f(x, y) convolved with h(r). This effect occurs because the sampling density is greater in the low-frequency region of the spectral domain than in the high-frequency region. The use of the ramp filter accounts for this sampling non-uniformity; however, it also amplifies the high-frequency components of the image, which may lead to high levels of noise, particularly if the sinogram data are noisy due to a low number of counts. Therefore, instead of the ramp filter, other filters are often used that reduce the amplitude of the high-frequency components in the image. Examples of such filters are shown in Fig. 1.11.

Figure 1.11: Frequency responses of several filters commonly used in analytic image reconstruction. High frequencies are suppressed by the Butterworth, cosine and Hamming filters.

The function f(x, y) can also be obtained by back-projection filtering. In this case, the sinogram data are back-projected first, and the ramp filter is applied to the back-projected image in the frequency domain. Yet another alternative is to use the central slice theorem directly for image reconstruction: in this case, the function ĝ(r, φ) is interpolated on a rectangular grid, and the inverse Fourier transform is taken.

Forward-projection of an image corresponds to coincidence data acquisition in PET. The sampling frequency of the image space is determined by the distance between the adjacent projection lines, or between the LORs. According to the Nyquist theorem, if a function is sampled using a sampling distance d, then the highest-frequency component f_s represented in the recovered function (or its Fourier transform) is determined by the equation

    f_s = \frac{1}{2d}    (1.27)

An imaging system cannot recover frequencies greater than f_s, known as the Nyquist frequency. If the distance between adjacent detectors (LORs) is Δx, then the maximum image resolution that can be achieved is equal to 2Δx. Therefore, a reconstructed image of a point source will always appear blurred due to the finite scanner resolution. The function that quantifies the degree of point-source blurring is called the point-spread function (PSF) of a scanner. Due to the parallax effect, in PET scanners the PSF is not shift-invariant (not spatially uniform): the PSF becomes wider away from the central axis of the FOV.

Coincidence data acquired in 2D mode can be readily reconstructed using 2D FBP. However, in 3D acquisition mode the data are represented by 3D sinograms, where the coincidences are arranged in 2D sets of (r, φ)-projections that correspond to different values of θ and z. Two problems arise in analytic image reconstruction from 3D projections due to the limited spatial extent of PET cameras:

• projections cannot be obtained for the full range of θ;
• projections corresponding to different values of θ are truncated; higher values of θ contain fewer projections.

In the 3D re-projection (3DRP) algorithm [20], these problems are circumvented by first reconstructing a subset of the data using conventional 2D FBP. The reconstructed images are stacked together to form a volume image, which is then re-projected along those projections that are truncated in the original 3D sinogram.
The resulting combination of acquired and synthesized data is reconstructed using 3D FBP.

Another method to reconstruct 3D sinogram data is to re-bin the data into a set of 2D sinograms, and then use 2D reconstruction algorithms (analytic or iterative). The simplest method of rebinning is called single-slice rebinning (SSRB). Consider a coincidence that was recorded between two detectors with axial coordinates z1 and z2. In SSRB, the coincidence is reassigned to the ring plane (direct or cross) that is closest to the axial coordinate (z1 + z2)/2. The parametric LOR coordinates r and φ are unchanged in this operation, z may change slightly, and θ becomes zero [21]. This operation reduces the spatial resolution of the data. Fourier rebinning (FORE) is a frequently used 3D-to-2D rebinning method that is more accurate than SSRB [22]. Prior to rebinning, the coincidence data must be corrected for attenuation, random coincidences, scattered events and detector normalization.

The advantages of analytic reconstruction include its relative ease of implementation and high speed. It also provides the desirable property of linearity, in the sense that changes in the projection data correspond to proportional changes in the images. For this reason, analytic reconstruction is traditionally used to measure the intrinsic scanner resolution. On the other hand, due to its reliance on the line-integral projection model, analytic reconstruction has a very limited ability to take into account various physical effects, such as the statistical nature of the coincidence data collection, positron range, and non-uniform resolution. In addition, due to the incompleteness of the projection data, analytic reconstruction produces images with high levels of noise and streak artifacts. The use of smoothing filters to reduce the noise also reduces the resolution of the image. These issues (noise in particular) promoted the development of iterative image reconstruction methods.

1.4.2 Iterative Reconstruction

Iterative reconstruction algorithms aim to iteratively solve a system of equations that describes the physics of the coincidence data acquisition process. Iterative algorithms do not necessarily assume the line-integral model, and they enable the modeling of statistical noise, non-uniform resolution, positron range, and other physical effects directly in the image reconstruction process. Let the projection data (2D or 3D) be represented by a column vector Y = {y_i}, where i = 1...I is the sinogram bin or LOR index. The measured coincidence data may be treated as a realization of a vector of random Poisson variables Ȳ = {ȳ_i} that describe the expected number of counts in the different bins. Further, let the unknown image of the activity distribution be represented by a real-valued column vector x = {x_j}, j = 1...J. Linear models of data acquisition are used in PET, where the expected number of counts is related to the unknown activity distribution through the equation:

    E[y_i] = \bar{y}_i = \sum_{j=1}^{J} p_{ij} x_j + E[r_i] + E[s_i]    (1.28)

where p_ij are the elements of an I×J matrix P that describes the probability that a decay event that occurs in the voxel j is detected along the LOR i.
In other words, the matrix P quantifies the probabilistic contributions of the radioactivity in different voxels to the true coincidence counts in different LORs (or sinogram bins). P is called the system matrix (SM). The vectors r = {r_i} and s = {s_i} represent the contributions of random and scattered coincidences, respectively.

The system of equations defined by Eq. 1.28 is typically very large. For example, an image matrix with a modest size of 128×128×96 voxels contains ~10^6 elements; the number of sinogram bins may be on the same order of magnitude. The matrix P is ill-conditioned or ill-posed. Due to these two factors, a sensible solution to Eq. 1.28 in general cannot be obtained by matrix inversion techniques. Iterative methods to estimate x are used instead.

Iterative methods to solve linear systems are based on optimizing a cost function. A cost function that is typically used in emission tomography is the likelihood (conditional probability) of measuring the data Y given the activity distribution x. For Poisson-distributed data measured in I detectors, the likelihood is given by the equation:

    p(Y|x) = \prod_{i=1}^{I} p(y_i|x) = \prod_{i=1}^{I} \frac{(\bar{y}_i)^{y_i}}{y_i!}\, e^{-\bar{y}_i}    (1.29)

and the log-likelihood is defined as

    L(Y|x) = \sum_{i=1}^{I} \left( y_i \log(\bar{y}_i) - \bar{y}_i - \log(y_i!) \right)    (1.30)

The problem of image reconstruction can be formulated as the problem of optimizing the cost function given by Eq. 1.30, with the solution obtained in the maximum likelihood (ML) sense:

    \hat{x} = \arg\max_{x} L(Y|x)    (1.31)

One method to obtain the solution is to use the expectation maximization method. This corresponds to the most widely used algorithm of iterative image reconstruction in PET, MLEM, which was first proposed for use in emission tomography by Shepp and Vardi [23]. Assuming that the coincidence data have been pre-corrected for random and scattered events, it can be shown [23, 24] that the x that maximizes the likelihood can be obtained using the following iterative image update equation:

    x_j^{m+1} = \frac{x_j^m}{\sum_{i=1}^{I} p_{ij}} \sum_{i=1}^{I} p_{ij} \frac{y_i}{\sum_{k=1}^{J} p_{ik} x_k^m}    (1.32)

where p_ij are the elements of the SM, and x_j^m represents the m-th iteration of the voxel j. Each successive iteration of Eq. 1.32 increases the likelihood L(Y|x). Thus, the agreement between the estimated image and the true image is expected to increase with each iteration. In vector-matrix notation, Eq. 1.32 can be written as

    x^{m+1} = \frac{x^m}{S}\, P^T \frac{Y}{P x^m}    (1.33)

where T denotes the transpose, the divisions are element-wise, and S is the vector of sensitivity values s_j = \sum_{i=1}^{I} p_{ij} that quantify the probability that a decay event in voxel j is detected by the system. Although in principle the SM can be measured directly by imaging a (moving) point source, in practice it is estimated computationally using knowledge of the underlying physics of the measurement system. Various approximations are typically made when computing the SM, and these approximations may have a substantial effect on the accuracy and quality of the reconstructed images.

The denominator product P x^m represents the forward-projection step, which estimates the expected counts from the current image iteration. The multiplication of the correction ratio Y/P x^m by the transpose of P represents the back-projection step. Note that since the correction ratios and P are always positive, voxel values in the image estimates can never become negative (given that the initial image estimate is positive). Thus, the reconstruction process is consistent with the physical nature of the data.
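A dense-matrix toy version of Eqs. 1.32-1.33 in numpy (an illustration: the system matrix here is a small random stand-in, and the data are assumed pre-corrected, i.e. the r and s terms of Eq. 1.28 are zero):

```python
import numpy as np

rng = np.random.default_rng(1)

I, J = 40, 16                    # 40 LORs, 16 voxels (toy sizes)
P = rng.uniform(size=(I, J))
P *= 0.5 / P.sum(axis=0)         # give every voxel sensitivity 0.5

x_true = rng.uniform(50.0, 500.0, size=J)
y = rng.poisson(P @ x_true)      # Poisson-distributed measured counts

S = P.sum(axis=0)                # sensitivity vector S (Eq. 1.33)
x = np.ones(J)                   # initial estimate: a vector of ones
for _ in range(100):
    expected = P @ x                          # forward projection P x^m
    ratio = y / np.maximum(expected, 1e-12)   # correction ratios
    x *= (P.T @ ratio) / S                    # back-project and update

print(np.round(x / x_true, 2))   # ratios scatter around 1.0
```

Note that the update is multiplicative, so a non-negative starting image stays non-negative, exactly as argued above.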
The first estimate of x can be initialized as a vector of ones. In each iteration, the entire projection dataset is used in the forward-projection and backprojection steps. The algorithm is usually terminated prior to convergence, when the difference between two successive iterations becomes smaller than a pre-defined threshold. The convergence rate depends on the number of image voxels and projection bins. Typically, on the order of 100 iterations are required.

In the ordered-subset expectation maximization (OSEM) algorithm introduced by Hudson and Larkin [25], the projection data are split into several subsets. For example, in 3D sinograms, different subsets may correspond to different values of θ and z. Each OSEM iteration uses data from only one subset, and the subsets are cycled between iterations. The OSEM algorithm has a faster image convergence rate than MLEM, and the acceleration factor is equal to the number of used subsets. However, OSEM is not necessarily convergent, as the obtained solution is strongly biased by the subset used in the last iteration. Methods to deal with the non-convergence issue have been proposed [26].

In order to reduce the noise in the reconstructed image and to improve convergence, regularized versions of the MLEM algorithm have been developed [27, 28]. The log-likelihood function in Eq. 1.30 is appended with a regularization term R(x), also called a penalty function:

    L_{reg}(Y|x) = L(Y|x) - \lambda R(x)    (1.34)

where λ is the tuning parameter that controls the amount of regularization. Regularized iterative reconstruction offers a better trade-off between image resolution and noise compared to unregularized MLEM.

Pre-correction of histogrammed coincidence data for random and scattered events alters the statistical properties of the data and may introduce statistical biases. For example, if a sinogram that estimates the number of random coincidences is subtracted from the prompt coincidence sinogram, the values of some bins may become negative. Negative values must be set to zero prior to the reconstruction; this introduces a positive bias and is detrimental to the image accuracy. Several methods to compensate for the altered data variance have been proposed [29]. In the ordinary-Poisson MLEM (MLEM-OP), scattered and random coincidences are explicitly incorporated into the forward-projection step of the MLEM algorithm. This is achieved through the inclusion of the respective terms in the denominator of the image update equation:

    x_j^{m+1} = \frac{x_j^m}{\sum_{i=1}^{I} p_{ij}} \sum_{i=1}^{I} p_{ij} \frac{y_i}{\sum_{k=1}^{J} p_{ik} x_k^m + r_i + s_i}    (1.35)

where s_i and r_i are the estimated contributions of scattered and random coincidences to LOR i, respectively. The denominator in Eq. 1.35 includes all measured coincidence data (trues + randoms + scatter). OP reconstruction produces unbiased and statistically accurate image estimates. However, it requires access to the raw prompt coincidence data, which may not necessarily be provided by the scanner.

The algorithm given by Eq. 1.35 uses histogrammed coincidence data. However, as discussed in Section 1.2.7, it is sometimes preferred to store coincidence data in the list-mode format that preserves full spatial and temporal resolution. The MLEM-OP algorithm can be modified to reconstruct images directly from the list-mode coincidence data:

    x_j^{m+1} = \frac{x_j^m}{\sum_{n=1}^{N} p_{nj}} \sum_{i=1}^{I} p_{ij} \frac{1}{\sum_{k=1}^{J} p_{ik} x_k^m + r_i + s_i}    (1.36)

where i now runs over all I prompt coincidence events in the list-mode file, and the sensitivity sum \sum_{n=1}^{N} p_{nj} is taken over all N possible LORs of the system.
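The ordinary-Poisson update of Eq. 1.35 differs from plain MLEM only in the denominator, as the following sketch makes explicit. As before, the dense system matrix and the per-LOR expectation arrays are illustrative assumptions.

    import numpy as np

    def mlem_op(P, prompts, randoms, scatter, n_iter=50, eps=1e-12):
        """Ordinary-Poisson MLEM sketch (Eq. 1.35): raw prompts appear in
        the numerator, while the randoms and scatter expectations are
        added to the forward projection in the denominator, so the data
        retain their Poisson statistics."""
        x = np.ones(P.shape[1])
        sens = P.sum(axis=0)
        for _ in range(n_iter):
            denom = P @ x + randoms + scatter   # trues + randoms + scatter
            ratio = prompts / np.maximum(denom, eps)
            x = x * (P.T @ ratio) / np.maximum(sens, eps)
        return x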
In high-resolution systems with hundreds of millions of possible LORs, it is computationally prohibitive to compute this sum directly. Therefore, a random subset of the system's LORs is taken, and the sensitivity values are estimated using only that subset [30].

Besides random and scattered coincidences, modeling of other physical effects that take place during the imaging process is achieved by including these effects in the SM P. Mathematically, this can be represented as a factorization of the SM into multiple components. For example, if the matrix for a particular system includes the effects of gamma attenuation, positron range, and parallax, it can be represented as

    P = P_{geom} P_{att} P_{pos} P_{par}    (1.37)

where P_{geom} is a purely geometric component, and P_{att}, P_{pos}, P_{par} are the components that model the corresponding effects. Iteratively reconstructed images are "automatically" corrected for the image/data degradation factors incorporated in the SM. This is opposed to analytic reconstruction, where individual corrections for the various factors must be applied to the projection data. Note that the attenuation and positron range components depend on the distribution of matter in the FOV, while the geometric and parallax components are constant for a given camera. Therefore, the geometric and detector-related components of the SM can be pre-computed and stored, while the components that depend on the matter distribution must be computed separately for each scan. Often the geometric components of P are computed based on the line-integral approximation. Thus, if no other components are considered, the SM may be equivalent to the projection model used in analytic reconstruction.
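In practice the factored SM of Eq. 1.37 is rarely formed explicitly; its components are applied one after another as operators. The sketch below shows one common arrangement (image-space blur for positron range, then a geometric projector, then a diagonal attenuation term); the function names, the Gaussian stand-in for P_{pos}, and the kernel width are all assumptions made for illustration.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def forward_model(x, geom_project, att_factors, pos_sigma_vox=1.0):
        """Matrix-free application of a factored system matrix (Eq. 1.37).
        geom_project: user-supplied line-integral projector (P_geom);
        att_factors:  per-LOR attenuation factors, i.e. the diagonal of
                      P_att;  an isotropic Gaussian approximates P_pos."""
        x_pos = gaussian_filter(x, sigma=pos_sigma_vox)  # P_pos: image blur
        y_geom = geom_project(x_pos)                     # P_geom: projection
        return att_factors * y_geom                      # P_att: per-LOR scaling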
1.4.3 Quantitative Corrections in Image Reconstruction

Normalization

In a PET scanner, the efficiency of gamma photon detection varies between the detectors and entire detector blocks. There are multiple sources of this variability on both levels. For example, crystal imperfections may strongly affect the efficiency by hindering the propagation of scintillation light. On the block level, there may be variations in the gains of the scintillation light detectors (PMTs and APDs). Finally, detector crystals invariably have different effective surface areas and depths of interaction. In order to maintain quantification and to avoid artefacts, these differences in sensitivity must be taken into account prior to image reconstruction.

The process of correcting the coincidence data for variable detector efficiencies is referred to as detector normalization. The standard approach to normalization is to multiply the coincidence data by normalization coefficients that represent the inverse of the relative detector efficiencies. In the sinogram space, normalization is achieved by multiplying the prompts sinogram by the normalization sinogram that contains the normalization coefficients. The normalization sinogram can either be measured directly, or broken up into different components that are estimated experimentally or numerically.

In the direct normalization scheme, the normalization sinogram is measured by exposing the detectors to a uniform β+ source of simple, well-known geometry. From the source geometry, the expected number of coincidences in each sinogram bin or LOR is computed under the assumption of uniform efficiency. The normalization coefficients are then computed as the ratio between the expected and measured number of counts. Random coincidences and dead-time effects must be avoided during the acquisition of the normalization data. Therefore, normalization sources typically have relatively low activity, which in turn necessitates normalization scans of long duration (up to several hours) in order to obtain data of sufficient statistical quality. The accuracy of direct normalization methods may be compromised by a) scattered events [31], since they have different energies and incidence angles compared to true events, and b) pulse pile-up, which causes event mispositioning within the detector block. The latter aspect imposes a count-rate dependency on the normalization factors.

In the component-based normalization scheme first implemented by Hoffman et al [32], the normalization coefficients are factorized into terms that represent different components of the overall detection efficiency. An example of such a factorization is given by the equation [33]:

    NC_{ij} = \varepsilon_i \varepsilon_j c_i c_j r_{ij} a_{ij}    (1.38)

where

• NC_{ij} is the normalization coefficient for the LOR formed by detectors i and j;
• ε_i, ε_j are the intrinsic crystal efficiencies that depend on the PMT gains and scintillation crystal properties;
• c_i, c_j describe the systematic variation in crystal efficiency within each block detector;
• r_{ij} are the radial geometric factors that account for efficiency variations due to the different radial offsets of different LORs;
• a_{ij} are the axial geometric factors that account for different photon incidence angles in the axial direction.

Component-based normalization alleviates the issues of scatter and pulse pile-up that are encountered in direct normalization.
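Assembling the per-LOR coefficients of Eq. 1.38 from the component tables is a direct element-wise product, as the following sketch shows; the array layout (per-crystal factors indexed by detector number, per-LOR geometric factors) is an assumed convention for this example.

    import numpy as np

    def normalization_coefficients(eps, c, r_lor, a_lor, det_pairs):
        """Component-based normalization (Eq. 1.38):
        NC_ij = eps_i * eps_j * c_i * c_j * r_ij * a_ij.
        eps, c:       per-crystal efficiency and block-profile factors;
        r_lor, a_lor: radial and axial geometric factors, one per LOR;
        det_pairs:    (L, 2) array of the two crystal indices of each LOR."""
        i, j = det_pairs[:, 0], det_pairs[:, 1]
        return eps[i] * eps[j] * c[i] * c[j] * r_lor * a_lor

    # Normalization is applied multiplicatively, e.g.:
    # normalized_sinogram = prompts * nc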
In addition to detector normalization, the reconstructed images must be multiplied by an overall calibration (scale) factor in order to achieve absolute quantification, i.e. in order for the voxel values to represent activity concentration. Calibration accounts for the finite sensitivity of the scanner and the positron branching ratio of the used isotope. The calibration factor is determined by scanning and reconstructing an image of a uniform source of known activity [34]. If all other necessary corrections are in place, the calibration factor is the ratio of the true activity in the source to the (mean) voxel values in the reconstructed image of the source.

Attenuation

Gamma photons are attenuated by matter according to Eq. 1.14. Without correction for attenuation, activity values in the reconstructed images are underestimated. The amount by which annihilation gamma photons are attenuated along a particular LOR is called the attenuation factor (AF) for that LOR (or sinogram bin). The AFs are determined by the equation

    AF_i = e^{-\int_i \mu(x,y,z)\,dl}    (1.39)

where the integral is taken along LOR i, and µ(x, y, z) is the distribution of the linear attenuation coefficient in the space between the detectors. Note that due to the property described by Eq. 1.14, AFs in PET describe the attenuation of pairs of coincidence photons, not of single photons. When coincidence data are represented in the sinogram format, attenuation correction is performed prior to image reconstruction by dividing the counts in each bin by the respective AFs. In list-mode image reconstruction, AFs can be incorporated directly into the SM (the diagonal matrix P_{att} in Eq. 1.37).

The AFs are traditionally obtained by performing blank and transmission scans that utilize an external rotating source of gamma photons. The blank scan is performed without any object inside the scanner. During the blank scan, the gamma source is rotated inside the camera in a spiral, and the number of detected photons is recorded in the LORs passing through the source. The transmission scan is performed in a similar fashion, but with the subject placed inside the scanner (hence the transmission scan is usually performed right before the emission scan). The AFs are computed as the ratios of detected gammas between the blank and transmission scans.

If the external source has a gamma energy different from 511 keV, it is necessary to reconstruct the AFs into a volume image of linear attenuation coefficients, called the µ-map (Fig. 1.12A). The µ-values for the gamma energy of the source are then converted to µ-values for 511 keV photons [35]. From the adjusted µ-map, the AFs for 511 keV photons are computed using the equation

    AF_i = e^{-\sum_j \mu_j a_{ij}}    (1.40)

where µ_j is the µ-value of voxel j, and a_{ij} is the length of intersection between LOR i and voxel j. Noise and errors in µ-maps propagate to the reconstructed images. Therefore, µ-maps are often segmented into distinct tissue classes with well-known attenuation coefficients for 511 keV photons (e.g. soft tissue, bone, air) [36]. Forward-projection of the segmented µ-map provides a less noisy estimate of the AFs. Examples of images reconstructed with and without attenuation correction are shown in Fig. 1.12B.

[Figure 1.12: A. Single transaxial planes of µ-maps of the small animal NEMA phantom shown on the left. The µ-maps were reconstructed using three different methods, indicated in the bottom right corner. B. Images of the small animal NEMA phantom reconstructed with and without attenuation correction.]
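Given a µ-map and the voxel intersection lengths a_{ij}, Eq. 1.40 reduces to a line integral followed by an exponential. A minimal sketch, assuming the chord-length matrix has already been produced by a ray tracer (e.g. Siddon's algorithm, not shown here):

    import numpy as np

    def attenuation_factors(mu_values, chord_lengths):
        """Per-LOR attenuation factors from a mu-map via Eq. 1.40:
        AF_i = exp(-sum_j mu_j * a_ij).
        mu_values:     flattened mu-map at 511 keV (1/mm);
        chord_lengths: (I, J) sparse or dense matrix of intersection
                       lengths a_ij (mm) between LORs and voxels."""
        return np.exp(-(chord_lengths @ mu_values))   # AF_i in (0, 1]

    # Sinogram-space attenuation correction divides by the AFs:
    # corrected = prompts / attenuation_factors(mu_map.ravel(), A)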
Scattered Events

The most straightforward method to reduce the scatter fraction is to use detectors with high energy resolution, and to reject gamma photons that have energy less than 511 keV. However, even with energy discrimination in place, a relatively large number of scattered events may remain in the data (due to the relatively low energy resolution of modern PET detectors) [17]. Different methods have been developed to correct the reconstructed images for scatter.

Some of the most common and accurate methods are based on simulating the scatter process. One such method is the single scatter simulation [37]. Starting from the known µ-map and activity distribution estimates, the algorithm uses the Klein-Nishina equation to estimate the distribution of scattered coincidences across different LORs. The scatter estimates can be either a) subtracted from the prompts sinogram, b) reconstructed and subtracted from the images, or c) used directly in the image reconstruction process, e.g. in the s_i term of Eq. 1.35. The main approximation of the single scatter method is that it only simulates one (first) scattering interaction per gamma photon. The gamma energy in the first interaction is known to be 511 keV, and thus the scattering cross-section can be readily computed. Gamma photons that are Compton-scattered more than once are likely to lose a significant fraction of their energy, and such events can be rejected based on their energy of interaction in the detector crystals. Thus, the single scatter approximation is reasonably accurate when it is combined with energy discrimination techniques.

Other methods of scatter correction include:

• using multiple energy windows to estimate the distribution of scattered coincidences [38];
• subtraction of convolution-based scatter estimates from the sinogram or image data [39];
• fitting a Gaussian function to the scattered tails of coincidence counts in the projection space [40].

The efficacy of scatter correction may be assessed by measuring the background activity concentration in the images, i.e. the activity in the space not occupied by the subject.

Random Coincidences

One of the traditional methods to estimate the fraction of random coincidences in different LORs or sinogram bins is to use a delayed coincidence window [41]. The method works as follows. A time delay is introduced into the coincidence circuitry that is much greater than the length of the prompt coincidence window. During the delayed coincidence window, no true annihilation event can be detected; coincidence data recorded during the delayed window therefore consist entirely of random events. However, the distribution of random events in the projection space must be the same between the delayed window and the prompt window. The measured delayed coincidence rates are thus histogrammed and subtracted from the prompt sinograms, or used in the randoms term r_i in Eq. 1.35. The downside of the method is that the estimated distribution of random events may be quite noisy, and the noise will propagate into the reconstructed images. Another commonly used method that is less susceptible to noise is to estimate the random coincidence rate from the single event rates using Eq. 1.16 [42].
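Eq. 1.16 appears earlier in this document; assuming it takes the standard singles-based form R_ij = 2τ S_i S_j, the estimate is a one-line computation, sketched below with illustrative numbers.

    def randoms_from_singles(singles_i, singles_j, tau):
        """Singles-based randoms estimate, assuming the standard form
        R_ij = 2 * tau * S_i * S_j, where S_i, S_j are the single-event
        rates (counts/s) of the two detectors of a LOR and tau is the
        coincidence window width (s)."""
        return 2.0 * tau * singles_i * singles_j

    # e.g. a 6 ns window with both detectors counting 50 kcps:
    # randoms_from_singles(5e4, 5e4, 6e-9)  ->  30 randoms/s on that LOR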
Other Corrections

Correction for deadtime can be performed by multiplying the counts by deadtime correction factors, which can be estimated from the single event rates. Typically, deadtime correction schemes are provided by the scanner manufacturers, as they reflect the performance of the associated electronics and data processing, in addition to the deadtime due to the signal integration time on the detector level.

Positron range can be incorporated into the SM as a blurring kernel that widens the detector pair response (rows of the SM) [43], or its effects can be reduced using image deconvolution at the expense of increased noise [44]. Similarly, photon non-collinearity, inter-crystal scatter and the parallax effect can be modeled using position-dependent blurring kernels in the projection space and image space [45]. Decay correction is applied as a multiplicative factor computed using the law of radioactive decay. Various techniques of motion correction in PET are reviewed in detail in Chapter 2.

1.5 Tracer Kinetic Modeling

Static PET imaging shows how the tracer distributes in the body, and where the tracer's preferential binding sites are located. Although this information in itself may be of interest, it does not reveal the complete picture. For example, the rates of tracer binding and entrapment in different tissues cannot be obtained from static imaging. Indeed, the rates of binding of radioligands to their target receptors could reveal valuable information regarding the biochemical processes under study, such as the balance between the endogenous and exogenous ligands, or the physiological response to an external stimulus. As opposed to static PET imaging, dynamic imaging combined with tracer KM enables the investigation of these and other aspects of tracer kinetics.

The main measure of interest obtained from the dynamic image frames is the change of activity concentration over time, termed the time-activity curve (TAC). If the frame images are correctly aligned, two types of TACs can be extracted: 1) a region-of-interest (ROI) based TAC, the mean activity concentration measured over time in an ROI defined identically in all frames, and 2) a voxel-based TAC, the change of activity concentration with time in individual voxels.

The rate of tracer accumulation in tissues depends on the rate of tracer supply through the vascular system. Therefore, in addition to the image-derived TACs, one additional measure is required to estimate the binding rates: the concentration of tracer (activity) in blood over time, called the input function. The input function can be measured directly during the scans by drawing blood samples at regular time intervals. Less invasive methods to measure the input function have also been developed, as discussed below.
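Extracting the two kinds of TACs from an aligned dynamic series is straightforward; the sketch below assumes a hypothetical (T, X, Y, Z) array layout for the frames and a boolean mask for the ROI.

    import numpy as np

    def roi_tac(frames, roi_mask):
        """ROI-based TAC: mean activity concentration inside a fixed ROI,
        one value per dynamic frame.
        frames:   (T, X, Y, Z) array of aligned frame images;
        roi_mask: boolean (X, Y, Z) array defining the ROI.
        A voxel-based TAC is simply frames[:, i, j, k]."""
        return np.array([np.mean(f[roi_mask]) for f in frames])

    # tac = roi_tac(dynamic_images, striatum_mask)   # shape (T,)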
Using KM, it is possible to estimate the tracer kinetics based on the TAC and input function measurements. KM is based on the notion of compartments: distinct physiological states that the tracer molecules can assume. Consider, for example, a biologically active ligand that can transfer between the tissues and the blood. The possible physiological states of such a ligand may include (but are not limited to):

1. unbound (free) state in the blood plasma;
2. unbound extravascular state;
3. bound to an endogenous molecule.

The ligand may be in only one state at any given time. The direction and probability of molecular transitions between different states determine the rates of tracer flux between the compartments. The flux is determined by the intrinsic kinetic properties of the tracer and by the biological state of the organism on the microscopic level. In molecular imaging studies, it is often of greater interest to estimate the tracer transfer rates between different compartments in the tissue (e.g. bound versus unbound), rather than the overall tracer accumulation rate in the tissue.

The activity concentration measurements in TACs represent the combined activity from all compartments. This is evident from the consideration of a tissue volume encompassed by a single voxel. Since the resolution of PET is relatively coarse, on the macroscopic scale each voxel is likely to represent a combination of tissue types. More importantly, on the microscopic scale, each voxel contains a variety of microscopic intracellular and extracellular interfaces and structures. Such microscopic structures may include, for example, blood and lymph capillaries, capillary walls, cellular membranes, intracellular space, etc. The voxel's activity concentration value is measured based on the radioactivity that emanates from molecules in different physiological states (compartments). An image-derived TAC can therefore be represented as a sum of unknown TACs that correspond to different compartments in the voxel or the ROI. The goal of KM is to estimate the compartmental TACs and the tracer transfer rates between the compartments.

The modeling aspect in KM comes from choosing the number of compartments and the possible tracer transitions between the compartments. These parameters differ depending on the research objective and the system under study. Once the number of compartments and the transitions between them are chosen, the model is mathematically formulated as a set of ordinary differential equations, with one differential equation per compartment. The following assumptions are made in all models:

• At any given time, the tracer concentration is uniform within the analyzed volume, i.e. there are no spatial concentration gradients within compartments;
• Model parameters are assumed to be constant over the study (scan) duration;
• The input function is the same for all tissues in the body.

Models in KM are typically visualized using block diagrams. Blocks correspond to different compartments, and the directions of tracer flux are indicated by arrows. The fundamentals of KM can be illustrated using a one-compartmental model with the block diagram shown in Fig. 1.13A. Input from the blood is illustrated on the left, and the block on the right represents the compartment of interest, e.g. the tissue. Blood is not typically considered as a compartment, since the tracer concentration in the blood (the input function) is measured experimentally. To write the differential equation of the model, let B(t) represent the input function (tracer concentration in the blood), and F(t) represent the time-dependent tracer concentration in the tissue compartment. A standard assumption made in KM is that the rate of tracer efflux from a compartment is proportional to the tracer concentration in that compartment. Under this assumption, the rates of tracer efflux from the blood and from the tissue are equal to K_1 B(t) and k_2 F(t), respectively, where K_1 and k_2 are the unknown rate constants. The rate of change of the tracer concentration in the tissue compartment, dF(t)/dt, can then be expressed as

    dF(t)/dt = K_1 B(t) - k_2 F(t)    (1.41)

where the law of mass conservation was used, and the positive flux direction was taken to be from the blood to the tissue. The rate constants K_1 and k_2 reflect the kinetic properties of the tracer molecule, and often the goal of KM is to estimate these parameters (a more detailed interpretation of the rate constants depends on the particular system under study). The differential equations that constitute the model are solved to obtain the values of the unknowns, i.e. the rate constants and the compartmental TACs. The initial conditions are typically set to zero, since there is no radioactivity in the organism prior to the tracer delivery. Methods to estimate the parameters of the one-compartmental and more complex models are discussed below.

1.5.1 One-compartmental Model

The one-compartmental reversible model described by Eq. 1.41 is used when the tissue can be represented by a single compartment. For example, it is used in blood perfusion imaging, which aims to assess the amount of blood delivered to tissues by capillaries per unit time. Perfusion is measured in units of volume (of blood) delivered per unit of time per unit of volume (or weight) of tissue. A suitable tracer for perfusion imaging must be easily diffusible and must not bind specifically in the tissue. An example of such a tracer that has been extensively used in cardiac perfusion imaging is 15O-water.
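Eq. 1.41 can be integrated numerically for any sampled input function; the following sketch uses simple forward-Euler steps and an entirely illustrative toy input function and rate constants.

    import numpy as np

    def simulate_1c(t, B, K1, k2):
        """Integrate the one-compartment model of Eq. 1.41,
        dF/dt = K1*B(t) - k2*F(t), with F(0) = 0 (no activity in the
        organism before tracer delivery), using forward-Euler steps."""
        F = np.zeros_like(t)
        for n in range(1, len(t)):
            dt = t[n] - t[n - 1]
            F[n] = F[n - 1] + dt * (K1 * B[n - 1] - k2 * F[n - 1])
        return F

    t = np.linspace(0, 60, 601)                # minutes (toy grid)
    B = np.exp(-t / 5.0) * (1 - np.exp(-t))    # toy input function
    F = simulate_1c(t, B, K1=0.5, k2=0.1)      # simulated tissue TAC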
With an easily diffusible tracer, the constant K_1 is strongly governed by perfusion. The constants K_1 and k_2 can be obtained from Eq. 1.41, which can be solved to gain an explicit expression for F(t):

    F(t) = K_1 B(t) * e^{-k_2 t}    (1.42)

where * denotes the convolution operation. It is assumed that the TACs measured from the images reflect the activity concentration in the tissue compartment; the contribution of activity in the blood is often considered to be negligible. Therefore, F(t) corresponds to the image-derived TAC, and B(t) represents the arterial input function. The rate constants K_1 and k_2 can be estimated by fitting Eq. 1.42 to the data. If the images and the input function are not decay-corrected, the estimated constants incorporate the rates of radioactive decay. With decay correction applied, the constants solely characterize the combined blood flow and blood-tissue permeability.

A biological parameter of interest that is frequently sought after is the distribution volume (DV), which relates the equilibrium tracer concentrations in the blood and in the tissue. Consider a system where the tracer distribution between the blood and the tissue has equilibrated, and the values of F and B are constant. From Eq. 1.41 it follows that in this case K_1 B(t) = k_2 F(t), and the DV can be computed using the equation

    DV = F/B = K_1/k_2    (1.43)

Another interpretation of the DV is that it quantifies the volume that a unit amount of tracer occupies in the blood, relative to the volume in the tissue. Often it is easier to estimate the DV than the individual rate constants. For example, if the state of equilibrium can be achieved experimentally via continuous tracer infusion, the DV can be computed as the ratio of F to B.
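Fitting Eq. 1.42 amounts to a nonlinear least-squares problem over K_1 and k_2, with the convolution evaluated on a discrete time grid. A minimal sketch with synthetic, purely illustrative data:

    import numpy as np
    from scipy.optimize import curve_fit

    def one_comp_model(t, K1, k2, B, dt):
        """Discrete version of Eq. 1.42, F(t) = K1 * B(t) (*) exp(-k2 t),
        on a uniform grid with spacing dt (the dt factor approximates
        the convolution integral)."""
        kernel = np.exp(-k2 * t)
        return K1 * np.convolve(B, kernel)[: len(t)] * dt

    t = np.linspace(0, 60, 601); dt = t[1] - t[0]
    B = np.exp(-t / 5.0) * (1 - np.exp(-t))        # toy input function
    tac = one_comp_model(t, 0.5, 0.1, B, dt)       # synthetic "measured" TAC
    popt, _ = curve_fit(lambda tt, K1, k2: one_comp_model(tt, K1, k2, B, dt),
                        t, tac, p0=(0.3, 0.05))
    K1_hat, k2_hat = popt                           # recovers ~0.5 and ~0.1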
In perfusion imaging, two confounding factors of the input function measurement must be considered. These factors are caused by the fact that the input function is typically sampled from a different location than the target region. First, there may be a time delay between the true input function in the target region and the measured input function. Second, the measured input function may be more dispersed than the true input function. Both of these phenomena are introduced due to the non-equal distances (and blood flow rates) between the sites of injection, arterial blood sampling, and compartmental analysis. Several methods have been proposed [46, 47] to account for these effects prior to estimating the kinetic parameters using Eq. 1.42. The difference between the true and measured input functions becomes relatively small after only a few minutes. Therefore, in receptor-ligand studies that require longer scans these effects are typically ignored.

[Figure 1.13: A. Block diagram of a one-compartmental model (blood is not considered as a compartment). Tracer is delivered from the site of injection by the blood flow (left block). From the blood, tracer enters the tissue compartment by diffusion or active transport (right block). Arrows indicate the possible directions of tracer flux. B. Block diagram of a three-compartmental model, with free, specific tracer binding, and non-specific tracer binding compartments.]

Receptor-ligand studies are subject to another type of limitation when it comes to measuring the input function. In compartmental models, the input function B(t) is supposed to represent the concentration of "native" (unaltered) tracer molecules in the blood. However, it is possible for the native tracer molecules to become metabolized in the periphery and re-introduced into the blood stream. The input function will be over-estimated if the labeled metabolites are included in the blood samples. Typically, a metabolite fraction is estimated from the drawn blood samples and the input function is adjusted accordingly. Here one also has to rely on the assumption that the tracer metabolites do not enter the tissue compartment. In light of these potential complications, investigation of the peripheral tracer metabolism is an important step in the development and characterization of new tracers.

1.5.2 Multi-compartmental Models

In PET imaging studies that probe tracer binding to available receptors/transporters (which under most conditions can be assumed to be proportional to the density of available receptors/transporters), one-compartmental models may not accurately describe the tracer kinetics. Once a radioligand enters the tissue, it can be in at least two states: free, and bound to the target receptors. Therefore, at least a two-compartmental model is, in principle, necessary to adequately describe the system. Three-compartmental models are sometimes employed to further discriminate between the various states of the tracer.

The block diagram of a three-compartmental model with reversible binding is shown in Fig. 1.13B. Tracer molecules delivered by the blood plasma (input function B) enter the free unbound state (compartment F) when they first cross into the tissue. From the free state, the tracer may become reversibly bound to the target receptors. This is referred to as specific binding (compartment S). The free tracer molecules may also become non-specifically bound (compartment N) to binding sites that do not contain the target receptors. There is a competition for the binding sites between the tracer molecules and the endogenous ligand in the specific compartment, but not in the non-specific compartment. For example, 11C-RAC (a dopamine D2 receptor antagonist) competes for the binding sites with endogenous extra-cellular dopamine. Thus, the amount of dopamine affects the concentration of 11C-RAC in the specific binding compartment only.

The considered three-compartmental model is described by the following system of differential equations:

    dF/dt = K_1 B - (k_2 + k_3) F + k_4 S - k_5 F + k_6 N    (1.44)
    dS/dt = k_3 F - k_4 S    (1.45)
    dN/dt = k_5 F - k_6 N    (1.46)

where the k_j's are the rate constants (as shown in Fig. 1.13B), B is the arterial input function, and F, S and N are the activity (tracer) concentrations in the free, specific and non-specific compartments, respectively. The time dependencies of B, F, S and N are omitted for notational simplicity. The decay rates are not included in the equations, as it is assumed that the activity concentration measurements are decay-corrected. The rate constants can be estimated by iteratively solving the system, using the input function and the image-derived TAC as the inputs. The TAC is assumed to represent the sum of activity in the F, S and N compartments.

A two-compartmental model is often used when the exchange rate between the free and non-specific compartments is much faster than that of the specific compartment. Essentially, the compartments F and N are assumed to be in constant equilibrium, and constitute a single pool for receptor binding.
For the two-compartmental model, the system of differential equations simplifies to:

    dF/dt = K_1 B - (k_2 + k_3) F + k_4 S    (1.47)
    dS/dt = k_3 F - k_4 S    (1.48)

Often the aim of modeling is to estimate the constant k_3, which quantifies the radioligand binding rate, and the constant k_4, which quantifies the rate of dissociation. The diagnostic value of the constants is that they reflect the density of the target receptors/transporters, which may become affected by disease. A metric of radioligand binding that combines both constants is the non-displaceable binding potential BP_{ND}, defined as

    BP_{ND} = k_3/k_4    (1.49)

The BP_{ND} quantifies the propensity of the tracer molecules to bind at the target sites, and is often used as the primary parameter that relates the imaging outcomes to the underlying physiology. As follows from Eq. 1.49, it equals the ratio of specifically bound radioligand to (free + non-specific) non-displaceable radioligand in tissue. A change in BP_{ND} may reflect a change in the receptor density or in the amount of the endogenous ligand. Thus, the BP_{ND} is often computed for different brain regions in different subject groups, and compared before and after an intervention. There are several definitions of binding potential that have different meanings [48]. In the adopted nomenclature, the binding potential BP without subscript refers to the "true" in vitro measurement of the ratio

    BP = B_{max}/K_D    (1.50)

where B_{max} is the total concentration of receptors in a sample of tissue, and K_D is the radioligand dissociation constant at equilibrium.
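The system of Eqs. 1.47-1.48 can be simulated directly with a standard ODE integrator, which is also useful for generating synthetic TACs when testing estimation methods. The sketch below uses illustrative rate constants and a toy input function.

    import numpy as np
    from scipy.integrate import odeint

    def two_comp_rhs(y, t, K1, k2, k3, k4, t_grid, B):
        """Right-hand side of Eqs. 1.47-1.48 for a reversibly binding
        radioligand; B(t) is interpolated from a sampled input function."""
        F, S = y
        Bt = np.interp(t, t_grid, B)
        dF = K1 * Bt - (k2 + k3) * F + k4 * S
        dS = k3 * F - k4 * S
        return [dF, dS]

    t = np.linspace(0, 90, 901)                 # minutes (toy values)
    B = np.exp(-t / 5.0) * (1 - np.exp(-t))     # toy input function
    K1, k2, k3, k4 = 0.5, 0.1, 0.15, 0.05
    FS = odeint(two_comp_rhs, [0.0, 0.0], t, args=(K1, k2, k3, k4, t, B))
    tac = FS.sum(axis=1)                        # measured TAC = F + S
    bp_nd = k3 / k4                             # Eq. 1.49: here BP_ND = 3.0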
1.5.3 Reference Tissue Methods

Compartmental models rely on the knowledge of the input function. In practice, the input function measurement by arterial blood sampling may be confounded by several factors. First, blood sampling requires the availability of medical staff and equipment to run the blood assays. Second, invasive procedures reduce patient comfort and are difficult to perform with uncooperative subjects (or awake animals). Third, some degree of discrepancy between the measured and true input functions is always expected, as discussed above. Thus, there is interest in the development of methods that avoid blood sampling.

Many of the currently used methods to estimate the rate constants are based on the reference tissue model (RTM). The RTM is built on the assumption that the blood flow and the distribution volume of the non-specifically bound compartment are similar between the receptor-rich target region and a reference region, where the concentration of receptors is assumed to be marginal. Under this assumption, a simple one-compartment model should adequately describe the tracer kinetics in the reference region. When written for the reference region, Eq. 1.41 becomes

    dF_R/dt = K_1^R B - k_2^R F_R    (1.51)

where F_R is the activity (tracer) concentration in the free (tissue) compartment of the reference region, B is the input function, assumed to be the same in the reference and target regions, and K_1^R, k_2^R are the reference region's rate constants. From this equation, the input function can be expressed as

    B = \frac{1}{K_1^R} \left( \frac{dF_R}{dt} + k_2^R F_R \right)    (1.52)

and substituted into the differential equations of the target region's compartmental model. This approach was proposed by Cunningham et al [49]. For example, consider the two-compartmental model given by Eqs. 1.47 and 1.48. Substitution of B from Eq. 1.52 yields

    dF/dt = \frac{K_1}{K_1^R} \left( \frac{dF_R}{dt} + k_2^R F_R \right) - (k_2 + k_3) F + k_4 S    (1.53)
    dS/dt = k_3 F - k_4 S    (1.54)

A constraint is placed on the system that the distribution volumes are equal in the reference and target regions, i.e. K_1^R / k_2^R = K_1 / k_2. After directly computing the values of dF_R/dt from the reference region's TAC, the system can be solved iteratively to obtain the values of K_1/K_1^R, k_2, k_3 and k_4. The value of BP_{ND} is computed as BP_{ND} = k_3/k_4. The described approach has been used for 11C-RAC imaging in rats [50] and humans [51].

When choosing the reference region for RTM-based analysis, care must be taken not to violate the underlying assumptions. Equation 1.52 is not applicable to the reference region if the receptor concentration levels in the region are non-negligible (i.e. a degree of specific binding is present). Although the density of receptors may be small compared to the target region, it may nevertheless introduce bias in the estimates of the rate constants. Modifications to the RTM have been proposed [49] that take into account specific tracer binding in the reference region. If the rate constants k_3 and k_4 are large compared to k_2, the non-specific and specific compartments are combined into a single compartment, and the resulting model is called the simplified reference tissue model [51, 52]. The simplified RTM estimates three parameters: R_1, k_2 and BP_{ND}, where R_1 = K_1/K_1^R.

1.5.4 Logan Method of Parameter Estimation

With new tracers that have not undergone a thorough characterization, it may be hard to choose the appropriate number of compartments for modeling. The acquired TACs are typically too noisy to determine the appropriate model from the data. For example, with radioligands labeled with 11C, the telltale characteristics of the TACs indicative of reversible or non-reversible binding may only manifest themselves after several isotope half-lives, when the level of statistical noise becomes substantial. Logan et al. [53] proposed a method that does not require a specific compartmental model to estimate the distribution volume and other kinetic parameters, applicable to tracers that undergo reversible binding.

Without loss of generality, the Logan method can be derived using the example of a two-compartmental model governed by differential equations 1.47 and 1.48. The equations can be written in vector-matrix notation:

    d\vec{A}(t)/dt = K \vec{A}(t) + \vec{Q} B(t)    (1.55)

where \vec{A}(t) = [F(t), S(t)]^T is the column-vector of compartmental TACs, \vec{Q} = [K_1, 0]^T, and K is the matrix of inter-compartment rate constants:

    K = \begin{bmatrix} -(k_2 + k_3) & k_4 \\ k_3 & -k_4 \end{bmatrix}    (1.56)

Multiplying Eq. 1.55 by the inverse of K and integrating from 0 to T yields

    \int_0^T \vec{A}(t)\,dt = -K^{-1} \vec{Q} \int_0^T B(t)\,dt + K^{-1} \vec{A}(T)    (1.57)

This is a vector equation where the rows correspond to different compartments. The TAC measured from the images represents the sum of the activities from all compartments, denoted as M(t). Using a column-vector of all ones, \vec{\nu} = [1, 1]^T, M(t) can be expressed as the product M(t) = \vec{\nu}^T \vec{A}. To add the rows of Eq. 1.57, the left and right sides of the equation can be multiplied by \vec{\nu}^T:

    \int_0^T M(t)\,dt = -\vec{\nu}^T K^{-1} \vec{Q} \int_0^T B(t)\,dt + \vec{\nu}^T K^{-1} \vec{A}(T)    (1.58)

This equation can be expressed as a linear relationship:

    \frac{\int_0^T M(t)\,dt}{M(T)} = a \frac{\int_0^T B(t)\,dt}{M(T)} + b    (1.59)

where a = -\vec{\nu}^T K^{-1} \vec{Q} is the slope coefficient, and b is an offset that is irrelevant for further consideration. The coefficient a is equal to the DV. For a two-compartmental model, the explicit expression for the DV in terms of the rate constants is

    a = DV = \frac{K_1}{k_2} \left( 1 + \frac{k_3}{k_4} \right) = \frac{K_1}{k_2} (1 + BP_{ND})    (1.60)

Although a two-compartmental example was used to derive Eq. 1.59, it can be used in a compartment-independent manner. The terms in Eq. 1.59 represent measurements that can be obtained from the image-derived TAC and the input function. The ratio on the left side is plotted against the ratio on the right side, and a linear region in the plot is identified. The slope of the graph measured in the linear region equals the DV of the tracer. In Logan-based analysis that is based on the input function, the DV takes the role of the principal biology-related metric.
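A minimal implementation of the Logan plot of Eq. 1.59 follows; the choice of the linear-region start time t* and the sampled curves are assumptions of this sketch (in practice t* is chosen by inspecting the plot).

    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    def logan_dv(t, M, B, t_star=30.0):
        """Logan graphical estimate of the DV (Eq. 1.59): fit a line to
        int_0^T M dt / M(T) versus int_0^T B dt / M(T) over the late,
        approximately linear portion of the plot (T > t_star).
        t, M, B: time grid, measured TAC, and input function samples."""
        late = t > t_star
        y = cumulative_trapezoid(M, t, initial=0.0)[late] / M[late]
        x = cumulative_trapezoid(B, t, initial=0.0)[late] / M[late]
        dv, intercept = np.polyfit(x, y, 1)   # slope of the linear region
        return dv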
Similarly to compartmental models, it is possible to use the reference region model in Logan-based analysis. In fact, the inclusion of the reference region allows estimation of the value of BP_{ND}, under certain assumptions (discussed below). Substitution of Eq. 1.52 into Eq. 1.59 yields

    \frac{\int_0^T M(t)\,dt}{M(T)} = \frac{DV}{DV_R} \frac{\int_0^T F_R(t)\,dt + F_R(T)/k_2^R}{M(T)} + b    (1.61)

where DV_R = K_1^R / k_2^R is the distribution volume of the reference region. The distribution volume ratio (DVR) is the ratio of the distribution volumes of the target and reference regions:

    DVR = \frac{DV}{DV_R} = \frac{(K_1/k_2)(1 + k_3/k_4)}{K_1^R / k_2^R}    (1.62)

An assumption is made that the ratios of the constants K_1 and k_2 are equal in the target and reference regions. The expression for the DVR then simplifies to

    DVR = 1 + k_3/k_4 = 1 + BP_{ND}    (1.63)

Therefore, by measuring the slope in Eq. 1.61 one can obtain estimates of the DVR and BP_{ND}. Note that in order to compute the slope directly, a prior estimate of k_2^R is required. However, if Eq. 1.61 is written in the form

    \frac{\int_0^T M(t)\,dt}{M(T)} = DVR \frac{\int_0^T F_R(t)\,dt}{M(T)} + \frac{DV}{K_1^R} \frac{F_R(T)}{M(T)} + b    (1.64)

and the term F_R(T)/M(T) is treated as a separate variable, the DVR and the coefficient DV/K_1^R can be estimated using multivariate regression. Due to the simplicity of the RTM-based Logan analysis, it is widely adopted in brain PET imaging studies.
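The multivariate regression of Eq. 1.64 is a two-regressor linear least-squares problem, requiring no prior estimate of the reference-region k_2. A sketch, with the late-time cutoff t* again an assumed analysis choice:

    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    def logan_rtm_dvr(t, M, FR, t_star=30.0):
        """Reference-tissue Logan analysis via Eq. 1.64: regress
        int_0^T M dt / M(T) on [int_0^T F_R dt / M(T), F_R(T)/M(T), 1].
        The first coefficient is the DVR, and BP_ND = DVR - 1 (Eq. 1.63)."""
        late = t > t_star
        yv = cumulative_trapezoid(M, t, initial=0.0)[late] / M[late]
        x1 = cumulative_trapezoid(FR, t, initial=0.0)[late] / M[late]
        x2 = FR[late] / M[late]
        X = np.column_stack([x1, x2, np.ones_like(x1)])
        coeffs, *_ = np.linalg.lstsq(X, yv, rcond=None)
        dvr = coeffs[0]
        return dvr, dvr - 1.0    # DVR, BP_ND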
1.5.5 Parametric Images

Compartmental modeling and Logan methods can be applied on the regional or on the voxel level (Fig. 1.14A). In the former case, an ROI is defined in the dynamic images, and a single regional TAC is computed: the mean activity concentration in the region versus time. In the latter case, the TACs of individual voxels are used to estimate the rate constants for every voxel in the image. The resulting images (maps) of kinetic parameters (DV, k_2, BP_{ND}) are called "parametric" images (Fig. 1.14B). The TACs of individual voxels are noisier than ROI TACs, since the latter are obtained by averaging the values of multiple voxels. This may lead to excessive noise in the kinetic parameters on the voxel level, especially if the KMs are over-fit to the noise in the TACs. To reduce noise, the dynamic images may be smoothed temporally or spatially prior to the parametric processing, or stronger regularization may be applied when fitting the model to the data (Fig. 1.15). Noise can also be mitigated by computing the mean values of kinetic parameters in ROIs defined over the parametric images. The advantage that parametric images have over ROI-based KM is that they reveal the spatial distribution of the biological parameters of interest.

[Figure 1.14: A. TACs of 11C-DTBZ obtained from a single voxel and from an ROI (size 7×7×7 voxels) defined around the same voxel. The graphs on the left were obtained from the target region (striatum), and the graphs on the right were obtained from the reference region (occipital cortex). B. Examples of parametric BP_{ND} and k_2 images of a Parkinson's disease subject computed using an RTM (occipital cortex).]

[Figure 1.15: Examples of activity and parametric BP_{ND} images of 11C-DTBZ; a – image averaged over 5 frames with a combined duration of 30 minutes; b – parametric image showing speckle noise; c – parametric image computed using a greater degree of regularization when fitting the TACs; d – parametric image computed from temporally-smoothed TACs.]

1.6 Contributions of This Thesis

This thesis addresses two aspects of PET imaging: correction of the acquired images for deformable motion, and image analysis in the clinical context.

The first part of the thesis (Chapters 2, 3, 4) focuses on the development and validation of novel PET image reconstruction and motion correction techniques that can be used to perform quantitative imaging of unanesthetized, unrestrained rodents. Brain PET imaging of small animals has proven to be invaluable in medical research. Pre-clinical imaging can enhance our understanding of various physiological processes in-vivo, and frequently serves as an important step before human studies. For example, small animal models are routinely used in studies of Parkinson's and Alzheimer's diseases [54, 55]. The traditional use of anesthesia in preclinical neuroimaging may introduce bias in in-vivo studies of the normal and disease-affected brain physiology [56-61]. On the other hand, the use of full or partial restraints may introduce stress-induced changes in behavior, and may limit the types of studies that can be conducted. Imaging of completely unrestrained rodents could alleviate these issues; however, there is a lack of image reconstruction methods and phantoms capable of handling complex (deformable, non-cyclic) rodent motion, which this work attempts to address.

Chapter 2 contains a short review of motion tracking and motion correction methods in PET. The connection between PET image reconstruction methods and motion correction techniques is discussed. The effect of anesthesia on brain physiology is reviewed, along with previous approaches to motion correction in awake rodent imaging. The aims of the study are formulated in the context of previous work.

In Chapter 3, a novel approach to iterative image reconstruction with correction for deformable motion is proposed, wherein unorganized point clouds are used to model the imaged objects in the image space, and motion is modeled explicitly by using time-dependent point coordinates. The image function is represented using constant basis functions with finite support determined by the boundaries of the Voronoi cells in the point cloud. The quantitative accuracy and stability of the proposed approach are tested by reconstructing noise-free and noisy projection data from digital and physical phantoms. The point-cloud based MLEM and one-pass list-mode OSEM algorithms are validated. The results demonstrate that images reconstructed using the proposed method are quantitatively stable, with noise and convergence properties comparable to image reconstruction based on the use of rectangular and radially-symmetric basis functions.

In Chapter 4, a novel method is developed to construct a digital phantom of a freely-moving mouse. The pattern and kinematic parameters of motion of a live mouse confined to a small chamber are recorded using depth-sensing cameras. The mouse phantom is constructed in a reference configuration as a volumetric point cloud, and motion is simulated using an animation rig that includes skeletal and harmonic coordinate-based deformation modifiers.
The observed motion is approximately reproduced in the phantom, and the differences between the observed and simulated motion are analyzed. To generate simulated coincidence data, the phantom is voxelized and used in a Monte-Carlo gamma emission simulation. The phantom and the motion correction method proposed in Chapter 3 are validated by reconstructing motion-corrected images from the simulated list-mode data. Potential future applications of the phantom and the image reconstruction method are discussed.

In the second part of the thesis (Chapters 5, 6, 7, 8), novel approaches to the analysis of high-resolution brain PET images are explored. PET images related to neurodegeneration are most often quantified using KM and mean voxel values computed within an ROI. However, KM may not always be feasible, as it requires prolonged scanning and knowledge of the input function. Additionally, the mean operator may not be able to capture the disease-related spatial information in the images. In contrast, this work considers previously unexplored KM-independent shape- and texture-based image metrics, computed from high-resolution PET images. These metrics are investigated in terms of their ability to convey useful information on the state of neurological disease. The analysis is based on images derived from an ongoing Parkinson's disease (PD) study, with co-registered MRI and PET images, the latter obtained with dopaminergic tracers which are predominantly concentrated in the striatum.

Chapter 5 provides a brief review of the tracers and image analysis methods previously employed in brain PET studies, and considers their limitations. A list of promising shape- and texture-based image metrics that have not been thoroughly explored in brain PET is compiled.

In Chapter 6, metrics that characterize the geometrical shape of the functionally active (spared by the disease) regions are investigated. The study is performed using data from PD subjects imaged with 11C-dihydrotetrabenazine and 11C-raclopride. A novel approach is employed to generate a variety of regions from the combined PET data and MRI segmentations. Univariate and bivariate regression analysis is performed between the clinical measures of the disease and the shape metrics computed from the PET-MRI regions. The correlation coefficients are compared between different metrics, brain structures, and tracers.

Chapter 7 focuses on metrics that characterize image texture. Regression and discrimination analysis is performed between several texture metrics and clinical disease severity. Various parameters of the texture characterization are explored, such as the distance and direction along which the metrics are computed, and the method of ROI definition. The effect of these parameters on the correlation and discrimination coefficients is examined.

In Chapter 8, the behavior of the texture metrics with disease progression is analyzed. A spatio-temporal model of the dopaminergic function loss in PD subjects is established. The model is used to predict the metric dependence on disease progression with zero natural variability between subjects, an extended range of disease severities, and controlled image noise. Differences between the predicted and observed metric values are examined, and the influence of possible confounding factors is considered.
The problem of metric selection in future PD imaging studies is discussed.

Chapter 2

PET Image Reconstruction with Motion Correction

2.1 Overview of Motion Tracking and Compensation in PET

Motion during the scan is encountered in many PET imaging scenarios. In brain PET imaging [62], head motion encountered during the scan can be on the order of a few centimeters [63] and thus can introduce a substantial level of blurring in the reconstructed images. In thoracic PET [64], motion is primarily introduced by the respiratory and cardiac cycles. During normal respiration, the amplitude of the diaphragm movement is typically on the order of 15-20 mm [65]. Deep inspiration may result in 7-13 cm of diaphragm movement [66]. Motion of the diaphragm induces various extents of translational and rotational movements in the thoracic and abdominal organs [67]: superior regions of the lungs move less than the inferior regions [68]. In cardiac PET imaging [69], two components of the heart motion must be considered. The first component is the nearly-rigid motion caused by respiration [70]. The second component is the cardiac deformation and contraction associated with the pumping action, which includes shearing and radial thickening. The mean displacement of a heart ventricle wall was measured [71] to be 11.2 mm at the base, 6.9 mm at the midpoint and 2.6 mm at the apex. The organs adjacent to the heart are also impacted by the cardiac motion. For example, motion of up to several millimeters was observed in lung tumors located close to the heart [72].

Respiratory and cardiac motions degrade the contrast of regions with preferential tracer uptake, and additionally these regions may appear displaced from their actual location in the body [73]. In oncological PET, these effects may significantly impact the accuracy of cancer diagnosis and staging [74]. Lesions may be mislocalized and their size may be over-estimated due to the motion blur. For example, the mean displacement of lung tumors due to respiration was measured to be ~0.9 cm in the inferosuperior, anteroposterior and mediolateral directions [75]. The loss of contrast may also result in increased false negative rates of lesion detection.

The problem of motion has become especially prominent with the appearance of high-resolution tomographs. Early PET scanners had a spatial resolution on the order of 1 cm FWHM; thus, motion on a similar scale did not significantly degrade the image quality. The advancement of PET technology and the reduction of detector size over the past decades resulted in modern PET scanners (such as the High-Resolution Research Tomograph, HRRT [76]) having a resolution on the order of 2-4 mm. Modern state-of-the-art small animal scanners achieve sub-millimeter resolution. With such resolutions, motions that could previously be ignored become a hindrance to realizing the full diagnostic potential of high-resolution PET imaging.

PET scans can last for tens of minutes, and it is often impossible to completely eliminate body motion during this extended time period without the use of anesthesia or hard restraints, which significantly reduce patient comfort and increase stress. Anesthesia is not widely used in human imaging since it is fairly invasive and may alter the tracer uptake (as elaborated in Section 2.4). The internal organ motion (cardiac, respiratory) cannot be eliminated in principle.
Therefore, rather than trying to eliminate motion, a more practical approach is to remove the effect of motion from the reconstructed images, i.e. to take motion into account before or after image reconstruction.

A multitude of motion correction methods have been developed over the past decades, and no single gold standard for motion correction exists. The implementation of motion correction depends on the motion type, the format of the coincidence data provided by the scanner, and the availability of external motion tracking hardware. In terms of the motion type, all motion correction methods can be divided into two categories: rigid and non-rigid (deformable). Rigid motion correction has been primarily used in brain imaging studies [77, 78]. Non-rigid motion correction is primarily used in thoracic imaging. Motion correction methods can be designed to operate either on the reconstructed image data, on the coincidence data, or on a combination of both. Each of the methods has its advantages, limitations, and areas of applicability.

In the image-based methods, coincidence data are acquired over a set of short-duration frames, and data from each frame k = 1...K are reconstructed individually. The number of frames can be defined either prior to the scan, or it can be set dynamically during or after the scan according to the motion tracking data (if available). The reconstructed images of each frame, A_k, will be misaligned due to motion, and may be very noisy if only a small number of counts is acquired per frame. To obtain a motion-corrected image, the images A_k are registered manually or automatically to a common space and added. The registration parameters can be derived from the motion data, or by matching the images to a common template. Many different types of templates can be used. For example, the template could be represented by one of the (high-statistics) frame images from the same scan, or by MRI and CT images. The main strength of this technique is that it can be used for rigid motion correction even when external motion tracking is not available. The disadvantage is that it does not account for motion within the frames.
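The image-based combination step reduces to resampling each frame with its rigid transform and summing. The sketch below assumes the per-frame rotation matrices and translation vectors map reference-space voxel coordinates to each frame's head pose (e.g. obtained from tracking data or image registration); the data layout is hypothetical.

    import numpy as np
    from scipy.ndimage import affine_transform

    def combine_frames(frames, rotations, translations):
        """Image-based rigid motion correction sketch.  For each frame
        image A_k, (R_k, t_k) is assumed to map reference coordinates to
        frame-k coordinates; affine_transform samples A_k at R_k @ o + t_k
        for every output voxel o, i.e. it resamples the frame back onto
        the reference grid.  The aligned frames are then summed."""
        out = np.zeros(frames[0].shape, dtype=float)
        for A_k, R_k, t_k in zip(frames, rotations, translations):
            out += affine_transform(A_k, R_k, offset=t_k, order=1)
        return out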
Dynamic imaging entails the acquisition of multiple frames and tracerkinetics often impose scan times that can be between 60 and 120 minutes long.Some degree of head motion is likely to occur during that time, especially in subjectssu↵ering from movement disorders such as PD, even though some type of headrestrain is often used. For example, non-rigid head restraints that use molds and642.2. Compensation for Head Motion in Brain PET Imagingthermoplastics [79] are relatively common. Non-rigid restraints can achieve several-fold reduction of motion [80], but they do not completely eliminate it: translationsin the range between 5 to 20 mm and rotations from 1 to 4 degrees were observed,depending on the type of the restraint and the duration of the scan. Therefore, evenwhen using head restraints, motion correction is required to obtain accurate images.Motion of the head is treated as being rigid, determined by 3 translation and 3rotation parameters, and the non-rigid movements of the face, jaw and neck areneglected. The goal of motion correction is therefore to estimate the transformationparameters and apply them to the acquired data in inverse.2.2.1 Data-driven Rigid Motion CorrectionIn dynamic PET scans that consist of several frames, a relatively large head dis-placement may occur over the duration of the scan. However, individual frames areacquired over time periods that are relatively short (1–5 min), and the intra-framemotion may be considered to be relatively small compared to the inter-frame mo-tion. When external motion tracking is not available, the inter-frame motion canbe corrected using image registration.In the first step, images of individual frames are reconstructed. For the imagesto be quantitatively accurate, one must account for the di↵erence in head positionsbetween the reconstructed emission frames and the µ-map. It is typically assumedthat the first few frames of the scan contain relatively little movement and maintaina good alignment with the µ-map. Thus, the first frames are reconstructed withattenuation correction and added to produce a reference, to which other framesreconstructed at first without attenuation correction can be registered. After regis-tration, the frames are forward-projected, multiplied by the AFs, and reconstructedanalytically or iteratively with attenuation correction.While relatively straight-forward, the major limitation of this approach is thefact that the intra-frame motion is not taken into account. In principle, frames canbe further sub-divided and registered to reduce the intra-frame motion; however,the subdivision reduces the number of acquired counts per image, and automaticimage registration becomes unreliable due to excessive image noise. In addition, ithas been shown that images reconstructed from a relatively low number of countshave a significant bias [81].652.2. Compensation for Head Motion in Brain PET Imaging2.2.2 External Tracking of Rigid MotionSeveral methods have been employed to track the head motion externally during thescans. One of the most popular methods is based on optically probing the positionof several markers attached to the head. Examples of such tracking devices include3dMD (3dMD), AlignRT (VisionRT Ltd), and Polaris Spectra and Vicra PositionSensors (NDI Ontario). Polaris Vicra is one of the most widely used systems forPET imaging [82]. The system uses near-infrared light ( = 880 nm) to illuminateand track the location of four reflective markers that are attached to a wearablehead cap. 
The system can be polled for the marker positions (returned as an ASCII string) at a rate of 60 samples per second. The accuracy of the Polaris Vicra system in tracking precisely known motions has been reported to have a root mean square (RMS) error on the order of 0.25 mm [83]. Although the accuracy of marker-based motion tracking is relatively high, its weak point may be the attachment of the markers to the head. For example, if an elastic cap is used, it may shift during the scan. Additionally, there may be relative motion between the skin and the skull in the scalp region. Therefore, several marker-free methods of motion tracking have been developed.
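For reference, the 6-degree-of-freedom head pose can be recovered from tracked marker positions with a standard least-squares (Kabsch) fit; this is a generic sketch, not the vendor algorithm used by the Polaris systems:

    import numpy as np

    def rigid_pose_from_markers(ref, cur):
        """Least-squares rigid transform between marker sets (Kabsch).

        ref, cur : (N, 3) arrays of marker positions in the reference
                   and current head poses (N >= 3, non-collinear).
        Returns (R, t) such that cur ~ ref @ R.T + t.
        """
        ref_c, cur_c = ref.mean(axis=0), cur.mean(axis=0)
        H = (ref - ref_c).T @ (cur - cur_c)        # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cur_c - R @ ref_c
        return R, t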
One marker-free approach is based on tracking feature points in images acquired from multiple cameras. This method has been implemented to obtain motion-corrected images from the HRRT [63]. Two cameras were set up to capture concurrent videos of the subject's head (face) from different directions, and the image sequences in the videos were synchronized with the acquisition of coincidence data. Scale-invariant feature points [84] were identified in the images, near the nose, eyes or ears, and the head pose was estimated using stereo triangulation.

Several recently developed techniques perform head tracking using consumer-grade depth-sensing cameras. Depth-sensing cameras measure distances to the objects in their FOV by means of structured light (SL) analysis or TOF measurement. SL cameras consist of a light emitter and a receiver separated by a known distance. The emitter projects a known light pattern (typically of near-infrared wavelength) into the scene, and the distorted reflected light is captured by the receiver. From the analysis of the reflected light pattern, the topology of the imaged object or surface can be reconstructed. One of the first consumer-oriented SL cameras, the Kinect (Microsoft, PrimeSense, operating range > 0.5 m), has been used in PET to track head motion [85]; accuracy of position and orientation measurement on the order of a few millimeters/degrees was achieved. Multiple Kinect sensors working simultaneously have been used for 3D tracking of the head during radiotherapy [86]. Another group used an SL system custom-designed to track the head inside a scanner's gantry [87]; the authors reported accuracy similar to that of the Polaris Vicra, with RMS errors of 0.09 degrees for 20-degree axial rotations and 0.24 mm for 25-mm translations. A second iteration of the Kinect system (Kinect v2, operating range 10–50 cm) that uses the TOF principle was evaluated for head motion tracking in PET/CT by Noonan et al. [88]. The investigators were able to achieve <0.5 mm position accuracy and 0.2-degree RMS orientation accuracy.

In combined brain PET/MRI scanners, MRI data obtained simultaneously with the PET data can be used to measure motion. The two most commonly used techniques are echo planar imaging and cloverleaf navigator sequences [89]. Echo planar imaging enables relatively fast (10–100 ms per slice) acquisition of a sequence of complete volume images during the PET scan. The volumes can be rigidly co-registered to the initial volume in the sequence, and the resulting motion estimates can be used to correct the PET data. Cloverleaf navigators are a special type of MRI navigator that resembles a cloverleaf in k-space [90], which makes them suitable for rigid motion estimation with full 6 degrees of freedom. In [89], the authors obtained magnetic resonance (MR) motion estimates from echo planar imaging and cloverleaf navigator sequences every 3 s and 20 ms, respectively, and reported excellent delineation of specific brain structures in the motion-corrected PET images.

2.2.3 Rigid Motion Correction Using Motion Data

When motion tracking data are available, more accurate methods of motion correction can be implemented. One of the most common methods used with scanners that acquire coincidence data in a sinogram mode is based on motion-triggered acquisition frames [91, 92]. The scanner is interfaced with an external motion tracking system that monitors the head movement in real time during the scan. Whenever the head motion exceeds a pre-determined threshold, the motion tracking system sends a trigger to the scanner, and the scanner begins binning coincidence data into a new sinogram. A typical duration of acquisition frames triggered in this way is on the order of tens of seconds. During image reconstruction, sinograms corresponding to different frames are rebinned in accordance with the motion tracking data, added, and reconstructed. The rebinning procedure may produce sinograms with missing values in bins where the number of counts is expected to be non-zero; therefore, it may be necessary to re-project the data to estimate the counts in the empty bins (similarly to 3DRP described in Section 1.4.1). Re-projection of frames with a low number of counts may increase the noise in the data, and introduces a trade-off between the accuracy of motion correction and the SNR: using a low motion threshold may result in the acquisition of a large number of low-statistics frames and a substantial increase of image noise.

With PET scanners that acquire and store data in list-mode, event-based rigid motion correction can be implemented, wherein the LORs corresponding to individual coincidence events are geometrically re-adjusted to account for the motion recorded at the time of each event. Event-based motion correction is currently the state-of-the-art technique used in brain PET imaging [78, 93, 94]. During a scan, list-mode coincidence data are acquired by the scanner, and the head motion is recorded externally as a function of time by a motion tracking system. The list-mode and motion data are synchronized using a common clock. After the scan, the list-mode and motion data are transferred to a reconstruction computer, where the rest of the processing can be carried out off-line.

From the motion data, the time-dependent rigid transformation $D_k$ is computed that describes the translation and rotation of the head at the time $t_k$, relative to a reference. A coincidence event recorded at the time $t_k$ along the LOR with coordinates $\mathbf{r}_k$ is corrected for motion by applying the inverse transformation $D_k^{-1}$:

$\mathbf{r}_k^{corr} = D_k^{-1}\,\mathbf{r}_k$   (2.1)

where $\mathbf{r}_k^{corr}$ are the (motion-corrected) coordinates of the LOR along which the coincidence event would have been detected had the motion not occurred. By processing the entire list-mode file, one obtains motion-corrected coincidence data that can be binned to produce a motion-corrected sinogram. Images are reconstructed using analytic or iterative (sinogram or list-mode) methods.
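A minimal sketch of Eq. 2.1 applied to a single list-mode event, assuming each LOR is stored as its two detector endpoints and that the tracked pose at the event time is given as a rotation matrix R and translation t (these input conventions are my assumptions):

    import numpy as np

    def correct_event_lor(p1, p2, R, t):
        """Event-based rigid motion correction of one LOR (cf. Eq. 2.1).

        p1, p2 : 3-vectors, the two endpoints of the recorded LOR
        R, t   : head pose at the event time (rotation, translation),
                 mapping reference position -> current position
        Returns the LOR endpoints in the reference (motion-free) frame.
        """
        # The inverse of x' = R x + t is x = R.T (x' - t)
        p1_corr = R.T @ (np.asarray(p1, dtype=float) - t)
        p2_corr = R.T @ (np.asarray(p2, dtype=float) - t)
        return p1_corr, p2_corr

Note that the corrected endpoints generally no longer coincide with a physical detector pair; this is one of the issues discussed next.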
Two issues need to be taken into account when using event-based motion correction:

• Due to motion, events that would have been detected along one of the scanner's available LORs may exit the camera without being detected. The LORs corresponding to the missing events could either a) fall in the gaps between the detectors, b) exceed the maximum allowed ring difference for detection, or c) be directed radially or axially out of the FOV of the scanner.

• After the inverse transformation of the LOR coordinates, the new coordinates may correspond to a non-existing detector pair. Although such events could be rejected, in some scanners the fraction of such events may be considerable (up to 10% in the HRRT), and it is desirable to use all detected events for image reconstruction to increase image quality.

To take these aspects into account in the image reconstruction process, it is not sufficient to only adjust the coordinates of the recorded coincidence events; additional quantification corrections must be applied. These corrections were considered in detail in the work by Rahmim et al. [94]. When performed correctly, event-based motion correction can produce images in which blurring due to motion is virtually eliminated.

Direct experimental comparison of different motion correction methods is not trivial because the methods are used on different scanners and at different imaging centers. The quality of images from different scanners may be predominantly determined by the physical characteristics of a particular scanner, rather than by the motion correction technique. Monte Carlo simulation of coincidence data acquisition from a moving phantom can be employed to compare images produced by different motion correction methods. In such studies, event-based motion correction has been shown to produce the best match with ground-truth images [95, 96].

Although event-based motion correction is expected to provide the most accurate correction for motion in most cases, it may not be particularly beneficial when performed on a scanner with low spatial resolution. Depending on the diagnostic task, simpler motion correction methods may also be adequate. For example, in region-based analysis where the mean activity concentration is measured within a large ROI, image-based motion correction may be sufficient if the blurring due to motion is smaller than the size of the ROI; the mean value from the ROI is not expected to change significantly in this case.

2.3 Deformable Respiratory and Cardiac Motion Correction in Torso Imaging

Deformable motion correction is required in PET imaging of deforming objects and organs, such as the lungs and heart. An important characteristic of respiratory and cardiac motion is that it is cyclic, as opposed to the motion of the head. Therefore, the deformation configurations that the organs undergo can be represented by a series of repetitive motion phases, and PET coincidence data acquired at different repetitions of the same phase can be combined. The motion phases that determine the grouping of coincidence data are termed "gates". For example, in cardiac imaging, the cardiac cycle is typically divided into 50- to 100-millisecond gates, and the total acquisition time may last anywhere between 5 and 60 minutes [77]. In choosing the gates for coincidence data acquisition, care must be taken to account for the fact that respiratory motion has hysteresis: the organs follow different trajectories during inspiration and expiration.
The binning of coincidence data into gates requires time-resolved data acquisition or interfacing of a motion-tracking device with the scanner.

When the sinogram data are not gated, partial motion compensation may be achieved by incorporating a model of the motion into the SM [97]. The expected number of sinogram counts can be expressed as

$E[\mathbf{Y}] = \mathbf{P}\mathbf{A}$   (2.2)

where $\mathbf{Y} = [y_1, y_2, \ldots, y_I]$ is the vector of sinogram counts $y_i$, $\mathbf{A}$ is the vector of counts $a_j$ in the emission count image, and $\mathbf{P}$ is the SM. In scans with deformable motion, the image $\mathbf{A}$ is time-dependent. Consider the separation of the entire scan duration into $t = 1, 2, \ldots, T$ discrete time intervals of equal length. The total number of acquired counts can then be represented as $\mathbf{Y} = \sum_{t=1}^{T}\mathbf{Y}_t$, where $\mathbf{Y}_t$ is the number of counts acquired during the interval $t$. The expected total number of counts can be re-written in terms of the interval counts:

$E[\mathbf{Y}] = E\!\left[\sum_{t=1}^{T}\mathbf{Y}_t\right] = \sum_{t=1}^{T}E[\mathbf{Y}_t] = \sum_{t=1}^{T}\mathbf{P}\mathbf{A}_t$   (2.3)

where $\mathbf{A}_t$ is the count image that corresponds to the interval $t$. Assuming that a transformation $\mathbf{W}_t$ exists such that $\mathbf{A}_t = \mathbf{W}_t\mathbf{A}$, where $\mathbf{A}$ is the sought-after count image in the reference configuration, the expected number of total counts can be represented by the equation

$E[\mathbf{Y}] = \sum_{t=1}^{T}\mathbf{P}\mathbf{W}_t\mathbf{A} = \left(\sum_{t=1}^{T}\mathbf{P}_t\right)\mathbf{A} = \mathbf{P}_w\mathbf{A}$   (2.4)

where $\mathbf{P}_t = \mathbf{P}\mathbf{W}_t$ is the interval-specific SM, and $\mathbf{P}_w = \sum_{t=1}^{T}\mathbf{P}_t$ is the time-weighted SM. The SM $\mathbf{P}_w$ can be used in the expectation maximization algorithm to obtain the image $\mathbf{A}$ from the non-gated coincidence data $\mathbf{Y}$. Therefore, motion can be compensated by using a SM that is a linear combination of SMs computed for the most frequently occurring object configurations; the time weights represent the fractions of time that the object spends in the respective configurations. A method similar to the one described was employed by Reyes et al. [98] for respiratory motion correction: a respiratory motion model was developed that could be fitted to individual patients, and the fitted model was incorporated into the SM of the expectation maximization algorithm. The advantage of this approach is that it can be employed to reconstruct non-gated coincidence data, and the entire dataset is used to reconstruct a single motion-compensated image with relatively high SNR. However, even with fitting of the model to individual subjects, this method is unable to account for variations in the motion pacing between different subjects and scans.
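When the per-configuration warps are available as (sparse) matrices, the time-weighted SM of Eq. 2.4 can be assembled directly; the following is my illustration, with hypothetical inputs P, warps, and occupancy fractions:

    import numpy as np
    from scipy import sparse

    def time_weighted_system_matrix(P, warps, fractions):
        """Build P_w proportional to sum_c w_c * (P @ W_c)  (cf. Eq. 2.4).

        P         : (num_lors, num_voxels) sparse motion-free system matrix
        warps     : list of (num_voxels, num_voxels) sparse warp matrices W_c,
                    one per frequently occurring object configuration
        fractions : fraction of scan time spent in each configuration
        """
        Pw = sparse.csr_matrix(P.shape)
        for W, w in zip(warps, fractions):
            Pw = Pw + w * (P @ W)
        return Pw

With equal-length intervals, weighting each configuration's SM by its occupancy fraction reproduces Eq. 2.4 up to an overall scale factor.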
The majority of deformable motion correction methods developed to date are based on gated coincidence data. The motion that occurs within the gates is assumed to be small and is ignored in most methods. Typically, an external motion-tracking device provides a signal that determines into which gate the coincidence data should be binned. The most common methods of respiratory and cardiac motion tracking are reviewed below.

2.3.1 Deformable Motion Tracking and Gating Techniques

A variety of techniques have been developed and tested to track respiratory and cardiac motion. Due to the relative ubiquity of full-body PET scanners in the clinic, the number of techniques in use is relatively large; however, most operate on similar physical principles (mechanical, optical, electromagnetic). As opposed to rigid motion, where the goal is to obtain the 6 parameters that completely describe position and orientation, respiratory and cardiac motion cycles are most commonly tracked by measuring a single parameter that describes the phase of motion at any given time. The goal of respiratory gating is to obtain the time course of the internal organ motion that occurs due to the lung volume change and the motion of the diaphragm. Tracking the movement of internal organs is typically not feasible without the use of markers that require surgical implantation [97]. Most methods used in practice are non-invasive, and are based on the observation that internal organ motion is strongly correlated with the external motion of the body. Thus, an external device attached to the chest or abdomen often acts as a surrogate for internal motion. Signals from such devices are acquired during the scan, and are synchronized with the coincidence data acquisition in real time or post-acquisition.

Some of the devices that have been utilized for respiratory motion tracking include:

• An elastic belt placed around the torso of the patient and coupled to a pressure sensor [68].
• A set of reflective markers placed on top of the subject's thorax, and a camera that optically tracks their position during the scan [99].
• A depth-sensing SL camera that tracks the motion of the chest in 3D without the use of reflective markers [100].
• A spirometry device that measures the volume of air moved during respiration [101].
• Noninvasive miniature accelerometers that measure the acceleration of several points on the chest wall [102].

It has also been demonstrated that time-resolved coincidence data themselves can be used for motion tracking and gating; such methods are referred to as data-driven gating. In one approach [103], a series of short-duration (100 ms) sinograms was generated from list-mode coincidence data, and the respiratory signal was derived from the sum of selected bins in the sinogram space, with the bins selected by means of spectral analysis. The motion tracking data obtained this way were well correlated with the data obtained using optical tracking.

Simultaneous PET/MRI acquisition can be leveraged for respiratory motion tracking. For example, navigator pulses can be used to track motion along a particular direction, e.g. the infero-superior motion of the diaphragm, and the appropriate PET gate is determined from the diaphragm position. One-dimensional MR navigators only require processing of a single line of k-space passing through the origin, and the entire excitation-readout sequence only takes on the order of milliseconds [104]. In addition to motion tracking, MRI can be used to derive motion fields, which can be used in PET image reconstruction [105].

Motion of the heart is more complex and can be decomposed into three components: a) cardiac motion, i.e. movement of the heart muscle that produces the pumping action, b) rigid motion of the heart caused by respiration, and c) motion caused by patient movement. In cardiac PET imaging, both the rigid and deformable components of the heart motion must be taken into account in order to obtain accurate images. The rigid component of the heart motion is approximately linearly correlated with the movement of the diaphragm due to respiration. Therefore, dual respiratory-cardiac gating is typically performed in cardiac PET imaging, with different sensors tracking the respiratory and cardiac motions [106, 107].

Cardiac motion cycles are most often tracked using electrocardiography (ECG) measurement devices.
The R wave in the ECG readings is typically used as the reference gating signal, since it has the greatest amplitude: the time intervals between every two sequential R waves are divided into different gates (usually between 4 and 8 gates are used). As an alternative to ECG, it has been shown that myocardial movements can also be detected using accelerometers (implemented as microelectromechanical sensors) attached to the patient's sternum [102]. In simultaneous PET/MRI scanners, MR tagging can be used to measure the deformation of the heart and to track the cardiac motion for coincidence data gating [108].

Depending on how the signal from a motion-tracking device is used, respiratory and cardiac gating methods can be generally categorized as time-based or amplitude-based. In time-based gating, a trigger fires at pre-determined phases of motion (e.g. at the extrema of the signal). The time periods between the triggers are divided into a pre-defined number of intervals that may have equal or variable duration [109], and the coincidence data obtained during different intervals are binned into different gates. In amplitude-based gating, the range of the motion-tracking signal is divided into sub-ranges that correspond to different gates, and the coincidence data are binned according to the sub-range in which the motion-tracking signal was measured at any given time. The sub-ranges may be set to have equal or variable length. The advantage of this method is that it is more sensitive to pace variations in the respiratory and cardiac cycles; the disadvantage is that different gates may contain different numbers of counts.
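A minimal sketch of amplitude-based gating, assuming a sampled surrogate signal and per-event signal values interpolated at the event timestamps (the array names are illustrative):

    import numpy as np

    def amplitude_gate_indices(event_times, signal_times, signal, num_gates):
        """Assign each list-mode event to an amplitude-based gate.

        event_times  : timestamps of the coincidence events
        signal_times : timestamps of the surrogate motion-signal samples
        signal       : surrogate signal amplitudes (e.g. belt pressure)
        num_gates    : number of equal-width amplitude sub-ranges
        Returns an integer gate index in [0, num_gates) for each event.
        """
        # Surrogate amplitude at each event time (linear interpolation)
        amp = np.interp(event_times, signal_times, signal)
        # Equal-width amplitude sub-ranges over the signal excursion
        edges = np.linspace(signal.min(), signal.max(), num_gates + 1)
        return np.clip(np.digitize(amp, edges) - 1, 0, num_gates - 1)

Variable-width sub-ranges (e.g. equal-count gates) would replace the np.linspace call with signal quantiles.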
2.3.2 Deformable Motion Correction of Gated Coincidence Data

A variety of methods have been proposed to reconstruct motion-corrected images from gated coincidence data. They can be divided into three types:

1. independent reconstruction of individual gates followed by their co-registration;
2. reconstruction that incorporates motion and deformation data in the algorithm;
3. simultaneous image reconstruction and motion estimation.

To clarify the terminology, motion tracking shall refer to the acquisition of the gating signal, and motion estimation shall refer to the computation of the rigid motion and deformation parameters.

The first type represents the simplest and most straightforward approach, similar to using multiple acquisition frames in rigid motion correction. Each gate is reconstructed individually, analytically or iteratively, without using the coincidence data from the other gates. Let $\mathbf{Y}_g$ represent the coincidence data that correspond to gate number $g = 1, 2, \ldots, G$, where $G$ is the number of gates. The combined data from the entire scan are represented by the sum $\mathbf{Y} = \sum_{g=1}^{G}\mathbf{Y}_g$. The MLEM image update equation for the reconstruction of individual gate images has the following form:

$\mathbf{A}_g^{m+1} = \dfrac{\mathbf{A}_g^m}{\mathbf{S}}\,\mathbf{P}^T \dfrac{\mathbf{Y}_g}{\mathbf{P}\mathbf{A}_g^m}$   (2.5)

where $\mathbf{A}_g^m$ is the $m$-th iteration of the activity image that corresponds to gate number $g$, $\mathbf{Y}_g$ is the gate coincidence data, $\mathbf{P}$ is the SM (same for all gates), and $\mathbf{S}$ is the sensitivity vector (same for all gates). After the reconstruction, the SNR can be improved by registering (warping) all images $\mathbf{A}_g$ to the space of a reference gate and computing the average image. Alternatively, the transformation parameters can be estimated by acquiring and co-registering MRI or CT images that correspond to the different gates. Although independent reconstruction and co-registration of the gate images can eliminate most of the motion blur, the disadvantage of this method is that only a fraction of the acquired data is used to reconstruct each gate.

In the second type of methods, the deformation estimates are used in the iterative reconstruction process by incorporating the estimated transformations between the gates into the SM of the MLEM algorithm [110, 111]. Transformation matrices $\mathbf{W}_g$ are obtained from the MRI, CT or PET data (preliminary reconstruction), such that $\mathbf{A}_g = \mathbf{W}_g\mathbf{A}_{ref}$, where $\mathbf{A}_{ref}$ represents the reference gate. The expected number of counts for gate $g$ can be expressed as

$E[\mathbf{Y}_g] = \mathbf{P}\mathbf{A}_g = \mathbf{P}\mathbf{W}_g\mathbf{A}_{ref} = \mathbf{P}_g\mathbf{A}_{ref}$   (2.6)

where $\mathbf{A}_g$ is the image of gate $g$, $\mathbf{P}$ is the SM that does not incorporate motion, and $\mathbf{P}_g = \mathbf{P}\mathbf{W}_g$ is the SM that incorporates the object deformation relative to the reference. In this representation, the MLEM image update equation can be written as

$\mathbf{A}_{ref}^{m+1} = \dfrac{\mathbf{A}_{ref}^m}{\mathbf{S}} \sum_{g=1}^{G} \mathbf{P}_g^T \dfrac{\mathbf{Y}_g}{\mathbf{P}_g\mathbf{A}_{ref}^m}$   (2.7)

where the image correction factors are now computed over the coincidence data for all gates. Thus, all the coincidence data are used to reconstruct the image of the reference gate $\mathbf{A}_{ref}$ (as opposed to the method given by Eq. 2.5). It was demonstrated [112] theoretically and experimentally that incorporating motion estimates into the iterative image reconstruction yields images of better quality than post-reconstruction warping and averaging; a sketch of the update in Eq. 2.7 follows below.
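A schematic rendering of one iteration of Eq. 2.7, assuming the SM and warp matrices are small enough to hold in memory (in practice, as discussed below, the factors are computed on the fly):

    import numpy as np

    def mlem_update_with_gates(A_ref, P, warps, gate_counts):
        """One MLEM iteration of Eq. 2.7 over all gates (sketch).

        A_ref       : current estimate of the reference-gate image (1D array)
        P           : (num_lors, num_voxels) motion-free system matrix
        warps       : list of (num_voxels, num_voxels) warp matrices W_g
        gate_counts : list of measured count vectors Y_g, one per gate
        """
        correction = np.zeros_like(A_ref)
        sensitivity = np.zeros_like(A_ref)
        for W, Y in zip(warps, gate_counts):
            Pg = P @ W                            # gate-specific SM, Eq. 2.6
            expected = Pg @ A_ref                 # forward projection
            ratio = np.where(expected > 0,
                             Y / np.maximum(expected, 1e-12), 0.0)
            correction += Pg.T @ ratio            # back-project count ratios
            # Accumulate sensitivity over the gate-specific SMs
            sensitivity += Pg.T @ np.ones(len(Y))
        return A_ref * correction / np.maximum(sensitivity, 1e-12)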
Although the described methods of non-rigid motion correction were found to provide acceptable images for most imaging purposes, they have the following limitations:

• intra-gate motion is not taken into account;
• computing the SM $\mathbf{P}_g = \mathbf{P}\mathbf{W}_g$ is computationally demanding.

In high-resolution scanners that have hundreds of millions of possible LORs, the full SM $\mathbf{P}$ is typically too large to be stored and must be computed on the fly as the list-mode data are processed. Similarly, the transformation matrix $\mathbf{W}_g$ is too large to be stored in full, and instead parts of it are computed on the fly from a smaller number of deformation parameters. Thus, image reconstruction may require significant computational resources. When the number of gates is small and the coincidence data are binned into a low-resolution sinogram, the forward- and back-projection factors can be pre-computed to reduce the reconstruction time; as a trade-off, the full temporal and spatial resolution of the system is not preserved. Nevertheless, gate-based motion correction has become standard in full-body commercial PET scanners.

One alternative method of deformable motion correction is based on image reconstruction using triangular and tetrahedral meshes defined on (organized) point clouds, i.e. sets of points defined in $\mathbb{R}^2$ or $\mathbb{R}^3$ [113–117]. In this case the SM is computed by projecting the mesh elements onto the detector planes, and motion correction is performed by adjusting the mesh node coordinates according to the motion. This approach eliminates the need to compute the image transformation matrices, enables motion interpolation, and, in principle, enables the coupling of image reconstruction with physics-based estimation of internal and external organ deformations, based on the finite element method and similar techniques [118, 119].

In the third type of methods, the parameters that define the transformation between the gates are considered unknown, on the same footing as the unknown activity images. A single model for Poisson-distributed gated measurements is constructed that includes the unknown activity values as well as the transformation parameters [120]. Both unknowns are estimated jointly from the complete coincidence data by maximizing the log-likelihood of the combined model. Alternatively, the image likelihood term and the motion matching energy term can be separated. Compared to the separate estimation of motion followed by image reconstruction, this approach reduces motion blur, increases the SNR, and improves the accuracy of the estimated motion [121].

2.4 Awake Animal Imaging Techniques

In contrast to human studies, where subjects cooperate to reduce motion during the scans, in small animal imaging anesthesia is generally used to eliminate motion. Numerous studies have shown that anesthetic agents influence functional aspects of the brain that may also be implicated in the neurological diseases under study. The use of anesthesia may therefore significantly confound imaging outcomes by unpredictably affecting global and regional tracer uptake.

In small animal imaging, it has been shown that the effect of anesthesia on blood flow, metabolism and dopaminergic function varies between tracers and species. For example, common anesthetics such as isoflurane and propofol were found to increase the clearance of a monoaminergic tracer (a dopamine receptor antagonist) in Gottingen minipigs (implying increased blood flow), but not in primates [58]. In rhesus monkeys, isoflurane has been shown to cause trafficking of the dopamine transporter protein into the cell, thereby increasing the extracellular dopamine concentration [59]. In rats, the use of isoflurane was associated with a significant (up to 22%) reduction in the DVR of 18F-fallypride, a tracer with high D2/D3 dopamine receptor affinity [122]. Pentobarbital-induced anesthesia decreased the striatal BPND of the D1 dopamine receptor by 41% [123]; on the contrary, chloral hydrate and ketamine anesthesias significantly increased the striatal D1 BPND by 36% and 46%, respectively. Cerebral blood flow in rats evaluated using spin-labeled MRI was also affected differently by different anesthetics: the blood flow was reduced heterogeneously with isoflurane and homogeneously with fentanyl [57]. Glucose metabolism under anesthesia was examined in rats using PET imaging and autoradiography by Matsumura et al. [60], and was found to be reduced compared to the conscious state, similar to the results obtained in human imaging. The different types of anesthesia used in small animal PET, SPECT, CT and MRI studies, and their effects on physiology, were reviewed by Hildebrandt and Weber [61].

The common use of anesthesia in small animal imaging, but not in human imaging, impedes direct translation and cross-interpretation of results between pre-clinical and clinical studies. Even when a common anesthetic is used, the functional alterations it causes in humans and in small animals may differ. In addition, the necessity to immobilize animals imposes limitations on the types of studies that can be conducted; for example, it makes it impossible to investigate neurophysiological and conscious behavioral responses to external stimuli. Therefore, there exists a considerable effort to develop methods of awake animal imaging that avoid motion blur, with a primary focus on rats and mice.

The most straightforward method is based on using stereotaxic devices that fully restrain the head and body of the animals [124].
Such devices are often invasive, and require surgery for the implantation of foreign mechanical parts into the skull. After surgery, conditioning of the animals lasting on the order of weeks [123] is required to make them sufficiently accustomed to the restraints.

In a different approach that allows animal movement, a miniature wearable ring of detectors called "RatCAP" was developed [125]. The ring is surgically attached to the rat's skull and supported by counterweights, which allows the rat to move relatively freely during the scans. A period of acclimatization is required after the surgery to reduce stress levels and to make the animals accustomed to the RatCAP. The system was used to demonstrate that rat brains contain more dopamine in the anesthetized state than in the conscious state [126]; it was also shown that more active rats had lower levels of extracellular dopamine.

The methodological and ethical difficulties associated with surgery are avoided in methods where the motion of the animal's head is tracked during PET or SPECT scans, followed by rigid motion correction in the reconstruction process [127]. The animal is placed inside a tube or burrow at the center of the FOV that allows a moderate amount of motion. The motion is measured by external cameras that track the position of either a single marker [128, 129] or multiple infrared markers [130] attached to the head. Reliance on markers increases the probability of failed experiments due to accidental detachment, requires animal training, and restricts the range of animal motion [131]. Therefore, marker-free head tracking has been explored [131, 132] that substantially simplifies the imaging protocol. The head motion is recorded on video from multiple viewpoints, and feature-based landmark points (such as SIFT features [84]) are detected in the images. Under the assumption that the head motion is rigid, the orientation and position of the head can be derived from the corresponding landmarks between different viewpoints; for example, this can be done using random sample consensus (RANSAC [133]) under a perspective transform.

Surgical procedures, placing animals in a burrow, and attaching head markers produce elevated levels of stress in animals. A persistent increase in stress can induce undesirable alterations in behavior and neurophysiology that can in turn affect imaging results, especially in longitudinal studies. Experimentally, the amount of stress experienced by an animal at different stages of imaging (pre-operative, post-operative, during scans, etc.) can be evaluated by measuring the concentration of epinephrine and corticosterone in the blood. It has been shown that restraints and injections increase the amount of corticosterone in rats, accompanied by increases in glucose levels and heart rate that may lead to hyperthermia [134]. Mizuma et al. [124] reported that surgical immobilization of animals increased corticosterone levels up to 4-fold on the day after the procedure, and increased levels (2-fold above baseline) persisted for up to several days. Similarly, with the RatCAP, the amount of corticosterone increased 4-fold after the detector attachment [126] (compared to baseline). In surgery-free SPECT imaging of mice, a 3-fold increase in corticosterone was measured one hour after placing the animals in a burrow [130].

Open-chamber imaging designs may substantially reduce animal stress and offer the greatest flexibility in the behavioral studies that can be conducted.
They also require the least amount of animal training prior to the experiment. The idea is to allow the animal to move freely inside a small chamber placed inside the scanner, and to use motion tracking and correction to eliminate motion blur. This method requires elaborate motion tracking setups that consist of multiple cameras. Although no such imaging system has been developed and validated to date, initial attempts at designing such systems have been reported. Zhou et al. [135, 136] are designing a robotic platform that can adjust the position of a transparent chamber inside the gantry of a PET scanner: using real-time optical motion tracking of the rodent placed inside the chamber, the platform attempts to maintain a consistent location and orientation of the rodent's head. In another study, the quad-HIDAC PET camera [137] was used to scan mice moving freely inside an open chamber [138]; the authors proposed to estimate the internal organ motion using short-time reconstruction and mass-preserving registration of motionless periods.

Imaging of freely-moving rodents requires more sophisticated motion correction methods than rigid motion correction. One must account for the fact that the detected gamma photons may emanate from the head as well as from the body. The distributions of activity and gamma-attenuating material may change non-rigidly during the scan, as the animal's torso deforms or when the animal moves its head relative to the body; the torso positioned between the head and the detectors may cause substantial gamma attenuation. Initial attempts to account for deformable animal motion in the image reconstruction process have recently been reported [139]. The authors proposed to discard those coincidence events that were detected when the trunk imposed a substantial footprint on the LORs intersecting the head. The limitation of this method is that it may substantially lower the number of detected coincidence events, worsening the image quality; in addition, this kind of approach is only suitable for neuroimaging studies.

2.5 Study Objectives

Imaging of freely-moving rodents requires a motion correction method that can account for complex motion with rigid and deformable components. The traditional methods of deformable motion correction, based on the use of gates and image registration [65, 73, 121, 140], are well-suited for cyclic motion types, since in these cases only a relatively small number of gates is required to achieve acceptable image quality. However, with the non-cyclic deformable motion expected in unrestrained animal imaging, these methods become impractical due to the very large number of required gates.

The goal of the first part of this work (Chapters 3 and 4) was to develop an iterative image reconstruction method that incorporates efficient correction for non-cyclic rigid and deformable motion, and takes advantage of the high temporal resolution offered by list-mode reconstruction. A novel approach to iterative image reconstruction is proposed, wherein unorganized point clouds are used to model the imaged objects in the image space, and motion is modeled explicitly by using time-dependent point coordinates. The image function is represented using constant basis functions with finite support determined by the boundaries of the Voronoi cells in the point cloud. The quantitative accuracy and stability of the proposed method are validated by reconstructing noise-free and noisy projection data from digital and physical phantoms (Chapter 3).
The applicability of the method to scans with realistic rodent motion is validated using a purpose-built digital phantom of a freely-moving mouse (Chapter 4). The point-cloud based MLEM and one-pass list-mode OSEM algorithms are explored. Exploration of motion tracking methods was not part of this work; the motion correction method is expected to work with several of the existing motion-tracking methods, as discussed in the next chapters.

Chapter 3
PET Image Reconstruction with Motion Correction using Unorganized Point Clouds

3.1 Introduction

Gate-based deformable motion correction methods are poorly suited for the non-cyclic motion types expected to be encountered in the imaging of freely-moving rodents. Alternative approaches based on meshes [113, 114] are also inherently limited in that the mesh elements may become non-physical, overlapping, or otherwise inconsistent with the imaging system model (due to local effects) during complex motion, for example under extensive shear deformation or relative motion of adjacent parts of the imaged object. Therefore, in imaging applications where complex motion may be present, global re-meshing may be required for a large number of motion frames. This may be computationally demanding if the number of mesh elements is large, and may introduce approximation errors.

The problem of re-meshing has been previously studied and addressed in the field of computational mechanics simulations [141, 142], where a separate class of mesh-free approximation methods was developed in order to simplify the modeling of large non-linear deformations, fractures, joints, and motion at interfaces. The mesh-free methods are based on representing the domain of interest using unorganized point clouds: sets of points in space without defined connectivity. This work proposes to use a similar approach in tomographic image reconstruction with correction for complex motion with rigid and deformable components.

In this chapter, two novel methods are described and validated: 1) tomographic image reconstruction using unorganized point clouds, and 2) ordered-subset list-mode PET image reconstruction using point clouds with event-by-event deformable motion correction. Within the proposed approach, the imaged object is represented in the image space by an unorganized dynamic point cloud, and the image function is defined using basis functions with support determined by the Voronoi cells of the points in the cloud. The size and shape of these basis functions are therefore implicitly determined by the local point arrangement. In list-mode reconstruction, object motion is assumed to be known. The motion is incorporated into the SM by adjusting the point cloud configuration according to the time of the processed coincidence event. The computation of the SM elements is performed without the explicit determination of the Voronoi cell boundaries. The image reconstruction algorithm estimates the activity concentration corresponding to each point/basis function, and the reconstructed activity estimates are then voxelized for image post-processing and analysis.

The proposed approach does not require global explicit meshing, and offers additional flexibility in motion modeling since the Voronoi-cell defined basis functions cannot overlap by definition.
The volume of a Voronoi cell is a continuous function of the point cloud deformation, and it can be directly utilized to evaluate the local object compression or expansion.

Following the description of the point-cloud based reconstruction technique in Section 3.2, the quality of the reconstructed images is evaluated in three parts. In Section 3.3.1, simulated noise-free projection data are reconstructed using point clouds with varying uniformity of the point distribution, to quantify the magnitude of the reconstruction errors and the accuracy of the reconstructed images. In Section 3.3.2, acquired sinogram data are reconstructed using the proposed Voronoi cell basis functions (VBF), radially symmetric basis functions (RBF) [143], and rectangular basis functions (RecBF), and the image quality and convergence rates are compared between the methods. In Section 3.3.3, simulated list-mode data from a deformable (moving) bar phantom are reconstructed with motion correction applied, and the quality and accuracy of the reconstructed images are evaluated. In Section 3.4, the validation results are discussed.

3.2 Methods

3.2.1 Image Reconstruction Using Unorganized Point Clouds

Image Representation and Object Modeling

The imaged object is represented by an unorganized point cloud $P = \{x_n\}$, $n = 1...N$, spatially limited by the volume $\Omega$ (Fig. 3.1A). The space exterior to $\Omega$ is represented by a point cloud $B = \{x_k\}$, $k = 1...K$. The unknown image function $\tilde{f}(x)$ is represented by a set of basis functions associated with the points in $P$ and $B$. Let $\varphi_n$ denote the basis functions corresponding to the points $x_n$ in $P$, and $\psi_k$ denote the basis functions corresponding to the points $x_k$ in $B$. The image function is then given by the equation

$\tilde{f}(x) = \begin{cases} \sum_{n=1}^{N} \eta_n \varphi_n(x) & \text{if } x \in \Omega \\ \sum_{k=1}^{K} \nu_k \psi_k(x) & \text{if } x \notin \Omega \end{cases}$   (3.1)

where $\eta_n$ and $\nu_k$ are the basis function coefficients. The expected number of events acquired along the LOR $i$ is modeled as

$E[y_i] = \sum_{n=1}^{N} \eta_n a_{in} + \sum_{k=1}^{K} \nu_k b_{ik}$   (3.2)

where $a_{in}$ and $b_{ik}$ are coefficients that quantify the probabilistic contributions of the basis functions $\varphi_n$ and $\psi_k$ to the events recorded along the LOR.

In this work, $\varphi_n$ and $\psi_k$ were chosen to be constant functions with finite support defined by the Voronoi cell boundaries of the respective points in $P$ and $B$. Considering only the points in $P$ for now, the basis functions $\varphi_n(x)$ are defined by the equation

$\varphi_n(x) = \begin{cases} 1 & \text{if } x \in [\Omega \cap \omega_n] \\ 0 & \text{if } x \notin [\Omega \cap \omega_n] \end{cases}$   (3.3)

where $\omega_n$ is the Voronoi cell associated with the point $x_n \in P$ (Fig. 3.1B).

Figure 3.1: A. The imaged object and background are represented in the image space by two unorganized point clouds; $\Omega$ denotes the space occupied by the object. B. The image function is defined using VBF inside $\Omega$, and using RecBF outside $\Omega$; the SM coefficients $a_{in}$ and $b_{ik}$ are equal to the intersection lengths of the LOR $i$ with the basis functions. C. Schematic representation of the NEMA-NU4 phantom; images corresponding to sections 1, 2 and 3 are used to generate noise-free projections. D. Digital phantom of a deformable (bending) bar used for the validation of deformable motion correction, shown in the reference configuration (0 degrees) and two deformed configurations.

There are several advantages to using the Voronoi-defined basis functions for image reconstruction. First, such basis functions are non-overlapping by definition, which eliminates the need to take into account any overlap between basis functions. Second, Voronoi cells have local support, with shapes determined entirely by the arrangement of the neighboring points; this means that the computation of line projections can be performed locally (as described in the next section). Third, it is not necessary to define the connectivity between the points (as opposed to mesh-based approaches). Finally, the local compression/expansion of the point cloud can be estimated as the volume (area in 2D) ratio of the Voronoi cells.
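Because a point x lies in the Voronoi cell $\omega_n$ exactly when $x_n$ is its nearest point in the cloud, evaluating the piecewise-constant image function of Eq. 3.1 inside $\Omega$ reduces to a nearest-neighbour query. A small sketch (my illustration, using scipy's k-d tree):

    import numpy as np
    from scipy.spatial import cKDTree

    def evaluate_vbf_image(query_pts, cloud_pts, eta):
        """Evaluate the VBF image function at arbitrary locations (Eq. 3.1).

        query_pts : (M, 3) evaluation locations, assumed to lie inside Omega
        cloud_pts : (N, 3) point cloud P
        eta       : (N,) basis-function coefficients (activity values)
        """
        # x is inside the Voronoi cell of its nearest generating point, so
        # the piecewise-constant image value is a nearest-neighbour lookup.
        _, idx = cKDTree(cloud_pts).query(query_pts)
        return eta[idx]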
The probabilistic contributions $a_{in}$ of such basis functions to the LOR $i$ can be computed as

$a_{in} = \int_{FOV} \chi_i(x)\,\varphi_n(x)\,dx = \int_{\omega_n \cap \Omega} \chi_i(x)\,dx$   (3.4)

where $\chi_i(x)$ is the impulse response function for the LOR $i$, and FOV denotes the entire field of view. Adopting the line-integral emission model, we can write

$a_{in} = \int_{LOR\ i} \varphi_n(x)\,dx = l_{in}$   (3.5)

where $l_{in}$ is the length of intersection between the LOR $i$ and the Voronoi cell $\omega_n$.

It is possible to consider the Voronoi cells as a generalization of the standard voxel-based image domain partitioning, since the Voronoi cells of a point grid generate a partitioning that is identical to voxels. The use of voxel-shaped basis functions to represent the volume exterior to the imaged object (background) is beneficial from a performance standpoint, as it allows the coefficients $b_{ik}$ to be computed efficiently using raytracing. Hence, the points in $B$ are arranged in a grid, and the basis functions $\psi_k$ are defined by the equation

$\psi_k(x) = \begin{cases} 1 & \text{if } x \in [\Omega^c \cap \text{voxel } k] \\ 0 & \text{if } x \notin [\Omega^c \cap \text{voxel } k] \end{cases}$   (3.6)

where $\Omega^c$ is the complement of $\Omega$. The explicit definition of $\Omega$ is not strictly necessary for image reconstruction in the proposed framework, as it can be defined implicitly by the shared Voronoi cell boundaries between the points in $P$ and $B$. However, in this case the exact location of the boundary would depend on the relative point densities in $P$ and $B$, and also on the location of $P$ in the image space; the explicit definition of $\Omega$ eliminates this problem.

To find the estimates of the activity values from the projection data, we first note that Eq. 3.2 can be written in the matrix form

$E(\mathbf{Y}) = \begin{bmatrix} \mathbf{A} & \mathbf{B} \end{bmatrix} \begin{bmatrix} \boldsymbol{\eta} \\ \boldsymbol{\nu} \end{bmatrix} = \mathbf{P}\boldsymbol{\lambda}$   (3.7)

where the matrices $\mathbf{A}$ and $\mathbf{B}$ are concatenated into the matrix $\mathbf{P}$, and the column vectors $\boldsymbol{\eta}$ and $\boldsymbol{\nu}$ are concatenated into $\boldsymbol{\lambda} = \{\lambda_j\}$, $j = 1...J$, to form the familiar linear model of the imaging system. To find $\boldsymbol{\lambda}$ in the ML sense, the MLEM algorithm [23] can be used with the following point image update equation:

$\lambda_j^{m+1} = \dfrac{\lambda_j^m}{s_j} \sum_{i=1}^{I} p_{ij} \dfrac{y_i}{\sum_{j'=1}^{J} p_{ij'}\lambda_{j'}^m}$   (3.8)

where $y_i$ is the recorded number of counts along the LOR $i$, $\lambda_j^m$ is the $m$-th activity concentration estimate for the point $j$, $s_j = \sum_{i=1}^{I} p_{ij}$ is the sensitivity normalization factor, and $I$ is the number of possible LORs. This equation is identical to the equation used in traditional voxel-based image reconstruction. Therefore, by extension, the activity values in the proposed framework can also be estimated using ordered-subset expectation maximization (OSEM) and one-pass list-mode OSEM reconstruction [144].
Algorithm to Compute $a_{in}$

The direct method to compute the coefficients $a_{in}$ would require the explicit calculation of the Voronoi cell boundaries for all points in the cloud. If the number of cells and motion frames is large, this method may be computationally prohibitive. In contrast, the algorithm used here does not require the explicit determination of the Voronoi cell boundaries and works in any number of dimensions. The key assumption of the algorithm is that, in practice, the Voronoi cells in the point clouds used for image reconstruction will be no wider than a given length $\rho$ in any direction:

$\forall n : \|\Pi_{\xi}(\partial\omega_n)\| \leq \rho$   (3.9)

where $\partial\omega_n$ defines the boundary of the Voronoi cell of the point $n$, and $\Pi_{\xi}$ denotes the orthogonal projection operator onto a line $\xi \in \mathbb{R}^d$. With this assumption, the algorithm to compute the line-Voronoi intersection lengths consists of the following steps, illustrated for 2 dimensions in Fig. 3.2:

1) Starting with the input point cloud $P$, compute the distances $r_{ni}$ between the points in $P$ and the line $\xi_i$. If for any point in $P$ the distance $r_{ni}$ is greater than $\rho$, then the corresponding Voronoi cell cannot be intersected by the line $\xi_i$. Therefore, we can restrict further computation to a subset of points $P_i \subseteq P$ such that

$P_i = \{x_n,\ n = 1...N : r_{ni} \leq \rho\}$   (3.10)

2) Find pairs of points in $P_i$ for which the Voronoi cells may share a boundary. Two Voronoi cells cannot share a boundary if the distance between their generating points is greater than $2\rho$. Using this requirement, the adjacency matrix $D$ is generated:

$D[n, k] = \begin{cases} 1, & \text{if } \|x_n - x_k\| \leq 2\rho \\ 0, & \text{if } \|x_n - x_k\| > 2\rho \end{cases}$   (3.11)

where $n$ and $k$ are indexes from $P_i$. The matrix $D$ captures the point pairs for which the Voronoi cells may be adjacent. The computation of this matrix can be optimized by sorting the points along the line and traversing the line once in one direction.

3) For every non-zero element in $D$, find the intersection coordinate $h_{ink}$ between the line $\xi_i$ and the hyperplane $H(x_n, x_k)$ that is perpendicular to the segment $[x_n, x_k]$ and crosses it at the midpoint:

$h_{ink} = H(x_n, x_k) \cap \xi_i \,\big|\, D[n, k] = 1$   (3.12)

If the Voronoi cells $\omega_n$ and $\omega_k$ are adjacent, the hyperplane defined by the shared boundary must, by definition, be perpendicular to, and pass through the middle of, the segment $[x_n, x_k]$. Therefore, the set of coordinates $h_{ink}$ computed for all points for which $D[n, k] = 1$ must necessarily include all possible intersection points between $\xi_i$ and the Voronoi cell boundaries. At this step, the coordinates $h_{ink}$ may be additionally filtered using the conditions $\|x_n - h_{ink}\| \leq \rho$, $\|x_k - h_{ink}\| \leq \rho$.

4) Sort the coordinates $h_{ink}$ along the line to define line segments $l_{iq}$, $q = 1...Q$. For each segment $l_{iq}$, find the nearest neighbor point in $P_i$ (using any internal point of $l_{iq}$), and add the segments that have the same nearest neighbor:

$Q_{in} = \{q : NN(l_{iq}) = x_n\}$   (3.13)

$a_{in} = \sum_{q \in Q_{in}} \|l_{iq}\|$   (3.14)

where $a_{in}$ are the required intersection lengths between the considered line $\xi_i$ and the Voronoi cells $\omega_n$.

Figure 3.2: Main steps of the algorithm to compute the intersection length between the LOR and the implicitly-defined Voronoi cells.
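A compact sketch of steps 1–4, assuming the LOR is given parametrically as p0 + s·u, and omitting the Ω-boundary clipping and the additional ρ-filtering of step 3 (my simplifications, not the thesis implementation):

    import numpy as np
    from scipy.spatial import cKDTree

    def lor_voronoi_lengths(points, p0, u, rho, s_range):
        """Intersection lengths a_in between one LOR and implicit Voronoi cells.

        points  : (N, d) point cloud P
        p0, u   : point on the LOR and its unit direction vector
        rho     : upper bound on the Voronoi cell width (Eq. 3.9)
        s_range : (s_min, s_max) parametric extent of the LOR to consider
        Returns {point index: intersection length}.
        """
        # Step 1: keep only points within rho of the line (Eq. 3.10)
        diff = points - p0
        dist = np.linalg.norm(diff - np.outer(diff @ u, u), axis=1)
        cand = np.where(dist <= rho)[0]
        sub = points[cand]

        # Step 2: pairs whose cells may share a boundary (Eq. 3.11)
        pairs = cKDTree(sub).query_pairs(2.0 * rho)

        # Step 3: line / bisector-hyperplane intersections (Eq. 3.12)
        cuts = [s_range[0], s_range[1]]
        for n, k in pairs:
            d = sub[k] - sub[n]
            denom = u @ d
            if abs(denom) > 1e-12:                 # skip parallel bisectors
                m = 0.5 * (sub[n] + sub[k])        # midpoint of the segment
                cuts.append(((m - p0) @ d) / denom)

        # Step 4: assign each segment to its nearest point (Eqs. 3.13-3.14)
        s = np.clip(np.sort(np.asarray(cuts)), s_range[0], s_range[1])
        tree, lengths = cKDTree(sub), {}
        for s0, s1 in zip(s[:-1], s[1:]):
            if s1 > s0:
                mid = p0 + 0.5 * (s0 + s1) * u     # internal point of segment
                _, j = tree.query(mid)
                lengths[cand[j]] = lengths.get(cand[j], 0.0) + (s1 - s0)
        return lengths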
Voxelization of Activity Values

The final step in the reconstruction process is to convert the point activity concentration values to voxel images for further processing (e.g. smoothing) and analysis. To preserve quantification, voxelization must be performed by adding the activity contributions from all Voronoi cells that overlap with a particular voxel. Implemented directly, this method may require a substantial amount of computation. Here, an approximate but faster subsampled nearest-neighbor interpolation was used that consists of the following steps: 1) define a grid of sampling points with 4 times higher resolution than the required voxelized resolution ($4^3$ sampling points per voxel); 2) set the activity concentration of each sampling point to that of the nearest point in $P$; 3) compute the voxel activity concentration values as the mean activity of the sampling points inside each voxel. The difference between the images produced using this method and the direct computation was found to be negligible (data not shown).

3.2.2 Phantom Data

Static Phantoms for Image Quality Evaluation

The reconstructed image quality is assessed using the digital and physical NEMA-NU4 phantoms [145]. The phantom is a cylindrical container (L: 50.0 mm, D: 30.0 mm) that consists of 3 axial sections (Fig. 3.1C). Section 1 is a cylindrical chamber filled with activity that contains two smaller chambers (L: 15.0 mm, ID: 8.0 mm, OD: 10.0 mm), one filled with air and the other filled with water. Section 2 is a cylindrical chamber uniformly filled with activity. Section 3 consists of five extended line sources embedded in plastic, with diameters 1.0, 2.0, 3.0, 4.0 and 5.0 mm.

The digital equivalents of the three phantom sections were used to generate noise-free 2D projections. Pixels corresponding to activity were set to one, and all background pixels were set to zero (as shown in Fig. 3.1C). Forward-projection of the images was performed by computing line integrals along 128 angular and 128 radial bins with a bin size of 0.86 mm.

The physical NEMA-NU4 phantom was scanned on the microPET Focus 120 scanner (Concorde/Siemens) to acquire realistic 2D sinogram data. The phantom contained ~0.6 mCi of F-18 in aqueous solution, and the total number of acquired events was ~230 million. The acquired data were Fourier-rebinned into 95 direct-plane sinograms with 128 angular and 128 radial bins with a bin size of 0.86 mm. The sinogram data were corrected for attenuation, detector deadtime, and random coincidences; scatter correction was not applied.

Due to the subtraction of the random counts by the scanner's software, sinogram bins corresponding to the background often (~15% of the total bin number) contained small negative values. These bins were set to zero prior to reconstruction. Although this procedure may introduce a positive bias in the absolute quantification of the images, it was considered acceptable for the task of comparing image quality between different reconstruction methods, as the sinogram data remain identical regardless of the reconstruction method.

Deformable Phantom for Motion Correction Validation

Deformable motion correction was validated using an in-house developed digital bar phantom (Fig. 3.1D). The phantom has dimensions 20.0 (x) × 20.0 (y) × 60.0 (z, axial) mm (undeformed configuration) and consists of 3 sections. Section 1 contains four rectangular activity regions, with relative activity values set to 0 (cold spot), 2.0, 3.0 and 4.0; the background activity value is set to 1.0. Section 2 is a region with uniform (background) activity. Section 3 contains 16 cylindrical sources arranged in a 4-by-4 grid, with activity values set to 3.0. The attenuation coefficient everywhere in the phantom was set to 0.096 cm⁻¹ (water).

The bar phantom was digitally represented by a tetrahedral mesh with known point (node) activity values $\lambda_n$ and attenuation values $\mu_n$. To generate deformation, a time-varying bend transformation in the xz plane was applied to the coordinates of the mesh using Blender software (www.blender.org). The duration of the simulated motion is 180 seconds, during which time the phantom deformed from 0 to 180 degrees, at an average rate of 1.0 degree/second.
The motion was discretized into 181 uniformly spaced time frames.

To generate simulated list-mode coincidence data, the mesh was voxelized (256 × 256 × 96 voxels, voxel size 0.5 mm) using the compression/expansion-adjusted activity values $\lambda_n(t)$:

$\lambda_n(t) = \lambda_n(t_0)\,\alpha_n^{-1}(t)$   (3.15)

where $\lambda_n(t_0)$ is the activity value at time $t_0 = 0$ s (reference configuration). The point-wise expansion coefficients $\alpha_n(t)$ were computed using the equation

$\alpha_n(t) = |\omega_n(t)| \,/\, |\omega_n(t_0)|$   (3.16)

where $|\omega_n(t)|$ is the volume of the Voronoi cell of the point (node) $n$ at time $t$, and $|\omega_n(t_0)|$ is the corresponding volume at time $t_0$. The voxelized attenuation images were not adjusted for compression/expansion, based on the observation that in tissues with µ-values similar to that of water (or higher), the µ-value is not expected to change with deformation (compression/expansion occurs mostly in tissues with low µ-values, such as the lungs and the organs of the digestive tract). Thus, in realistic imaging scenarios the µ-values will likely be set constant, and this was adopted for this phantom.

The voxelized images were used for an in-house developed Monte Carlo emission simulation. Back-to-back gamma photons were modeled to originate anywhere in a voxel at a rate determined by the activity value for that voxel. Gamma attenuation was modeled in the simulation, but scattered and random coincidences were not. Only one interaction with matter was modeled per gamma pair, and the probability of interaction was computed using the voxelized attenuation image of the phantom and the Beer-Lambert law. The virtual scanner was set to have a cylindrical camera geometry, with the axial length and diameter equal to 190.0 mm. The centroid of the phantom was at the center of the FOV, and the gamma detection efficiency was set to 100%. The simulation resulted in approximately 250,000 list-mode events for each time frame t (~45 million events in total). In addition to the list-mode data simulated for the entire range of motion, high-statistics list-mode data (~45 million events) were also simulated for the selected individual time frames t = 0, 60, 120, 180 s.

3.2.3 Image Reconstruction from Sinogram Data

The quality of the reconstructed VBF images (digital and physical NEMA phantoms) is evaluated using point clouds with varying uniformity of the point distribution. The point clouds for reconstruction were generated by iteratively applying Lloyd's relaxation (LR) algorithm [146] to randomly distributed points (Fig. 3.3A). The random point distributions were initialized using a point process with a uniform probability distribution in the x and y dimensions in the part of the image space containing the phantom (36.4 mm × 36.4 mm, 1764 points, 0.86 mm² per point). The resulting point clouds appear locally compressed or expanded relative to a uniform point distribution (taken as the reference configuration). Applying LR iteratively reduced the magnitude of the apparent compression/expansion, which was quantified using Eq. 3.16 (with the values of $|\omega_j(t_0)|$ set equal to 0.86 mm², the average area per point). The graph in Fig. 3.3A illustrates the distribution of $\alpha$ at different LR iterations ($N_{LR}$). The range and pattern of the expansion coefficients produced in this manner approximately match those observed in compressible tissues, for example in the lungs [147, 148].

The acquired and projected sinograms were reconstructed, voxelized and analyzed after different numbers ($N_{LR}$ = 0, 2, 5, 15) of LR iterations had been applied to the initial (random) point clouds.
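A discrete variant of Lloyd relaxation is easy to sketch: assign a dense grid of samples to their nearest cloud points (thereby approximating the Voronoi cells) and move each point to the centroid of its samples. This is my illustration, not the exact implementation used in the thesis:

    import numpy as np
    from scipy.spatial import cKDTree

    def lloyd_relaxation(points, bounds, n_iter=5, samples_per_dim=512):
        """Discrete Lloyd relaxation of a 2D point cloud (sketch).

        points : (N, 2) initial point coordinates
        bounds : (xmin, xmax, ymin, ymax) region containing the cloud
        """
        xmin, xmax, ymin, ymax = bounds
        gx, gy = np.meshgrid(np.linspace(xmin, xmax, samples_per_dim),
                             np.linspace(ymin, ymax, samples_per_dim))
        samples = np.column_stack([gx.ravel(), gy.ravel()])
        pts = points.copy()
        for _ in range(n_iter):
            # Nearest-point labels approximate the Voronoi partition
            _, label = cKDTree(pts).query(samples)
            for n in range(len(pts)):
                cell = samples[label == n]
                if len(cell):                  # move point to cell centroid
                    pts[n] = cell.mean(axis=0)
        return pts

Each iteration makes the implied Voronoi cells more uniform, which is exactly the effect quantified by the α distributions in Fig. 3.3A.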
The reconstructed activity concentration values were not adjusted for the point cloud compression and expansion.

Images reconstructed using VBF are compared to images reconstructed using RecBF and RBF. The SM elements in the RecBF method are computed using Siddon raytracing. The RBF are represented by radially (spherically) symmetric functions $b_{m,a,\alpha}(r)$ [149] of the form

$b_{m,a,\alpha}(r) = \begin{cases} \dfrac{1}{I_m(\alpha)} \left(\sqrt{1 - (r/a)^2}\right)^{m} I_m\!\left(\alpha\sqrt{1 - (r/a)^2}\right), & \text{if } 0 \leq r \leq a \\ 0, & \text{if } r > a \end{cases}$   (3.17)

where $r$ is the radial distance from the origin (function center), $I_m(\cdot)$ is the modified Bessel function of order $m$, $a$ is the radius of the function, and $\alpha$ is a parameter that controls the shape of the function. These functions are spatially localized and nearly band-limited. The RBF image reconstruction was implemented as described in [143], using the parameter values suggested by the authors ($m$ = 2, $\alpha$ = 10.4, $a$ = 2).

All reconstruction methods use the MLEM algorithm given by Eq. 3.8. All SM coefficients are computed prior to the reconstruction and stored; the sensitivity factors $s_j$ are obtained by computing the sum of the SM coefficients over all LORs.

3.2.4 List-mode 3D Image Reconstruction

The point cloud used for the reconstruction of the simulated list-mode data is illustrated in Fig. 3.3B. The motion of the point cloud is matched to the motion of the bending bar phantom: as the point cloud deforms, the density of points on the inner side increases (compression), and the density on the outer side decreases. The change in the distribution of the point-wise expansion coefficients $\alpha$ in the point cloud with time is shown by the graph in Fig. 3.3B. The range of the modeled expansion coefficients encompasses the typical expansion/compression ratios observed in compressible tissues. The µ-values for all points are set to 0.096 cm⁻¹ and the activity values are set to one prior to reconstruction.

Figure 3.3: A. Point clouds used for VBF image reconstruction from sinogram data, and images that visualize the corresponding map of the $\alpha$ values (a logarithm is used to linearize the scale); the graph shows the distribution of the $\alpha$ values in the cloud with sequential LR iterations. B. Point cloud (with defined boundary) used for the reconstruction of the simulated list-mode data with motion correction; only 12.5% of the actual number of points is shown, and the point color intensity is proportional to the local compression. The graph shows the distribution of the $\alpha$ values in the phantom at different time frames (deformations).

To reconstruct the images of the phantom, the one-pass list-mode OSEM [144] image update equation is used, modified to take into account the activity concentration change due to compression/expansion:

$\lambda_j^{m+1}(t_0) = \dfrac{\lambda_j^m(t_0)}{s_j} \sum_{i \in T_m} p_{ij}\, \dfrac{1}{\sum_{j'=1}^{J} p_{ij'}\,\alpha_{j'}^{-1}(t_i)\,\lambda_{j'}^m(t_0)}$   (3.18)

where $t_i$ is the time of the processed list-mode event, $t_0$ is the time of the reference configuration, $T_m$ is the $m$-th subset of the list-mode data, and $\lambda_j^m(t_0)$ is the $m$-th iteration of the activity values in the reference configuration at time $t_0$. This equation is obtained by using the relation $\lambda_j^m(t) = \lambda_j^m(t_0)\,\alpha_j^{-1}(t)$ in the ordinary list-mode OSEM image update equation, where $\lambda_j^m(t)$ is the $m$-th activity iteration in the deformed configuration at time $t$, and $\alpha_j(t)$ is the time-dependent expansion coefficient (set to 1.0 for background voxels).
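A schematic rendering of the update in Eq. 3.18 for one subset, assuming a helper system_matrix_row() that implements the point-cloud projector described above with the cloud deformed to the event time (a hypothetical helper; the real implementation computes these factors on the fly):

    import numpy as np

    def listmode_osem_subset(lam0, alpha_at, events, system_matrix_row, s):
        """One subset update of motion-corrected list-mode OSEM (Eq. 3.18).

        lam0              : activity estimates in the reference configuration
        alpha_at          : alpha_at(t) -> per-point expansion coefficients
        events            : iterable of (t_i, lor_i) events in subset T_m
        system_matrix_row : (lor, t) -> (indices, values) of nonzero p_ij
        s                 : sensitivity factors s_j for this subset
        """
        corr = np.zeros_like(lam0)
        for t_i, lor_i in events:
            idx, p = system_matrix_row(lor_i, t_i)
            # Forward projection of the estimate mapped to time t_i via
            # lambda_j(t_i) = lambda_j(t_0) / alpha_j(t_i)
            denom = np.dot(p, lam0[idx] / alpha_at(t_i)[idx])
            if denom > 0:
                corr[idx] += p / denom        # back-project the reciprocal
        return lam0 * corr / np.maximum(s, 1e-12)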
To form the subsets, the coincidence-list data are split into 6 sequential segments of equal size. Coincidences in each segment are in turn equally split into M portions, and each subset T_m is formed by combining the data from the m-th portions of all segments [150]. The image is updated after processing each subset T_m. The SM coefficients p_ij are computed in the order of the events in the subset. To account for object motion in the reconstruction process, the configuration of the point cloud is adjusted according to the time of the processed list-mode event.

Ignoring the effects of crystal penetration, inter-crystal scatter and variable detector pair sensitivity (not modeled in this work), the SM elements are represented as the product p_ij = g_ij w_i, where g_ij is a purely geometric term (a_in or b_ik in Eq. 3.2) and w_i is the attenuation factor for LOR i. The attenuation factors were computed as w_i = exp(−Σ_{j=1}^{J} l_ij µ_j), where µ_j is the attenuation coefficient for the point j, and l_ij are the intersection lengths between the basis functions and the LOR.

The sensitivity factors s_j = Σ_{i=1}^{I} p_ij are normally obtained by taking the sum of the SM elements over all possible LORs of the system (I). In the proposed method, the SM elements incorporate the non-cyclic motion and the corresponding change in the attenuation map; therefore, the direct computation of the sensitivity factors is not computationally feasible. Instead, the s_j are estimated by backprojecting a number of randomly sampled LORs for each subset T_m. Each subset T_m contains multiple motion frames t, and the backprojection of the random LORs thus generates "subset-mean" sensitivity factors that take motion (and the attenuation change) into account. The necessity to backproject the randomly sampled LORs increases the time required for image reconstruction several-fold [30] (by a factor of 2-4 in this work).

To reconstruct the images of the static phantom in the deformed configurations (t = 0, 60, 120, 180 s), the deformed point cloud coordinates are used in Eq. 3.18 with α_j ≡ 1; the activity values in the deformed and reference configurations are voxelized after applying the correction λ_j(t) = λ_j(t_0) α_j^{-1}(t) only to the last iteration of the activity values. To reconstruct the motion-corrected images of the bending phantom, Eq. 3.18 is used directly, and the values of α_j are obtained using Eq. 3.16.

Implementation Details

The reconstruction was accelerated using efficient spatial search queries. The search for points that lie close to a LOR was performed by partitioning the image space into a grid and indexing the points according to their grid location. The grid was queried by tracing each considered LOR and inspecting the points in all intersected grid cells and their neighbors (using a look-up table). To efficiently find the intersection between the LOR and the boundary of Ω, the boundary mesh was indexed using an axis-aligned bounding box (AABB) tree. In step 4 of the algorithm to compute a_in, the approximate nearest neighbor search [151] was used. The spatial search structure was initialized anew for each LOR using points from P_i. It was determined that the results of the approximate search almost always matched the results of the exact search (less than 1 error per 1000 queries), and the reconstructed images were not impacted significantly.
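A minimal Python sketch of the grid-based point lookup described above is given below. The actual implementation was written in C/Matlab; the cell-marching here (sampling positions along the LOR at sub-cell steps) is a simplification of proper grid traversal with a neighbor look-up table, and all names are illustrative.

```python
import numpy as np
from collections import defaultdict

class PointGrid:
    """Uniform grid ('spatial hash') over a 3D point cloud, used to
    collect the points that lie near a line of response (LOR)."""

    def __init__(self, points, cell_size):
        self.points = points
        self.h = cell_size
        self.cells = defaultdict(list)
        for n, idx in enumerate(np.floor(points / cell_size).astype(int)):
            self.cells[tuple(idx)].append(n)

    def points_near_lor(self, p0, p1, step_frac=0.5):
        """March along the segment p0 -> p1 and gather point indices from
        every visited cell and its 26 neighbors."""
        length = np.linalg.norm(p1 - p0)
        n_steps = max(2, int(length / (self.h * step_frac)))
        found = set()
        for s in np.linspace(0.0, 1.0, n_steps):
            c = np.floor((p0 + s * (p1 - p0)) / self.h).astype(int)
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        found.update(
                            self.cells.get((c[0]+dx, c[1]+dy, c[2]+dz), ()))
        return np.fromiter(found, dtype=int)
```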
For ease of code parallelization, the high-level management of the reconstruction threads, data input/output, and matrix storage were implemented in vectorized Matlab code. The Matlab parallelization toolbox was used to perform forward- and back-projection in parallel on 4 CPU cores (desktop-class Intel i7 CPU). Parallelization was performed by dividing the list-mode coincidence events in a subset among different processing threads. Close-point queries and voxel raytracing using the Siddon algorithm [152] were implemented in C code, which was wrapped into a Matlab function class and compiled into a mex-file that could be called directly from the Matlab environment. The algorithm to compute the VBF-based system matrix was first implemented in Matlab, converted to C code using Matlab Coder, and compiled into a mex-file. LR and the Monte-Carlo emission simulation were implemented in vectorized, parallelized Matlab code. The OPCODE library with a Matlab wrapper (http://www.codercorner.com/Opcode.htm) was used to compute line-mesh intersections.

The simulated emission data and phantom coordinates were stored as ASCII files. Custom scripts were implemented in Python to achieve procedural generation of objects and motion in Blender, and to import and export 3D unorganized point cloud data.

3.2.5 Measured Image Quality Metrics

In images reconstructed from noise-free projections, the reconstruction errors are quantified as the standard deviation of the reconstructed point and pixel values in a uniform activity region (Fig. 3.1C, section 2). The values are taken from a rectangular ROI (16×16 pixels) placed around the center of the phantom.

To compare the image quality and convergence rates between the VBF-, RecBF- and RBF-based reconstruction methods, the analysis approximately followed the NEMA-NU4 standard [145]. The mean, minimum, maximum and standard deviation of the pixel values are measured in the uniform activity region (section 2, ROI size 16×16×12 voxels). The recovery coefficients for the cylindrical sources (section 3) are obtained by computing the mean of 16 adjacent planes, and measuring the image value at the peak of each source. In the contrast recovery versus noise plots, contrast recovery is measured using the 2-mm cylindrical source, and noise is measured as the coefficient of variation of the pixel values in the uniform activity region. The focus was not on the absolute quantification accuracy of the reconstructed images, but rather on the image quality of the VBF method relative to the reference methods, and on the agreement between the images (given the identical sinogram data).

The quantitative accuracy of the deformable motion correction is assessed by comparing the image profiles through the motion-corrected, static and ground truth images of the bar phantom (Fig. 3.1D, sections 1, 2, 3). The displacement of the reconstructed source centers from the ground truth source locations is measured (section 3). The convergence of the motion-corrected images is evaluated by measuring the contrast recovery and noise with OSEM iterations.

3.3 Results

3.3.1 Characterization of Images Reconstructed from Noise-free Projections

The images of the digital NEMA phantom (section 2) reconstructed using VBF are shown in Fig. 3.4A. The images voxelized using the 512×512 and 1024×1024 grids demonstrate the piece-wise constant nature of the reconstructed image function. On the other hand, the images voxelized using the 128×128 grid (pixel size equal to the sinogram bin width) appear smooth.
The effect of using irregularly distributed points for image reconstruction is best seen near the edge of the phantom (shown in the insets): with N_LR = 0, the edge of the phantom contains an irregular pattern in the 128×128 images; the pattern is absent with N_LR = 15.

In the uniform activity region, low-magnitude reconstruction errors were observed that manifested as a high-frequency speckle pattern (Fig. 3.4A, 512×512 grid). The reconstructed values of points with small Voronoi cells (low α) tended to have a greater deviation from the ground truth: at most 2% in the shown images. The image profiles (Fig. 3.4B) demonstrate that the maximum deviation was reduced to below 1% in images voxelized using the 128×128 grid. The images reconstructed using RecBF had a similar error magnitude. The image profiles also demonstrate that a) on average, the reconstructed activity values were correct (equal to 1.0) and did not depend on the local point cloud compression or expansion, and b) the reconstruction errors diminished with higher N_LR.

The deviations from the ground truth are further quantified using the joint histogram of the reconstructed point activity values λ_j and expansion coefficients α_j (Fig. 3.5A). The histogram data represent 50 images reconstructed using independent point cloud realizations with N_LR = 2. The maximum observed deviation of a reconstructed point value from the ground truth was ~4%. The shape of the histogram demonstrates that the errors were distributed uniformly around the true value, without a negative or positive bias with respect to α. The reconstruction errors were greater for points with α < 1 (compressed regions) compared to points with α > 1 (expanded regions).

The standard deviation of the reconstructed values plotted against the MLEM iteration number is shown in Fig. 3.5B. The plots have minima at 40 iterations due to the initial convergence of the uniform activity region, and increase nearly linearly with further iterations; this behavior is consistent with the known property of the MLEM algorithm to produce overfit images at large numbers of iterations. The slope of the graph was greatest with N_LR = 0, and nearly identical between the point clouds with N_LR = 5 and N_LR = 15. At any MLEM iteration, the standard deviation on the pixel level (128×128 grid) was 3-4 times lower than the standard deviation on the point level.

3.3.2 Image Quality Comparison between VBF, RecBF and RBF Reconstruction

Images of the physical NEMA phantom (section 2) reconstructed using VBF, RBF and RecBF are shown in Fig. 3.6A. As expected, the VBF images were similar to the RecBF images; the RBF images were smoother relative to the other methods. Since the pixel basis functions represent a special case of Voronoi basis functions, the RecBF images were used as the reference to investigate the effect of the non-uniform point distribution on the reconstructed activity values. The difference images (Fig. 3.6B) demonstrate that deviations from the reference were generally greater near the edge of the phantom. The typical deviation from the reference was approximately 1.7% of the mean value, as shown by the profiles through the difference images. The profiles also demonstrate that the deviations from the reference diminished with greater N_LR.

The quantitative metrics of the reconstructed images are given in Table 3.1.
The differences between the VBF and the reference methods were marginal compared to the noise introduced by the statistical nature of the acquired data: the standard deviation of the reconstructed activity values was approximately 4% (3.7% with the RBF method). The effect of N_LR on the minimum, maximum and standard deviation of the activity values was negligible.

The contrast recovery versus noise trade-off is plotted in Fig. 3.6C. On the absolute scale, the observed differences between the methods were small. The curves that represent VBF with different N_LR were close to the RecBF curve. The maximum of the contrast recovery to noise ratio was obtained after ~25 iterations with VBF and RecBF (no dependence on N_LR), and after ~35 iterations with RBF. There appeared to be no consistent relationship between N_LR and the contrast versus noise trade-off. The observed variability in the contrast recovery values can be explained by the sensitivity of the reconstructed 2-mm source activity value to the local point distribution.

The metric values given in Table 3.1 and the plots in Fig. 3.6C were obtained using the same initial point cloud. To verify the stability of the reconstructed images with different point clouds, image reconstruction was performed using 50 random independent point cloud realizations (for N_LR = 0, 2, 5, 15). The distributions of the reconstructed voxel values (not shown) were nearly identical for point clouds with different N_LR, and consistent with the data in Table 3.1.

Table 3.1: Image metrics obtained from the reconstructed images of the physical NEMA phantom after 40 MLEM iterations.

Uniform activity ROI
Method            Mean    St. Dev.   Min     Max
RecBF             4.43    0.177      3.66    5.00
RBF               4.40    0.168      3.76    4.86
VBF, N_LR = 15    4.43    0.176      3.71    4.93
VBF, N_LR = 5     4.43    0.176      3.74    4.96
VBF, N_LR = 2     4.43    0.175      3.77    5.05
VBF, N_LR = 0     4.43    0.174      3.74    4.99

Source recovery coefficients (%)
Method            5 mm    4 mm    3 mm    2 mm    1 mm
RecBF             87.2    81.5    60.6    40.8    15.9
RBF               87.3    82.4    63.4    42.4    15.2
VBF, N_LR = 15    87.8    81.5    61.2    40.8    16.7
VBF, N_LR = 5     88.9    82.1    62.4    40.4    16.4
VBF, N_LR = 2     88.4    81.6    61.2    41.3    15.4
VBF, N_LR = 0     88.0    82.0    60.9    42.2    15.3

The variability of the maximum source activity values was at most 10% (N_LR = 0), and was generally greater for smaller diameter sources. This variability can again be attributed to the varying point density at the source locations; the variability was indeed lower with higher N_LR.

3.3.3 One-pass List-mode VBF Reconstruction with Deformation Correction

The images of the static bar phantom in the reference and deformed configurations reconstructed using VBF OSEM (30 iterations) are shown in Fig. 3.7A. Only the image of the phantom deformed to 180 degrees is shown and analyzed, since in this case the local compression/expansion was greatest. With the other deformation configurations (60 and 120 degrees), the images were qualitatively consistent with the images of the phantom used for the emission simulation. To validate the quantitative accuracy of the reconstructed activity values, the image of the deformed configuration was brought to the reference configuration by adjusting the reconstructed λ values (using Eq. 3.15 with t = t_0) and using the reference point coordinates for voxelization (Fig. 3.7B).
[Figure 3.4: A. Images of section 2 of the digital NEMA phantom reconstructed using VBF (40 MLEM iterations) and voxelized using grids of different sizes (128×128, 512×512, 1024×1024). The insets contain zoomed-in images of the edge of the phantom (note the different color scale). The dashed line indicates where the image profiles were measured. B. Profiles through the reconstructed images; dashed lines plot the standard deviation of the profile values.]

[Figure 3.5: A. Joint histogram of the reconstructed point activity values λ_j (after 40 MLEM iterations) and the corresponding expansion coefficients α_j. The data were taken from the points at the center of the digital NEMA phantom (section 2). B. Standard deviation of the reconstructed point activity values (percent of the mean) and post-voxelization pixel values (taken from the same ROI), plotted against the MLEM iteration number.]

[Figure 3.6: A. A single plane from section 2 of the physical NEMA phantom reconstructed using RecBF, RBF and VBF (voxelized using a 128×128 grid) with 40 MLEM iterations. The images on the right visualize the point clouds that were used in the VBF reconstruction; color indicates the local value of α. B. Difference between the RecBF and VBF images, with the profile location indicated by the dashed line. C. Contrast recovery and noise in the VBF, RBF and RecBF images plotted as functions of the MLEM iteration number.]

It was determined that with 9.0×10⁵ events per list-mode subset (27×10⁶ events in total, similar to the number of events that may be acquired in a small animal scan), and 7.2×10⁶ randomly sampled LORs per subset to compute the sensitivity image, the resulting images were too noisy for an accurate quantitative comparison to the ground truth (the image noise is quantified below). Therefore, a Gaussian filter with 1.0 mm (2 voxels) FWHM in each dimension was used to smooth the images. Smoothed and non-smoothed images of the different axial sections of the deformed phantom (in the reference configuration) and profiles through them are shown in Fig. 3.7C. The profiles demonstrate that a) the applied compression/expansion correction produced correct activity values, and b) the smoothed images were in general quantitative agreement with the ground truth. In the non-smoothed images, the source activity values deviated from the ground truth by as much as 20%. The analysis of the line source alignment revealed that with the 180-degree deformation, the maximum (mean) deviation from the reference peak locations among the 16 sources was 1.17 (0.52) pixels. The mean ROI values measured in the rectangular compartments (section 3) agreed with the ground truth.

Images of the phantom with continuous motion reconstructed with and without motion correction are shown in Fig. 3.8A. The motion-corrected images were reconstructed using Eq. 3.18 and voxelized in the reference configuration. Image profiles (not shown) were similar to those obtained with the static phantom, and agreed with the ground truth to the same extent as the images of the static phantom.
The line source recovery coefficients and the mean ROI values agreed with the ground truth within error (defined as the standard deviation of the voxel values in section 2 of the phantom).

The contrast recovery and standard deviation are quantified in Fig. 3.8B as functions of the OSEM iteration number. The graphs demonstrate that the measured noise was a function of the number of events per subset, as expected. The standard deviation after 30 OSEM iterations was 56% and 33% with 3×10⁵ and 9×10⁵ events per subset, respectively. This suggests that the observed noise originates from the statistical nature of the data, rather than from an inherent property of the VBF reconstruction. The graphs demonstrate that the line sources converged after approximately 30 iterations, and the uniform activity regions (not shown) converged after approximately 10 iterations. These convergence rates were similar to those of the static phantom images.

Overall, these results demonstrate that with simulated list-mode data, the proposed method yields quantitatively accurate images with correction for deformation that includes local compression/expansion. In realistic scans, post-reconstruction smoothing may be required if the number of acquired true coincidences, and the number of points in the point cloud, are similar to the numbers used in this study.

3.4 Discussion

PET image reconstruction and deformable motion correction techniques based on using unorganized point clouds for object representation were described and validated. The images of the small animal NEMA phantom reconstructed using the proposed VBF were in quantitative agreement with images reconstructed using RecBF and RBF. With random point clouds used for reconstruction, deviations from the reference (RecBF) were on the order of 1-2% in the uniform region, and at most 5% in the point source region (attributed to the variable local point density). The deviations were reduced with a more uniform point distribution. The contrast versus noise trade-off was similar to that of the RecBF reconstruction. Using multiple realizations of random point clouds demonstrated that the VBF images were stable with respect to the point distribution. Finally, using the digital bending phantom, it was demonstrated that the VBF can be employed with one-pass list-mode OSEM for image reconstruction with deformable motion correction that accounts for the local compression/expansion of the imaged object. The accuracy of the deformation-corrected images was validated, and the noise was quantified for local expansion coefficients ranging from ~0.75 to ~2.0, which approximately matches the magnitude of tissue compression and expansion in the lungs.

The high-frequency reconstruction errors observed mainly in the compressed regions were substantially lower than the noise introduced by the statistical nature of the coincidence data. Voxelization reduced the standard deviation of the errors from ~0.7% to ~0.18% of the ground-truth value. In the context of the typical variability encountered in pre-clinical studies (e.g. 8% in [153]), this can be considered acceptable for quantitative imaging. The reconstruction errors resulted mainly from using the line-integral projection model with piece-wise constant basis functions whose local support was significantly reduced in the compressed regions.
Point clouds used in practical scans can be generated with rarefied point sampling to account for the expected compression.

These results afford a high degree of confidence that the proposed method can be utilized for quantitative image reconstruction in scans that require correction for complex non-periodic motion. The following approach to imaging with motion correction can be envisioned: 1) acquire list-mode coincidence data synchronized with 3D motion tracking data, 2) define a point cloud to represent the imaged object in the reference configuration, 3) estimate the point cloud deformation from the motion data, and 4) reconstruct the image using list-mode OSEM with VBF. The motion and geometry data necessary to implement the proposed approach can be obtained by several techniques reviewed in Chapter 2: optical stereo imaging with feature tracking, imaging with depth sensors, or simultaneous MRI and PET imaging. The deformation of the point cloud can be estimated from the temporal evolution of the recovered surface using harmonic volumetric mapping [154].

[Figure 3.7: A. Reconstructed image of the static bar phantom deformed by 180 degrees (the average of 8 axial planes is shown), 6×10⁵ events per list-mode subset, and the point cloud used for the reconstruction (12.5% of the points are shown). B. Reconstructed images of the phantom voxelized with compensation for compression/expansion in the deformed and reference configurations. C. Single transaxial planes (original and smoothed) in the reconstructed images, with profiles indicated by the dashed lines.]

[Figure 3.8: A. Image of the bending bar phantom reconstructed without motion correction, and iterations of the image reconstructed with motion correction (the average of 8 axial planes is shown). Each list-mode subset contained 900,000 events. B. The measured contrast recovery coefficients and standard deviation of the voxel values in the uniform region plotted against the list-mode OSEM iteration number (with motion correction). The contrast recovery plots represent the average contrast recovery of the 16 line sources (section 1 of the bar phantom).]

Compared to mesh-based approaches, the use of basis functions defined by Voronoi cells eliminates the need to perform mesh quality control and re-meshing. On the other hand, with compressible deformations the point-wise expansion coefficients must be estimated; in this work this was done by direct computation of the Voronoi cell volumes, which required the determination of the Voronoi cell boundaries. In practical imaging with thousands of distinct point cloud configurations (time frames), fast computation of the approximate volumes (or volume changes) can be performed using one of the available mesh-free techniques (e.g. Monte-Carlo sampling or the sub-sampled nearest neighbor method).
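As an example of the Monte-Carlo route just mentioned, Voronoi cell volumes can be estimated without constructing cell boundaries by nearest-neighbor counting of random samples. The Python sketch below is illustrative (function and variable names are not from the thesis); near the object boundary the estimates absorb some background volume, which partially cancels when forming the α ratios of Eq. 3.16.

```python
import numpy as np
from scipy.spatial import cKDTree

def approx_voronoi_volumes(points, bbox_min, bbox_max,
                           n_samples=2_000_000, rng=None):
    """Estimate Voronoi cell volumes by Monte-Carlo sampling: draw random
    positions in a bounding box, assign each to its nearest point, and
    scale the per-point counts by the box volume."""
    rng = rng or np.random.default_rng()
    samples = rng.uniform(bbox_min, bbox_max, size=(n_samples, 3))
    _, owner = cKDTree(points).query(samples)
    counts = np.bincount(owner, minlength=len(points))
    box_volume = np.prod(np.asarray(bbox_max) - np.asarray(bbox_min))
    return counts * (box_volume / n_samples)

# Expansion coefficients (Eq. 3.16) for a deformed cloud relative to the
# reference configuration:
# alpha = approx_voronoi_volumes(pts_t, lo, hi) / \
#         approx_voronoi_volumes(pts_t0, lo, hi)
```

The relative error of each volume estimate scales as the inverse square root of the per-cell sample count, so n_samples trades accuracy against speed.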
The computation of the expansion coefficients is not required for deformations that can be treated as incompressible.

The most computationally intensive part of the implementation used in this work is the nearest neighbor search, for which the time requirement is linear with respect to the number of coincidence events. In addition, each discrete motion frame that corresponds to a unique point cloud configuration requires the computation of the α values and spatial hashing for raytracing and the nearest neighbor search. The amount of time required for these operations is linear with respect to the number of discrete point cloud configurations used in the reconstruction. In the proposed framework, it is possible to interpolate the point coordinates between the acquired motion frames (for example, in those cases where the motion acquisition frame rate is insufficient). The number of discrete point cloud configurations can then, in principle, be equal to the number of time stamps in the list-mode file. This could significantly increase the overall reconstruction time, and the choice of the optimal number of motion frames must be made with time considerations in mind.

Although corrections for scattered and random coincidences were not considered, the proposed VBF-based reconstruction is consistent with voxel-based reconstruction, and is based on the well-developed MLEM algorithm and its variants; thus, previously developed correction techniques are expected to be applicable. For example, the point µ-values can be voxelized on a low-resolution grid, and the traditional methods of scatter estimation (e.g. single scatter simulation) can be utilized to model the scatter contribution in the forward-projection step (OSEM-OP). The random coincidence contribution can be estimated from the detector singles rate or using the delayed coincidence window technique.

Chapter 4

Development and Use of a Digital Mouse Phantom for Motion Correction Validation

4.1 Introduction

The performance of the proposed image reconstruction method based on the VBF was assessed in Chapter 3 using simulated and acquired data from stationary phantoms. The correction for motion was validated using a digital bar phantom that underwent a relatively simple bending deformation. Such a relatively simple motion/deformation type is preferred when the task is to analyze the influence of motion correction on the reconstructed images in comparison with stationary phantoms. However, the method must also be validated using phantoms that undergo the more realistic and complex motion that is likely to be encountered in practical imaging scenarios. In particular, the development and overall assessment of the method would benefit from using a digital rodent phantom with a known motion profile that resembles the motion of an unrestrained awake animal. Such a phantom may be advantageous in several aspects: 1) the effect of the motion data quality (e.g. sampling frequency, noise) on the reconstructed images can be investigated, 2) the requirements for the motion tracking system can be estimated based on simulation studies, and 3) the accuracy of the reconstructed images can be assessed for different types of motion, imaging tracer, and other imaging parameters.

In this chapter, a new technique was developed to generate a digital phantom of an unrestrained mouse that incorporates realistic body geometry and motion derived from the observed motion of a live mouse.
The technique is based on representing the phantom as a volumetric point cloud, and on modifying the position and shape of the cloud using deformation modifiers typically used in computer animation, such as curve modifiers, cages and armatures. We recorded the motion pattern of an unrestrained live mouse using a depth-sensing camera. Two types of depth-sensing cameras were evaluated for this task; one operates based on the TOF measurement and the other on the analysis of a SL pattern. Kinematic parameters of the animal's motion were measured from the acquired images and reproduced in the phantom.

The phantom and the general technique to construct it can be used in a variety of nuclear imaging studies. Here the phantom was used to validate the VBF-based motion correction method proposed in Chapter 3. The phantom was voxelized and used in a Monte-Carlo PET emission simulation, and the simulated list-mode coincidence data were reconstructed with and without motion correction. The motion-corrected images were compared to the ground truth images and to the images of a stationary phantom.

The chapter is organized as follows. In Sections 4.2.2 and 4.2.3, the setup and method to perform optical imaging of an unrestrained rodent are described. In Section 4.2.4, the different aspects of the phantom construction are explained: the digital representation of the geometry, the technique to simulate motion, and the assigned activity and attenuation values. The voxelization of the phantom and the gamma emission simulation are described in Section 4.2.5. The comparison of the depth-sensing cameras is performed in Section 4.3.1. The results of the optical imaging experiment are presented in Sections 4.3.2 and 4.3.3, and the results of the motion-corrected image reconstruction are described in Section 4.3.4. The analysis and discussion of the results are presented in Section 4.4.

4.2 Materials and Methods

4.2.1 Method Overview

The diagram of the method to construct the phantom is shown in Fig. 4.1. First, a live mouse was confined to a transparent chamber and imaged using a combined optical and depth-sensing camera. Kinematic parameters of the mouse's motion were measured from the acquired sequences of depth images. Second, a whole-body digital phantom of a mouse in the reference configuration (undeformed) was generated as a volumetric point cloud bounded by a triangular surface mesh. The point cloud was rigged for animation. Third, the rigged point cloud was manually animated based on the measured motion parameters, and the motion parameters of the phantom and of the live mouse were compared. The procedure of manual animation was iterated until a good match between the motion parameters was achieved. The constructed phantom was then voxelized and used in a Monte-Carlo gamma emission simulation.

[Figure 4.1: Main steps of the method to construct the mouse phantom: 1) record and analyze the mouse motion (camera evaluation, then live mouse imaging); 2) generate the stationary digital phantom in the reference pose; 3) simulate the motion (manual motion modeling, then motion analysis; the motion parameters are compared and the step is repeated until an acceptable match is obtained); 4) voxelize the moving phantom; 5) run the emission simulation to generate coincidence data.]
The simulated coincidence data from the stationary and moving phantom were reconstructed with motion correction using the VBF, and without motion correction using the VBF, RecBF and RBF.

4.2.2 Optical Imaging System

The setup used for optical live mouse imaging is illustrated in Fig. 4.2A. A camera with color and depth sensors was positioned above a transparent acrylic chamber that rested on a flat aluminum surface. Prior to animal imaging, the resolution and noise of two consumer-grade depth-sensing cameras were evaluated using a stationary 3D-printed phantom of a mouse placed inside the chamber. A TOF camera (DS325, SoftKinetic, image size 320×240, nominal depth uncertainty < 1.4 cm at a distance of 1.0 m) and a SL camera (Carmine 1.09, PrimeSense, image size 640×480, nominal depth resolution 1.0 mm at a distance of 0.5 m) were tested. The cameras were connected one at a time to a desktop computer via the USB port, and acquired color RGB frames simultaneously with depth frames in the format z = f(i, j), where i and j are the pixel row and column coordinates, and z is the distance from the camera to the surface. To evaluate the depth image quality, the 3D-printed (UP Plus printer, Tiertime) phantom was positioned at the center of the FOV, and a sequence of 5 depth images was acquired. Image profiles and SNR were measured in the first image and in the average of the 5 images. The depth profiles were compared to the ground truth values obtained by ray-tracing the original mesh model used for 3D printing. The mesh model was derived from the Digimouse atlas [155], and the printing was done from an acrylonitrile butadiene styrene filament (single layer thickness 0.25 mm).

4.2.3 Live Mouse Imaging

Live mouse imaging was performed using the SL camera. The animal (healthy control, weight 22-29 g, strain BALB/c) was untrained prior to the experiment. The acquisition of color and depth images from the camera commenced one minute after the animal was placed inside the chamber. The acquisition rate was 25 frames per second, and the acquisition lasted for 60 seconds. After the acquisition, the depth images were cropped to the internal dimensions of the chamber, and smoothed using a uniform spatio-temporal filter (spatial size 5×5 pixels, temporal width 3 frames). In addition, regions of erroneous depth values (outside of the expected range) were replaced using inward interpolation.

The processed depth images were used to generate a sequence of 3D surfaces for the analysis of the recorded mouse motion. The (x, y, z) coordinates of three chosen points on the body of the mouse were measured in each frame. The points were located on the top of the animal's head, neck, and trunk. The coordinates were measured by manually marking the corresponding locations on the surfaces in every third frame (out of 1500). Motion parameters of the head were derived from the measured coordinates: the motion trajectory, the position and velocity as functions of time, the distributions of the instantaneous velocity and acceleration, and the frequency spectrum of the motion along the x (width) and z (height) dimensions. The coordinates of all three points were used to measure the angle θ in the x-y plane between the head and the trunk.
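The depth preprocessing described above can be sketched as follows in Python/scipy. The gap-filling step here is a simple stand-in for the inward interpolation used in the thesis, and all names are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def preprocess_depth(frames, z_min, z_max):
    """Smooth a stack of depth frames (shape: time x rows x cols) with a
    uniform spatio-temporal filter (3 frames x 5 x 5 pixels), after
    masking out-of-range depth values."""
    frames = frames.astype(float)
    bad = (frames < z_min) | (frames > z_max)
    frames[bad] = np.nan
    # Simple gap filling: replace invalid samples with the per-pixel
    # temporal median of the valid samples.  Pixels that are invalid in
    # every frame remain NaN and would need spatial in-painting.
    fill = np.nanmedian(frames, axis=0)
    frames = np.where(np.isnan(frames), fill, frames)
    # Uniform filter: width 3 in time, 5 x 5 in space.
    return uniform_filter(frames, size=(3, 5, 5), mode="nearest")
```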
4.2.4 Digital Phantom Generation, Rigging and Animation

The constructed whole-body phantom of an unrestrained mouse was represented as a volumetric point cloud with time-dependent point coordinates, a surface mesh defined on the outer points, and constant activity and attenuation values assigned to each point. The point cloud corresponding to the reference pose was derived from the anatomical label image of the Digimouse atlas (Fig. 4.2B). A mask of the body contour was generated from the label image, and the front limbs were removed from the mask (Fig. 4.2C) to simplify the animation rig used to generate motion. It was assumed that in realistic scans, the limbs would have a relatively low activity concentration and would not contribute significantly to the attenuation and scatter. A point cloud (grid) bounded by the body mask was generated, with 1.0 mm point spacing in each dimension. The dimensions of the point cloud were 31.8 mm (width) × 18.7 mm (height) × 87.0 mm (length), with the total number of points N_pts = 20403. The point activity, attenuation and anatomical labels were mapped from the Digimouse atlas. The X-ray attenuation values were re-scaled to the corresponding 511-keV attenuation values. A triangular surface mesh was defined on the outer points of the cloud, with the face normals pointing outward.

[Figure 4.2: A. The setup for optical imaging and the 3D-printed mouse phantom. The top chamber cover (not shown) had a thickness of 1.5 mm. B. Activity (FDG), X-ray CT and label image components of the Digimouse atlas. C. Visualization of the point cloud and surface mesh that correspond to the reference (undeformed) pose of the phantom.]

The key idea of the proposed method is to use techniques developed in the field of computer graphics and animation to manually animate the phantom, i.e. to add motion to the point cloud. These techniques are generally based on using hierarchically linked space transformation modifiers and computational heuristics to control the shape of the animated mesh object. Animation is achieved by making the parameters of the deformation modifiers time-dependent.

A hierarchically linked animation rig set up in Blender (www.blender.org) was used to animate the phantom. The time-dependent state of the rig (defined manually) controlled the position and deformation of the point cloud that was attached to the rig. The advantage of using Blender is that it provides an implementation of several of the deformation modifiers most commonly used in the computer animation industry, and allows the (time-dependent) state of the modifiers and geometry to be controlled either through the graphical user interface or through the Python command line interface. While Blender is designed to work with triangular meshes defined on point clouds, it was determined that the deformation modifiers implemented in the software could also be applied to volumetric unorganized point clouds.

To animate the phantom, an empty scene was created in Blender with a total duration of 60 seconds, discretized into 1500 motion frames. The generated point cloud of the reference pose was imported into the scene (Fig. 4.3A). The connectivity between the points was not defined upon import. The animation rig was set up to control the configuration of the point cloud (phantom pose) in a way that was consistent with the mouse anatomy and the expected motion profile. Figure 4.3B illustrates the type and the hierarchical order of the deformation modifiers that constituted the rig. Mathematically, the animation rig can be expressed as a sequence of operators applied to the point coordinates:

$$
x_n(t) = D(t)\,x_n(t_0), \quad D(t) = A(t)\,D(t_0), \quad A(t) = L(t)\,A(t_0) \qquad (4.1)
$$

where x_n(t_0) are the coordinates of the point n in the reference configuration, and x_n(t) are the coordinates at the time frame t = 1...1500.
The following notation is used for the deformation modifiers:

D(t): cage-based harmonic coordinate transform [156] that acted directly on the point coordinates;
A(t): a skeleton graph [157] (controls the cage) that conformed to the segmented line;
L(t): segmented line with time-dependent vertex coordinates.

Five control handles were rigidly attached to the vertices of the segmented line (L). The time-dependent 3D positions of the control handles were at the top of the kinematics chain, and were set by the user who animated the phantom. The line followed the control handles, and the skeleton graph (A), represented by the armature modifier, was constrained to slide along the line. The total length of the armature was 95 mm, split into 5 segments (lengths: 28.3, 14.7, 16.7, 16.5, 18.7 mm). The armature controlled the mesh of the cage modifier (D), which in turn controlled the deformation of the point cloud. The mesh of the cage (44 triangular faces and 24 vertices) was constructed manually around the point cloud in the reference configuration. The rigging factors between the cage mesh and the armature were computed automatically by the software.

Thus, the configuration (position, orientation, deformation) of the point cloud was completely determined by the 3D coordinates of the control handles (15 independent parameters in total). In addition, the exact behavior of each deformation modifier was defined by several intrinsic parameters that were set using the GUI. The description of these parameters is omitted here for the sake of compactness; most of the parameters were kept at their default values.

Phantom motion was generated by manually positioning the control handles in a subset of "key" frames (168 out of 1500 for handle 1) using the 3D and projection views of the scene. The control handles were positioned such that the overall pattern of the simulated motion resembled that of the observed mouse. Similarly to the live mouse motion tracking experiment, the simulated motion was confined to a virtual chamber with dimensions 128 mm × 128 mm × 48 mm. The control handle coordinates in the non-key frames were automatically computed by interpolation. Bezier interpolation was used to make the generated motion smooth.

To measure the parameters of the simulated motion, three points from the point cloud were chosen in the middle of the head, neck and trunk (as shown in Fig. 4.3B). The anatomical locations of the tracked points and the measured motion parameters were similar to those in the mouse tracking experiment. After the animation procedure was complete, the simulated and recorded motion parameters were compared. If the motion parameters and pattern turned out substantially different, the animation procedure was repeated (using a different set of key frames). The final animated phantom exported from Blender was represented as the time-dependent point cloud P(t) = {x_n(t), λ_n, µ_n, b_n}, where x_n(t) are the time-dependent coordinates of the point n, λ_n is the radiotracer activity value, µ_n is the attenuation value, and b_n is the anatomical label.

[Figure 4.3: A. The point cloud data flow. The (x, y, z) coordinates of the point cloud in the reference configuration were imported into Blender, where an animation rig was set up. After the animation procedure, the new time-dependent coordinates were exported from Blender as 1500 discrete coordinate sets, one per frame. B. Diagram of the employed animation rig, in hierarchical order. The points that were used to measure the motion parameters (including the angle θ between the head and the trunk) are shown in the right panel.]
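The exported representation P(t) = {x_n(t), λ_n, µ_n, b_n} maps naturally onto a small container type. The sketch below is illustrative only and assumes (hypothetically) one whitespace-separated ASCII coordinate file per motion frame; the actual file layout is not specified in the text.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class MousePhantom:
    """Time-dependent point cloud P(t) = {x_n(t), lambda_n, mu_n, b_n}."""
    coords: list           # coords[t] is an (N_pts, 3) array for frame t
    activity: np.ndarray   # lambda_n, constant per point
    mu: np.ndarray         # attenuation at 511 keV, constant per point
    labels: np.ndarray     # anatomical label b_n

def load_phantom(coord_files, activity, mu, labels):
    # One ASCII file of "x y z" rows per motion frame (assumed layout).
    coords = [np.loadtxt(f) for f in coord_files]
    return MousePhantom(coords, activity, mu, labels)
```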
4.2.5 Phantom Voxelization, Emission Simulation and Reconstruction

To generate the Monte-Carlo simulated emission data, the constructed phantom was converted to a temporal sequence of voxelized activity and attenuation coefficient (µ-value) images. The voxelization was performed on grids defined over the space occupied by the virtual motion chamber. The size of the voxelized activity (µ-value) images was 256×256×96 (64×64×24) voxels in the x, y and z dimensions, respectively. The point cloud activity, attenuation and label values were interpolated onto the grid using constant basis functions with support determined by the Voronoi cells of the points in the phantom, which corresponds to nearest neighbor interpolation.

The cage-based deformation modifier produced regions of local compression/expansion in the animated phantom. To quantify the magnitude of the compression/expansion, the point-wise expansion coefficient α_n(t) was computed using Eq. 3.16, where the index j was substituted for n, |ω_n(t)| is the volume of the Voronoi cell associated with the point n in the phantom at time t, and |ω_n(t_0)| is the corresponding volume in the reference frame denoted by t_0. The value α > 1 corresponds to expansion, and the value 0 < α < 1 corresponds to compression. To keep the total activity constant in the voxelization process, the point activity values were adjusted for compression/expansion using Eq. 3.15. The compression/expansion adjustment was not applied to the attenuation values.

The Monte-Carlo emission simulation was performed using the method described in Section 3.2.2. The simulated PET camera was set to have an axial length and diameter equal to 190.0 mm, and the center of the virtual chamber was placed at the center of the camera. Approximately 30 million simulated events were acquired over 60 seconds. Images of the phantom were reconstructed using the list-mode OSEM algorithm and two types of basis functions: RBF (used with the stationary phantom only), defined by Eq. 3.17, and VBF (used for the reconstruction with motion correction). The technique used to achieve the VBF-based motion correction is described in Section 3.2.4. The dynamic point cloud used for the reconstruction of the motion-corrected images was identical to the animated point cloud of the constructed phantom.

Implementation Details

The acquisition of the TOF and SL images was performed using software supplied by the vendors. The processing and analysis of the depth images was performed in Matlab using in-house developed functions and scripts. A Python function was written to import 3D unorganized point cloud data into Blender, and to export the animated point cloud as a sequence of ASCII files containing the point coordinates. Voxelization of the animated phantom was implemented in Matlab code that was parallelized using the parallelization toolbox. Code for image reconstruction was implemented as described in Section 3.2.4.
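The nearest-neighbor voxelization described in this section could be prototyped as follows (Python/scipy rather than the thesis' Matlab code; cubic voxels are assumed, the distance cutoff marks voxels far from any point as background, and all names are illustrative).

```python
import numpy as np
from scipy.spatial import cKDTree

def voxelize_nearest(points, values, grid_min, grid_shape, voxel_size, max_dist):
    """Map point-wise values onto a regular grid with nearest neighbor
    interpolation (constant basis functions on implicit Voronoi cells).
    Voxels farther than max_dist from any point are set to background (0)."""
    ax = [grid_min[d] + (np.arange(n) + 0.5) * voxel_size
          for d, n in enumerate(grid_shape)]
    gx, gy, gz = np.meshgrid(*ax, indexing="ij")
    centers = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])
    dist, owner = cKDTree(points).query(centers, distance_upper_bound=max_dist)
    out = np.zeros(len(centers))
    hit = np.isfinite(dist)          # owner == len(points) where no hit
    out[hit] = values[owner[hit]]
    return out.reshape(grid_shape)
```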
4.3 Results

4.3.1 Depth Camera Evaluation

The depth images of the 3D-printed mouse phantom acquired using the TOF and SL cameras are shown in Fig. 4.4A. The SL images had lower noise and higher accuracy of the recovered geometry compared to the TOF images. For example, the limbs of the phantom were resolved in the SL images but not in the TOF images. In the SL images, artefacts were observed near the edges of the phantom, and for some pixels the temporal sequence of values was inconsistent, switching intermittently between two or more different values that were outside of the expected range (removed in the figure). The TOF images substantially deviated from the ground truth in the neck region, likely due to artefacts associated with light reflection. Near the edges of the phantom, the measured depth values were negative, i.e. they were measured to be further away from the camera than the surface upon which the phantom was placed.

There were substantial differences in the pattern and magnitude of the acquisition noise between the two cameras. In the TOF images, the noise had an approximately Gaussian distribution with uniform variance throughout the FOV. The coefficient of variation of the depth values in a flat region was 9% without temporal smoothing and 6% with 5-frame averaging. In the SL images, the noise was substantially lower than in the TOF images (<1%) and had a discrete non-Gaussian distribution that depended on the topography of the imaged surface. The effect of temporal smoothing on the SL images was relatively small, as can be seen from the image profiles.

With both cameras, we observed depth image degradation when reflective surfaces were present in the FOV. The effect was more severe with the TOF camera, where using the top chamber cover resulted in the appearance of a "blind spot" in the center of the cover filled with over-saturated salt-and-pepper noise. In the SL images, the presence of reflective surfaces in the scene increased the number of artefacts near the edges; however, the surface of the chamber underneath the cover remained visible.

4.3.2 Live Mouse Imaging

Based on the results of the camera comparison, the SL camera was chosen for live rodent imaging. Representative color images and recovered surfaces from the mouse imaging experiment are shown in Fig. 4.4B. The head, ears, neck, and trunk of the animal were clearly distinguishable in the majority of the frames, and the tracked points on the head, neck and trunk could be accurately placed. However, due to the image noise and the lack of bottom or side views, not all poses manifested by the animal were reliably resolved. For example, during animal grooming or when the head was pointed straight down (~15% of the recorded time period), the depth images contained a single blob from which the body configuration could not be reliably determined, and the identification of the tracked points on the recovered surfaces became unreliable. It was estimated that the accuracy of the tracked point identification in these difficult frames was ±3 mm in the x and y dimensions, and ±10 mm in the z dimension. The temporal acquisition rate of 25 frames per second was sufficient to sample the motion with adequately high temporal resolution.

After the animal was placed inside the chamber, it preferred to stay in the corners and to move from one corner to another along the chamber walls. Periods of relatively high activity/motion were interspersed with periods of quiescence, when the head and the trunk remained relatively motion-free. The plotted parameters of the head motion (Fig. 4.5A) demonstrate that there were periods when the animal moved relatively slowly. The total duration of such periods (radial velocity less than 20 mm/s) was ~20 seconds. The animal tended to move slower and spent more time at rest as it became more accustomed to the chamber.
The trajectory length, the mean and median velocity, and the mean and median acceleration of the head motion are given in the figure. During the phantom animation procedure, an effort was made to replicate the values of these kinematic parameters in the simulated motion.

4.3.3 Analysis of Simulated Motion

The pattern of the simulated motion was similar to that of the observed motion, as revealed by the graphs of trajectory, position and velocity versus time (Fig. 4.5B). The values of the kinematic parameters for the simulated motion are given in the figure. The periodic pattern of fast and slow motion was less pronounced in the animated phantom compared to the observed mouse, and the acceleration histogram was more skewed towards the lower values with the simulated motion (as reflected in the median acceleration value). This can likely be explained by the observation that during the animation procedure, it was more difficult to reproduce the second derivative of the motion than the first derivative.

[Figure 4.4: A. Depth images of the 3D-printed mouse phantom acquired using the TOF and SL cameras with the chamber cover removed. The profiles show the depth values along the dashed lines. B. Examples of color images and recovered 3D surfaces from the live mouse imaging experiment. Points on the head, neck and trunk indicated by the markers were manually identified (placed) in the acquired frames.]

The simulated motion characterized by the graphs in Fig. 4.5 was obtained after 4 iterations of the animation procedure (Fig. 4.1). The first iterations produced motion that was either unrealistic or that did not match the average velocity and acceleration of the observed motion. The main difficulty encountered was maintaining the correct motion pacing, which was governed by the selection of the key frames and control handle positions. In addition, the motion of the animated point cloud after motion interpolation by the software had to be inspected on a frame-by-frame basis to detect and fix any geometry overlaps. Modeling of the complex poses exhibited by the mouse, such as grooming, or when the body was axially collected with the head pointing down, was found to be difficult with the employed animation rig. Thus, the range of body poses that were replicated in the phantom represented a subset of those observed in the live animal.

[Figure 4.5: A. Motion parameters of the observed motion. B. Motion parameters of the simulated motion.]

4.3.4 Voxelization and Reconstruction

The renderings of the phantom surface and the maximum intensity projections of the activity and attenuation in Fig. 4.6A demonstrate the typical poses and positions inside the virtual chamber that were modeled using the employed animation rig. The map of the expansion coefficient in Fig. 4.6B demonstrates the typical internal deformation distribution caused by the bending of the phantom. The graph of the head-trunk angle θ versus time compares the observed and the simulated bending of the body. The corresponding plot of the local expansion coefficient versus time demonstrates that the compression/expansion was most pronounced in the lateral sides of the abdomen and thorax, and less pronounced near the medial axis. The range of the expansion coefficient near the sides covered the range typically observed in the lungs [147, 148].

Images of the stationary phantom reconstructed from the simulated coincidence data using RBF and VBF are shown in Fig. 4.7A. The reconstructed images converged after approximately 30 list-mode iterations.
The image reconstructed using RBF was less noisy than the VBF images, as expected from the smoother shape of the blob functions. Images of the moving phantom reconstructed with and without motion correction are shown in Fig. 4.7B. The image reconstructed without motion correction reveals the "averaged" distribution of the activity over the simulated 60-second time interval. The activity image reconstructed with motion correction using the ground truth motion data agreed within noise with the reconstructed images of the stationary phantom.

4.4 Discussion

4.4.1 Rodent Motion Tracking and Phantom Construction

The results demonstrate that the constructed phantom can be used to simulate PET coincidence data affected by the motion of an unrestrained mouse. While other phantoms and atlases focus on accurately modeling the anatomy and respiratory or cardiac motions, the distinguishing feature of this phantom is that it models the motion of a freely moving mouse, with motion parameters that approximately match those of the observed live animal. Using the proposed phantom construction technique, it was possible to reproduce the most common animal poses and body deformations observed experimentally. Although it required some degree of training, the described animation procedure yielded simulated motion with kinematic parameters and deformations similar to those of the observed mouse motion.

A more general conclusion that can be drawn from this part of the study, as well as from the phantom constructed in Chapter 3, is that skeletal- and harmonic-coordinate-based animation techniques can be used to generate a variety of phantoms for nuclear imaging studies with different motion types, including articulated phantoms of unrestrained animals. The implementation of these techniques in user-oriented software such as Blender makes them widely available, freely accessible and relatively easy to use, and does not require knowledge of the underlying mathematical algorithms. It is believed that the proposed method can be used to simulate motion of practically any complexity, given enough effort and time. Animation rigs with more sophisticated hierarchies than the one explored in this work can be developed to improve the realism of the simulated motion. The deformation modifier hierarchy can be exploited, for example, to simultaneously model cardiac, respiratory, and full-body motion, or to place deformation constraints on rigid body parts.

[Figure 4.6: A. Renderings of the phantom surface inside the virtual chamber, and maximum intensity projections of the corresponding activity and attenuation images for 3 representative motion frames. B. Map of the expansion coefficient α (single z-plane), and plots of θ and the local values of α against the frame number.]

[Figure 4.7: A. Ground truth image of the phantom in the reference configuration, and images of the stationary phantom reconstructed using blob and Voronoi basis functions (15 and 30 iterations). The top and bottom images represent single planes in the x-y and x-z dimensions, respectively. The location of the x-z plane is indicated by the dashed line. B. Image of the moving phantom reconstructed without motion correction (a single x-y plane is shown) and with motion correction.]
A limitation of the proposed phantom construction method is that it is based on using deformation modifiers that are not physics-driven. As a consequence, the distribution of the tissue compression/expansion shown in Fig. 4.6B is to some degree unrealistic, especially in the body parts that are supposed to be rigid (e.g. the head). The phantom also did not completely preserve volume. It can be assumed that the change of the body volume in a live animal would occur primarily due to breathing or air compression in the digestive tract organs (abdominal region). The attenuation coefficients in the animal's body are expected to be constant and independent of volume changes and deformations; this was adopted in the phantom, even though the phantom volume variability (6.5%) was likely greater than what could be expected in a live animal. In this respect, motion modeling based on finite element analysis would be superior, as it would enable a realistic simulation of the internal deformations that takes into account the distribution of compressible and incompressible tissues. Despite these limiting factors, the developed model is expected to be appropriate for a number of tasks associated with the development of practical awake rodent imaging, as elaborated in the next section.

The manual animation and motion tracking procedures may also be considered limitations of the method. If the modeled phantom geometry and motion are complex, the animation procedure may require several repetitions, as well as prior user training; the manual placement of the pose-tracking points in the acquired images is subject to operator-induced variability. Care must be taken to avoid creating non-physical configurations of the geometry (e.g. mesh elements that overlap) during the animation procedure. A better approach would be to transfer the observed trajectory directly to the phantom, and to leave only minor motion aspects up to manual intervention (e.g. deformation quality control). This approach was not explored here, since a method for automatic motion estimation from the awake mouse images may not be trivial and would require a separate investigation.

Based on the optical imaging results, an assessment can be made of the potential use of depth-sensing cameras for rodent motion tracking. Although such cameras were previously used to estimate rigid motion (translation and rotation) in nuclear imaging [85], with rodents a more difficult aim can be posed: measuring the complete (external) body geometry and position in the FOV. The optical imaging experiments have shown that consumer-grade SL cameras may be better suited for rodent motion tracking than TOF cameras, although this may depend on the particular camera models chosen. Both technologies are being actively developed, and future advancements may invalidate this assessment. The SL camera used in this work had sufficient depth image quality to measure the kinematic motion parameters and to analyze typical animal poses. However, the data acquired from a single camera were insufficient to accurately recover the body geometry and to use it in image reconstruction with motion correction.
It is expected that better results can be achieved with a) multiple cameras acquiring depth images from different directions [86], b) rats rather than mice, due to their larger body size, and c) bald animal strains, to avoid the influence of hair.

An observation relevant to the development of full-body motion tracking techniques for rodents is that the body of the mouse remained relatively still for time periods that summed to approximately 43% of the image acquisition time. The mouse moved less as it became familiar with the chamber. Thus it can be expected that during the periods of relative quiescence, motion tracking at relatively low frame rates (e.g. using MRI) may be sufficient to obtain motion-corrected images of relatively high accuracy.

4.4.2 Strategies for Practical Unrestrained Rodent Imaging

Based on the successful validation of the proposed VBF-based image reconstruction and motion correction methods, the following approach to quantitative unrestrained rodent imaging is envisioned:

1. acquire list-mode coincidence data simultaneously with motion tracking data;
2. define a point cloud to represent the animal in reference configuration;
3. estimate the point cloud motion and deformation from the motion data;
4. reconstruct a motion-corrected image of the animal using the VBF-based list-mode OSEM.

The requirements on the accuracy of the motion data can be relaxed depending on the aims of imaging. The ability to image unanesthetized, unrestrained rodents will be of value primarily in neurology studies. The tracers used in neuroimaging have their primary targets in the brain, with minimal binding in the rest of the body. Therefore, the distribution of activity in the trunk is of secondary importance; this implies that approximate estimates of internal body deformations can be used in image reconstruction. Limbs are likely to contribute minimally to activity and attenuation, and can be ignored without a significant impact on the image accuracy in the brain.

Motion and geometry data necessary to implement the proposed approach can be obtained from multiple depth cameras or simultaneous MRI and PET imaging. A solution to estimate the complete rodent motion (including internal deformations) from such data may be based on constructing a template of a rodent's body (represented by a triangular mesh), and registering it to the motion tracking data in every frame. Rigidity constraints can be applied to the head of the template. The temporal sequence of registered templates would provide an estimate of the external body deformations. The internal deformations can be estimated by constructing an unorganized point cloud inside the template (in one of the frames), and assigning the template's mesh to act as the cage of the harmonic volumetric mapping applied to the point cloud [154]. Theoretically, this processing sequence would produce a dynamic unorganized point cloud that approximately matches the motion of the rodent inside the FOV; the point cloud can then be used to reconstruct motion-corrected images.

The development of a method to robustly register the template to the motion data requires additional effort. It can be speculated that skeletal priors, hierarchical surface fitting and statistical pose priors have the potential to improve the robustness of registration. Even if the pose of the animal cannot be reliably estimated in all motion frames, statistical image reconstruction affords some degree of robustness to the missing data. The coincidence data for time periods with poor motion estimates can be removed from consideration, with appropriate weighting of the image update factors in Eq. 3.18 applied to preserve image quantification.
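To make the dynamic point cloud representation concrete, the sketch below (Python/NumPy) applies one motion frame — a rigid head pose plus residual deformable offsets — to an N × 3 reference point cloud. The function and variable names are illustrative assumptions, not the thesis implementation, which operates within the list-mode OSEM framework of Chapter 3.

```python
import numpy as np

def apply_motion_frame(points, rigid, displacement):
    """Move an (N, 3) reference point cloud into one motion frame.

    points       : (N, 3) reference coordinates
    rigid        : (4, 4) homogeneous rigid-body transform (e.g. head pose)
    displacement : (N, 3) residual deformable offsets for this frame
    """
    homog = np.hstack([points, np.ones((len(points), 1))])
    moved = (homog @ rigid.T)[:, :3]   # rigid component
    return moved + displacement       # deformable component

# The dynamic point cloud is then simply the per-frame coordinate sequence:
# dynamic_cloud = [apply_motion_frame(ref, R[t], D[t]) for t in range(n_frames)]
```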
Image analysis that relies on the traditional KM methods may not be applicable when imaging awake rodents. With anesthetized animals, PET scans can be initiated at the time of activity injection, and the measurement of the input function (and of the activity concentration in the target region) can commence almost immediately after the injection. With awake animals, intravenous tracer administration may be difficult, and there is likely to be a time delay between the tracer administration and the beginning of the scan, during which the input function cannot be measured. Additionally, intravenous administration may induce stress in awake animals and affect the neurochemical processes in the brain.

Instead of using KM, the physiological parameter probed by the tracer can be assessed using the ARs or the standard uptake values. These metrics can be computed from images acquired with a substantial time delay post-injection. If the scanner resolution is sufficiently high, the physiological parameters of interest may also be assessed using shape- and texture-based analysis. This type of analysis applied to high-resolution clinical images is explored in chapters 6, 7 and 8.
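For concreteness, the sketch below (Python/NumPy) shows how the AR and the standard uptake value could be computed from a single static image. The variable names, units, and the unit-density assumption are illustrative assumptions rather than part of the study protocol.

```python
import numpy as np

def activity_ratio(target_voxels, reference_voxels):
    """Target-to-reference activity ratio (AR) from static-image ROIs."""
    return target_voxels.mean() / reference_voxels.mean()

def suv(target_voxels, injected_dose_kBq, body_weight_g):
    """Standard uptake value: mean ROI concentration (kBq/mL) divided by
    injected dose per unit body weight (assumes ~1 g/mL tissue density)."""
    return target_voxels.mean() / (injected_dose_kBq / body_weight_g)
```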
Chapter 5

Spatial Image Analysis in Brain PET Imaging

5.1 Brain PET Imaging in Parkinson's Disease Studies

PD is a progressive neurodegenerative disorder for which there is currently no cure. It is the second most common neurodegenerative disorder after Alzheimer's disease [158]. While the relevance and adoption of imaging in clinical practice is expected to increase as treatments become available, to this day the in-vivo PET imaging of PD is predominantly performed in a research setting. In part this is because PD predominantly occurs at a later age, and can often be reliably diagnosed from the clinical symptoms, obviating the need for imaging to perform diagnosis. The clinical onset of PD is typically characterized by tremors, bradykinesia, and muscle rigidity. Posture, gait, balance and speech become impaired, and non-motor cognitive functions may become affected in the later stages of the disease. In the brain, PD is associated with the loss of pre-synaptic dopaminergic neurons in the substantia nigra and dopamine deficiency in the striatum, with minimal or no anatomical atrophy [159]. The lack of dopamine upsets the dopamine/acetylcholine balance and leads to impaired motor control.

The diagnostic value of PET imaging in studies of PD is that it allows in-vivo assessment of the state of various neurological subsystems implicated in PD pathogenesis. Several PET tracers have been developed that enable molecular imaging of pre-synaptic and post-synaptic targets. Examples of such tracers, previously included in Table 1.2, are described below:

• [11C]-DTBZ is a pre-synaptic tracer that binds to the vesicular monoamine transporter type 2, an integral membrane protein that transports dopamine and other neurotransmitters from the cytosol to the synaptic vesicles. DTBZ binding is therefore strongly indicative of the dopaminergic function: the BPND of DTBZ is significantly reduced in PD patients compared to healthy controls (Fig. 5.1).

• [11C]-RAC is a post-synaptic tracer that acts as an antagonist of the D2 dopamine receptors; it also binds less selectively to the D1, D3 and D4 receptors. The D2 receptor density is relatively well preserved in PD, compared to the endogenous dopamine production (Fig. 5.1). One of the applications of imaging with RAC is the assessment of dopamine release: two scans are performed sequentially with a stimulus between the scans. If the stimulus elicits dopamine release, the number of available (unoccupied) D2 receptors becomes smaller, and the RAC binding in the second scan becomes lower.

• [18F]-FD is a pre-synaptic tracer that is similar in chemical structure to levodopa and resembles its biological distribution. Levodopa is a naturally synthesized precursor to neurotransmitters such as dopamine and adrenaline. FD is obtained by adding a radioactive fluorine atom to levodopa, and similarly to DTBZ it can be used to assess the functional state of the nigrostriatal dopaminergic pathway. FD binding is reduced in PD compared to healthy controls.

• [11C]-MP is a radioactive analog of methylphenidate, a synthetic compound that acts as a stimulant in the central nervous system. Methylphenidate binds to and blocks the dopamine transporter, a pre-synaptic membrane protein that pumps dopamine from the synaptic cleft to the intracellular space. Imaging with MP is used to assess the dopamine reuptake function — it is reduced in subjects with PD.

These tracers target the dopaminergic system, which is known to be most profoundly affected by PD. In addition, there are tracers that target various receptors and proteins of the serotonergic and cholinergic systems that may be implicated in PD [160, 161], and new tracers to assess the integrity of various neurological functions are continuously being developed. A review of PET tracers utilized in PD research can be found in [162].

There are three aspects of PD development and progression that are particularly relevant to imaging-based studies of the disease, as they frequently determine the methodology employed for image analysis. The first aspect is that the loss of the dopaminergic function occurs in a specific pattern, along the rostro-caudal axis of the striatum. The pattern is shown in Fig. 5.1, where the distribution of DTBZ BPND in the striatum is shown for PD subjects and healthy controls. While the entire striatum is affected, the most substantial loss of the dopaminergic function occurs in the posterior putamen. The neurodegeneration progresses along the postero-anterior axis.

Figure 5.1: Single transverse slices and 3D visualization of the MRI and BPND images of a healthy control subject (left column), a PD subject in the year of diagnosis (middle column), and a PD subject 10 years after diagnosis (right column). The 3D visualizations show the BPND distributions of DTBZ (red colormap) and RAC (yellow colormap) on the left side of the striatum, with MRI-defined outlines of the striatal shape.

The second aspect is that the PD-induced functional changes in the brain begin years prior to the first clinical symptoms. This is evident from the comparison of the DTBZ images in Fig. 5.1 between a healthy subject and a PD subject who was imaged in the year of diagnosis. Typically, at the time of PD diagnosis the mean DTBZ BPND in the less affected side of the putamen is reduced by 50% from the healthy state.
Combined longitudinal and cross-sectional imaging studies suggest [163] that the first PD-related functional changes in the brain begin to occur 17 years prior to the clinical symptoms according to the observed changes in DTBZ binding, 13 years according to changes in MP binding, and 6 years according to changes in FD uptake. The spatio-temporal pattern of the physiological changes in the pre-symptomatic disease is not well understood due to the lack of pre-symptomatic imaging data.

The third aspect is that PD is an asymmetric disease, i.e. as the disease progresses, the left and right sides of the body are affected to different degrees. The first clinical symptoms begin on one side, and remain worse on that side until later in the disease; in the advanced disease, both sides of the body are affected to approximately the same extent. The asymmetry of the clinical symptoms is reflected in the brain: if the motor performance is worse on the right side, the left side of the brain is affected to a greater extent, and vice versa (e.g. DTBZ binding is asymmetrically affected, as shown in Fig. 5.1 for a 0-year PD subject). Typically, in PET imaging studies of PD either the less affected (better) side or the more affected (worse) side of the brain is used for the image analysis. The choice of the side may affect the relationships and correlations between the image-derived metrics and clinical metrics of disease severity.

The standard metric employed for image analysis is the mean BPND or the standard uptake value of the tracer in a given ROI. The ROIs can be defined either according to the anatomy, if an anatomical reference is available, or placed manually over the structure of interest. The advantage of using the mean BPND is that it allows one to relate the imaging outcome to the underlying neurobiology, and to interpret the images in terms of biologic parameters. The downside of using BPND in image analysis is that it requires knowledge of the time course of the tracer distribution in the selected ROI. Thus, dynamic scanning must be performed and either a plasma- or tissue-derived input function must be acquired, which leads to significantly increased scan durations, introduces additional sources of error, and reduces patient comfort. To obtain the image-derived input function, a reference region devoid of specific tracer binding is required, and such a region may not exist or may not be easily identifiable for different tracers and disorders. Additionally, the mean operator does not exploit the full diagnostic potential of imaging, since it does not capture the spatial pattern of the tracer distribution, which can carry information relevant to the disease phenotype.

Depending on the aims of research and imaging, an easily interpretable relationship between the image-derived metrics and biologic parameters may not be required, and using the mean operator may not be adequate. For example, the aim may be to detect early signs of PD, in order to initiate therapeutic intervention as early as possible (years before clinical symptoms). In this case, an image metric should be used that proves to be most accurate for that particular task; the biological interpretation of the metric value becomes of secondary importance. A similar argument can be made for the task of tracking the disease progression and predicting the clinical outcome based on imaging.
Quantifying the spatial distribution of the tracer uptake in addition to the mean value is expected to be relevant here, since various neurological functions are known to be affected in distinct spatial patterns: the early disease onset may indeed be reflected in a change of the binding pattern and not in the mean value.

Therefore, it is of interest to explore new image-derived metrics that correlate with disease, convey useful information regarding the relevant physiological processes, and can be obtained from relatively simple scanning procedures. The development and validation of methods that quantify the spatial pattern of PET tracer distribution in neuroimaging has been somewhat lagging, specifically for tracers that have a rather localized distribution pattern (as opposed to being distributed over the entire brain as, for example, in the case of imaging with FDG). The usefulness of geometry, shape and texture-based metrics in neurodegeneration has not been thoroughly explored.

5.2 Previous Methods of Spatial Image Analysis

Literature on the application of texture and shape-based spatial analysis of PET images is limited. Metrics that quantify image histogram and texture have already been used in cancer-related PET imaging [164, 165], and analyses of co-variance patterns have been proposed in neuroimaging for tracers with diffuse brain distribution, such as 18F-fluorodeoxyglucose (18F-FDG) and 11C-PIB [166, 167].

Texture-based metrics such as the Haralick features (HF) computed from the gray level co-occurrence matrix (GLCM) have previously been found to contribute to automatic detection of disease [168]. The literature on the application of HF in the analysis of emission images has been primarily focused on oncology [169, 170], and proper translation of these features to neurological analysis requires careful attention. For instance, in oncology the GLCM is typically computed along 2 or 3 orthogonal directions and averaged [164, 171]. This method is consistent with the fact that tumor growth typically does not have a preferred direction, or such direction is unknown a priori. On the other hand, in PD a clear directionality (gradient) in binding is present due to the specific (and known) pattern of the dopaminergic function loss (Fig. 5.1) [172].

The descriptive strength of image metrics may depend on the particular ROI definition method; this was indeed observed in oncology [171]. The selection of the most appropriate ROI over which to evaluate a specific metric may be ambiguous in the presence of neurodegeneration: while specific aspects of the neurochemical function may be impaired, other functional or structural aspects may be preserved, leading to large differences in region identification ability between different modalities and tracers. Typically, the ROI choice is dictated by the available data, the primary aim of the investigation, and the overall robustness of the method [173, 174]. The definition of an ROI on a PET image is often ambiguous due to functional atrophy or fuzzy boundaries of the regions of preferential tracer uptake/binding. In the case when more robust anatomically-defined regions are required, they can be delineated using the MRI image of the subject. Other approaches include using atlas-derived ROIs [175–177], or a simpler definition of ROIs using geometric primitives placed over the brain structure of interest [62, 178].

There is a frequent dilemma on whether to utilize ROIs defined by function or by anatomy.
Both approaches can be valid, but they may address different questions and lead to different information. In recent years, several studies have shown that inter-modality ROIs, which define image space regions based on the combined information from two or more different modalities or tracers, seem to be optimal for a number of context-specific imaging tasks. For example, El Naqa et al. [179] demonstrated that image segmentation based on fused PET/CT data was closest to manual segmentation, compared to using either of the modalities alone; Chowdhury et al. [180] employed concurrent MRI/CT segmentation for improved localization of metastatic tumours; Han et al. [181] showed that combined PET-CT segmentations are more accurate than single-modality segmentations; Bagci et al. [182] provide a good review of previous work in inter-modality segmentation, and report that inter-modality PET-MRI ROIs were closest to manual segmentations (used as the gold standard), compared to single-modality ROIs. Since it is recognized that the diagnostic value of image metrics depends on the region over which they are calculated, it is of interest to investigate how much the correlation between the image-derived and clinical metrics changes as a function of the shape of the inter-modality ROI.

5.3 Aims and Structure of the Study

Based on the observation that neurodegenerative disease usually affects tracer binding in a distinct spatio-temporal pattern, the aim of this work was to test the hypothesis that shape and texture metrics evaluated for the regions of high tracer uptake would correlate well with the clinical disease progression. The investigated image metrics were considered as possible alternatives or additions to those derived from KM. The study was performed using DTBZ and RAC images of PD subjects and healthy controls, which were co-registered with MR images. The investigated image metrics were computed from the PET images within striatal ROIs defined on both sides of the brain. The correlation between the metric values and clinical measures of disease severity — duration and motor performance — was analyzed.

The correlation with disease may depend on how the region over which the metric is calculated is defined. Therefore, a thorough investigation should include an estimate of the variability of metric performance with respect to the ROI shape. To this end, the analysis was carried out for a range of possible ROI definitions derived from the MRI and PET images. Multiple ROIs were utilized to investigate the metrics more systematically and to test the generality of the analysis outcome beyond a specific method of ROI definition.

The subject data and images were taken from an ongoing clinical PD imaging study at the University of British Columbia. Due to the ongoing nature of the study, the number of study subjects was different in different parts of this work. Additional subjects were added to the analysis as their images became available. Additionally, as the study progressed, methodological aspects of the analysis, such as the ROI definition method or the employed statistical significance tests, were refined. The analysis methodology and the number of subjects used are specified in each chapter.

The entirety of the investigation is chronologically and thematically split into three chapters. In Chapter 6, the focus was on quantifying the geometrical properties of regions of high tracer concentration; these results were obtained first.
Based on the outcome of Chapter 6, in Chapter 7 the texture of the activity distribution was analyzed (additional subjects were added). In Chapter 8 the focus is on the modeling-based analysis and interpretation of the results obtained in Chapter 7.

5.4 Explored Image Metrics

The choice of the investigated metrics was based on the following three criteria: 1) the metric was not previously explored in brain PET, or was explored in a related area (e.g. oncology imaging) and found to have diagnostic value, 2) the metric was suitable for the analysis of localized tracer distributions, and 3) the metric seemed appropriate for the characterization of the rostro-caudal pattern of dopaminergic function loss in PD. The chosen metrics were separated into 4 distinct groups according to the image property that they quantify: value metrics, shape metrics, moment invariants (MIs), and HF. The abstract mathematical definitions of the metrics are provided below. The images from which the metrics were computed are specified in Chapters 6 and 7.

5.4.1 Value Metrics

This group contains metrics that capture the statistical properties of voxel intensities inside the ROI. These metrics were computed from parametric BPND images, activity concentration images, or AR images, and the voxel values in the images of different subjects were not binned or normalized to a fixed range.

• The mean voxel intensity within the ROI;
• The standard deviation (STD) of voxel intensities;
• The index of dispersion (IOD) of voxel intensities, defined as the ratio of the variance to the mean.

5.4.2 Shape Metrics

Metrics in the shape group quantify the geometrical properties of a region, either using absolute measures, or with respect to a reference region. Let R and R_REF denote the analyzed and reference regions, respectively. For example, R may represent the functionally active region in the putamen, and R_REF may represent the putamen as defined by the MRI. (A code sketch of representative metrics from this and the previous group follows the list.)

• Volume (VOL) and surface area (SAR) of R. These metrics were expected to capture the reduction of the functionally active regions with time.

• Relative volume difference (RVD) and volumetric overlap error (VOE):

RVD = \frac{|R| - |R_{REF}|}{|R_{REF}|}   (5.1)

VOE = 1 - \frac{|R \cap R_{REF}|}{|R \cup R_{REF}|}   (5.2)

where |·| denotes the number of voxels in a region. RVD is a measure of relative region size, and VOE measures the degree of spatial alignment between two regions. With R set to represent the functionally active (DTBZ) region, and R_REF set to represent the MRI-based anatomic reference region, these metrics were expected to diminish with PD duration.

• Distance between the centers-of-mass of R and R_REF (RCM), normalized to the anteroposterior length of R_REF. With disease progression, the center of the functionally active region in PD subjects shifts towards the anterior striatum. With R and R_REF set to represent the functionally active and anatomic reference regions, respectively, RCM was expected to increase with PD duration.

• Eccentricity of the ellipsoid fitted to R (ECM), which in PD subjects was expected to become lower with time.

• Region compactness (CMP), defined as the inverse ratio of the SAR of R to the SAR of a sphere of the same volume.

• Region extent (EXT), defined as the ratio of the VOL of R to the VOL of the tight bounding box that contains R. With disease progression, the shapes of the functionally active regions were expected to become more regular (blob-like). The metrics CMP and EXT were expected to reflect this aspect.

• Mean region breadth (MBR), which measured the mean width of R along 13 spatial axes. MBR was expected to diminish with disease progression.
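The sketch below (Python/NumPy; illustrative only, not the Matlab implementation used in this work) shows how the IOD and the region-size metrics of Eqs. 5.1–5.2 reduce to a few lines of array code, with the ROIs represented as boolean voxel masks:

```python
import numpy as np

def iod(voxel_values):
    """Index of dispersion: variance-to-mean ratio of ROI voxel intensities."""
    return voxel_values.var() / voxel_values.mean()

def rvd(roi, roi_ref):
    """Relative volume difference, Eq. 5.1 (roi, roi_ref: boolean 3D masks)."""
    return (roi.sum() - roi_ref.sum()) / roi_ref.sum()

def voe(roi, roi_ref):
    """Volumetric overlap error, Eq. 5.2."""
    intersection = np.logical_and(roi, roi_ref).sum()
    union = np.logical_or(roi, roi_ref).sum()
    return 1.0 - intersection / union
```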
5.4.3 Moment Invariants

MIs are calculated as specific combinations of image moments that provide invariance to scaling, translation and rotation. For a 3D function, in this work represented by a PET image within a given ROI, the moments of order n = p + q + r are given by:

m_{pqr} = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} x^p y^q z^r f(x, y, z) \, dx \, dy \, dz   (5.3)

where f(x, y, z) is the value of the voxel with coordinates (x, y, z) within the ROI. The first order moments can be used to find the centroid coordinates of the object in each direction as follows:

\bar{x} = m_{100}/m_{000}, \quad \bar{y} = m_{010}/m_{000}, \quad \bar{z} = m_{001}/m_{000}   (5.4)

To obtain invariance to position in the image, central moments are used, defined as follows:

\mu_{pqr} = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} (x - \bar{x})^p (y - \bar{y})^q (z - \bar{z})^r f(x, y, z) \, dx \, dy \, dz   (5.5)

In addition, central moments can be made invariant to the size of the object by normalizing them appropriately. Since low-order moments are less sensitive to noise and easier to calculate, it is common to normalize by the lowest order moment µ_000:

\eta_{pqr} = \frac{\mu_{pqr}}{\mu_{000}^{(p+q+r)/3 + 1}}   (5.6)

To obtain invariance to rotation, the normalized central moments need to be combined in specific ways. The combinations have been derived analytically by different authors. In this work, the definitions provided by [183] and [184] were used:

• Moment J1 that quantifies total spatial variance:

J_1 = \eta_{200} + \eta_{020} + \eta_{002}   (5.7)

• Moment J2 that quantifies spatial covariance and variance:

J_2 = \eta_{200}\eta_{020} + \eta_{200}\eta_{002} + \eta_{020}\eta_{002} - \eta_{101}^2 - \eta_{110}^2 - \eta_{011}^2   (5.8)

• Moment J3 that quantifies spatial covariance and variance:

J_3 = \eta_{200}\eta_{020}\eta_{002} - \eta_{002}\eta_{110}^2 - \eta_{020}\eta_{101}^2 - \eta_{200}\eta_{011}^2 + 2\eta_{110}\eta_{101}\eta_{011}   (5.9)

• Moment B3 that includes skewness and other terms:

B_3 = \eta_{300}^2 + \eta_{030}^2 + \eta_{003}^2 + 3(\eta_{210}^2 + \eta_{021}^2 + \eta_{201}^2 + \eta_{120}^2 + \eta_{012}^2 + \eta_{102}^2) + 6\eta_{111}^2   (5.10)

• Moment B4 that includes kurtosis and other terms:

B_4 = \eta_{400}^2 + \eta_{040}^2 + \eta_{004}^2 + 4(\eta_{310}^2 + \eta_{031}^2 + \eta_{301}^2 + \eta_{130}^2 + \eta_{013}^2 + \eta_{103}^2) + 6(\eta_{220}^2 + \eta_{022}^2 + \eta_{202}^2) + 12(\eta_{211}^2 + \eta_{121}^2 + \eta_{112}^2)   (5.11)

The MI metrics have previously been used to classify image patterns [185]. When applied to binary images, where f(x, y, z) ∈ {0, 1}, they can be loosely categorized as shape metrics. On the other hand, when f(x, y, z) represents real-valued voxel intensities (e.g. activity concentration values), the MIs combine two types of information — voxel intensities and their spatial distribution. However, as opposed to texture metrics, they do not characterize repetitive patterns in the images. The MIs are considered in chapters 6 and 7.
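As an illustration, the normalized central moments and the invariants J1 and J2 can be computed directly from a 3D array holding the ROI voxel values (zeros outside the ROI). A minimal sketch in Python/NumPy, with voxel-unit coordinates assumed:

```python
import numpy as np

def eta(f, p, q, r):
    """Normalized central moment of a 3D image f (Eqs. 5.5-5.6)."""
    x, y, z = np.indices(f.shape)
    m000 = f.sum()
    xb, yb, zb = (x*f).sum()/m000, (y*f).sum()/m000, (z*f).sum()/m000
    mu = ((x - xb)**p * (y - yb)**q * (z - zb)**r * f).sum()
    return mu / m000 ** ((p + q + r) / 3.0 + 1.0)

def J1(f):
    """Total spatial variance invariant, Eq. 5.7."""
    return eta(f, 2, 0, 0) + eta(f, 0, 2, 0) + eta(f, 0, 0, 2)

def J2(f):
    """Covariance/variance invariant, Eq. 5.8."""
    e = lambda p, q, r: eta(f, p, q, r)
    return (e(2,0,0)*e(0,2,0) + e(2,0,0)*e(0,0,2) + e(0,2,0)*e(0,0,2)
            - e(1,0,1)**2 - e(1,1,0)**2 - e(0,1,1)**2)
```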
5.4.4 Haralick Features

HF are second-order metrics that quantify the statistical properties of the GLCM. The GLCM is computed from a gray value image obtained by binning the original voxel intensities into an integer number of bins. Each voxel in the gray value image represents a bin index. The elements of the GLCM contain the counts of co-occurring gray levels between pairs of voxels located at the end points of the stepping vector g = D × ĝ (Fig. 5.2). Starting from the location of a given voxel in the image, a step to a different voxel is taken. If the gray values of the two voxels are i and j, the element GLCM(i, j) is incremented by one. The entire GLCM is computed by going over all voxels in the image or ROI. At least three parameters must be defined to compute the GLCM: a) the number of gray level bins used for intensity discretization, b) the direction vector ĝ = (x, y, z) along which the gray-value co-occurrences are counted, and c) the distance D between the analyzed voxel pairs (distance along the normalized direction vector ĝ).

Figure 5.2: Diagram that illustrates the computation of the GLCM for a striatal ROI defined over the DTBZ image of a PD subject. g is the stepping vector that connects voxels with gray values 3 and 4. AP – anteroposterior direction, ML – mediolateral direction.

The standard method to compute the gray value image is to define 2^N, N = 2, 3, 4, ... bins on the range from the minimum to the maximum voxel intensity. With such normalization, all information about the original voxel intensities, and about the relative intensities between different images, is removed. Therefore, the HF metrics quantify local spatial patterns and gray value distributions. Mathematically, this can be expressed as the equality

GLCM(f(x, y, z)) = GLCM(a f(x, y, z) + b)   (5.12)

where a ≠ 0 and b are the scale and offset, respectively, applied to the image f.

Many different HF can be computed from a single GLCM. There exists some degree of confusion in the mathematical description and naming of different HF: the equations for computing the features are defined differently in different published works. Several tables of the most frequently cited HF definitions, written using a consistent notation, are provided in Appendix A. The set of HF compiled from different sources and used in this work is listed and defined below.

In formulating the HF, the following abbreviations are used (based on [186, 187]):

p: normalized gray-level co-occurrence matrix
N_g: number of gray level bins
i, j: indexes of the gray level bins
p(i, j): value of p at row i and column j
p_x(i) = \sum_{j=1}^{N_g} p(i, j)
p_y(j) = \sum_{i=1}^{N_g} p(i, j)
p_{x+y}(k) = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} \left( p(i, j) \,\middle|\, i + j = k \right)
\mu_x = \sum_i i \, p_x(i)
\mu_y = \sum_j j \, p_y(j)
\sigma_x = \sum_i \sum_j (i - \mu_x)^2 \, p(i, j)
\sigma_y = \sum_i \sum_j (j - \mu_y)^2 \, p(i, j)
HXY1 = -\sum_i \sum_j p(i, j) \log(p_x(i) p_y(j))
HXY2 = -\sum_i \sum_j p_x(i) p_y(j) \log(p_x(i) p_y(j))
HX = -\sum_i p_x(i) \log(p_x(i))
HY = -\sum_j p_y(j) \log(p_y(j))
HXY = -\sum_i \sum_j p(i, j) \log(p(i, j))

The 15 HF explored in this work included:

• Autocorrelation (ACRL):

ACRL = \sum_i \sum_j (ij) \, p(i, j)   (5.13)

• Contrast (CTR):

CTR = \sum_i \sum_j (i - j)^2 \, p(i, j)   (5.14)

• Correlation (CRL):

CRL = \frac{\sum_i \sum_j (i - \mu_x)(j - \mu_y) \, p(i, j)}{\sigma_x \sigma_y}   (5.15)

• Cluster prominence (CLP), also called cluster tendency in the literature:

CLP = \sum_i \sum_j (i + j - \mu_x - \mu_y)^4 \, p(i, j)   (5.16)

• Cluster shade (CLS):

CLS = \sum_i \sum_j (i + j - \mu_x - \mu_y)^3 \, p(i, j)   (5.17)

• Dissimilarity (DIS):

DIS = \sum_i \sum_j |i - j| \, p(i, j)   (5.18)

• Energy (ENR), also called uniformity and angular second moment in the literature:

ENR = \sum_i \sum_j p(i, j)^2   (5.19)

• Entropy (ENT):

ENT = -\sum_i \sum_j p(i, j) \log(p(i, j))   (5.20)

• Homogeneity (HOM), also called inverse difference:

HOM = \sum_i \sum_j \frac{p(i, j)}{1 + |i - j|}   (5.21)

• Information measure 1 (INF1):

INF1 = \frac{HXY - HXY1}{\max(HX, HY)}   (5.22)

• Information measure 2 (INF2):

INF2 = \left( 1 - \exp(-2(HXY2 - HXY)) \right)^{1/2}   (5.23)

• Normalized homogeneity (NHOM):

NHOM = \sum_i \sum_j \frac{p(i, j)}{1 + |i - j| / N_g}   (5.24)

• Maximum probability (MPR):

MPR = \max_{i,j} \, p(i, j)   (5.25)

• Sum average (SAVG):

SAVG = \sum_{k=2}^{2 N_g} k \, p_{x+y}(k)   (5.26)

• Sum entropy (SENT):

SENT = -\sum_{k=2}^{2 N_g} p_{x+y}(k) \log(p_{x+y}(k))   (5.27)
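A minimal single-direction GLCM computation, together with two of the features above, might look as follows (Python/NumPy; an illustrative sketch assuming D × ĝ yields an integer voxel offset, not an optimized implementation):

```python
import numpy as np

def glcm(img, mask, n_bins=16, ghat=(0, 1, 0), D=1):
    """Normalized GLCM of a masked 3D image along one stepping vector."""
    vals = img[mask]
    # bin intensities into n_bins gray levels over the ROI intensity range
    gray = np.clip(((img - vals.min()) / (vals.max() - vals.min())
                    * n_bins).astype(int), 0, n_bins - 1)
    step = np.rint(D * np.asarray(ghat, float)).astype(int)
    p = np.zeros((n_bins, n_bins))
    for v in np.argwhere(mask):
        w = v + step
        if np.all(w >= 0) and np.all(w < img.shape) and mask[tuple(w)]:
            p[gray[tuple(v)], gray[tuple(w)]] += 1   # count the co-occurrence
    return p / p.sum()

def contrast(p):   # CTR, Eq. 5.14
    i, j = np.indices(p.shape)
    return ((i - j)**2 * p).sum()

def entropy(p):    # ENT, Eq. 5.20
    nz = p[p > 0]
    return -(nz * np.log(nz)).sum()
```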
Chapter 6

Analysis of Localized Tracer Distribution Using Shape Descriptors

6.1 Introduction

PET data related to neurodegeneration are most often quantified using methods based on tracer KM. In this chapter, the ability of KM-independent shape and MI metrics to convey useful information on the disease state is evaluated, in comparison to value-based metrics. The study is performed using data from PD subjects and healthy controls imaged with DTBZ and RAC. The metrics are evaluated for the functionally active regions that are defined using inter-modality (DTBZ or RAC PET combined with MRI) region definition criteria. Regression analysis is performed between the image metrics and clinical measures of PD — motor performance scores and disease duration (DD). It was expected that the shape and MI metrics computed using the DTBZ-defined ROIs and activity values would capture the progressive aspect of PD and show significant correlations with the clinical measures, while no significant disease-related trends would be observed for metrics derived from the RAC images. To test the hypothesis that combining metrics that quantify different properties should improve the correlation with the clinical measures, combinations of metrics are analyzed using bivariate linear regression models.

The analysis methodology in this chapter consists of three main steps: 1) ROI definition: a series of single-modality and inter-modality PET-MRI ROIs are generated. These ROIs include PET-defined functionally active regions (regions of high tracer uptake), MRI-defined regions corresponding to anatomical structures (putamen and caudate), and a range of intermediate regions. The intermediate ROIs are taken from a mixed ROI space established using a linear combination of the single-modality PET and MRI segmentations. The mixed ROIs are used as a tool to systematically investigate the performance and variability of each metric as a function of the ROI definition criteria. DTBZ-MRI and RAC-MRI ROIs are explored; 2) Metric computation: the shape metrics are evaluated for the generated ROIs, and the MI and value metrics are computed using the DTBZ and RAC activity values; 3) Metric evaluation: the correlation between the metric values and the clinical PD severity measures is analyzed as a function of the used ROIs.

The chapter is organized as follows. The imaging protocol, the ROI generation, and the analysis procedures are described in Section 6.2. The main results are presented in Section 6.3: the data obtained with the DTBZ-derived metrics are reported first, followed by the description of the findings with the RAC-derived metrics. Section 6.4 contains the summary and analysis of the results.

6.2 Methods

6.2.1 Data Acquisition and Pre-processing

The study used data from 16 PD and three control subjects acquired as part of an ongoing clinical study. The mean age of the PD subjects was 61.2 ± 7.3 years (range 52 to 79 years), with the clinical disease severity ranging from mild to moderate on the unified Parkinson's disease rating scale (UPDRS). Given that PD can affect the two sides of the brain to different degrees, the motor part of the UPDRS was evaluated separately for the left and right side (off-medication). The mean lateralized UPDRS score was 11.6 ± 6.5 (range 1 to 26). The mean DD measured from the time of clinical onset was 7.3 ± 4.5 years (range 1 to 15 years).

T1-weighted MRI and DTBZ PET images of the brain were acquired for all subjects; RAC images were acquired for all but one PD subject. The MR images were obtained with a Philips Achieva 3T scanner at the UBC MRI Research Centre using a T1-weighted Turbo Field Echo sequence (TR 7.7 milliseconds). MR image dimensions were 256×256×170 with voxel size (1.0 mm)³.
The PET data were acquired in list mode using the HRRT at the UBC PET Imaging Centre and reconstructed using the 3D OSEM-OP algorithm [29]. Two image types were used for metric evaluation: activity concentration images acquired over 30 minutes (30–60 minutes post-injection), and parametric BPND images produced from 16 temporal frames. The durations of the frames were (in sequence) 1 min (4 frames), 2 min (3 frames), 5 min (8 frames), and 10 min (1 frame). The parametric images were computed using a simplified RTM [52]; the occipital cortex and cerebellum were used as the DTBZ and RAC reference regions, respectively. PET image dimensions were 256×256×207 with voxel size (1.219 mm)³. PET images were rigidly co-registered to the native MR images (the transformation was applied to the PET images). The MR images were resampled using trilinear interpolation to match the PET voxel size. The SPM software package (www.fil.ion.ucl.ac.uk/spm/) was used for rigid, mutual information-based co-registration.

6.2.2 Single-modality ROIs

MRI-based ROIs for each subject (ROI_MRI) were generated by manually outlining the left and right putamen and caudate in the MR images using the ImageJ (www.rsbweb.nih.gov/ij/) image analysis software (4 MRI-based regions in total per subject). To produce the DTBZ and RAC ROIs (ROI_PET) defined by the regions of high activity concentration, the activity images acquired over 30 minutes were smoothed with an anisotropic diffusion filter [188] and thresholded. Following one of the methods previously used in oncology imaging [189–192], the threshold level was computed as 40% of the maximum value (within the respective side), after subtraction of the background in the surrounding tissue. The resulting binary mask was subdivided into left and right putamen and caudate regions (4 PET-based regions in total per subject). Thus, for each MRI-based ROI there was a corresponding DTBZ-based ROI and a RAC-based ROI.

6.2.3 Mixed PET-MRI ROIs

To properly evaluate the variability of metric performance, a set of mixed PET-MRI ROIs was used to obtain metric values over a range of possible ROIs, with the two extremes being an ROI defined purely 1) according to the functionally preserved volume of the structure of interest and 2) according to the anatomical delineation of the structure. The method of mixed ROI generation followed the mathematics of localized N-d shape transformations [193, 194] and is outlined in Fig. 6.1. For each pair of corresponding ROI_PET and ROI_MRI, the distance-argumented implicit functions (P_PET, P_MRI) were computed using the equation

P(d) = 0.5 \left( \mathrm{erf}(\beta \, d) + 1 \right)   (6.1)

where d is the signed distance from the ROI boundary (positive inside the ROI), β is an edge-sharpness parameter, and erf(x) is the Gaussian error function. The error function was used here because it represents the convolution of the Heaviside step function with the Gaussian function, which provides a good description of the PSF of the imaging apparatus.
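A minimal sketch of Eq. 6.1 (Python/SciPy; illustrative, not the implementation used in this work). The signed distance is computed here with Euclidean distance transforms, an assumed but standard construction:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.special import erf

def implicit_function(roi, beta):
    """Distance-argumented implicit function P(d) of Eq. 6.1.

    roi  : boolean 3D mask (single-modality segmentation)
    beta : edge-sharpness parameter fitted to the image PSF
    """
    # signed distance to the ROI boundary, positive inside the ROI
    d = distance_transform_edt(roi) - distance_transform_edt(~roi)
    return 0.5 * (erf(beta * d) + 1.0)
```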
The values β = 0.32 and β = 0.81 were used for the PET and MRI implicit functions, respectively. These values were obtained experimentally by fitting the error function to the edge of the striatum in the respective images, and taking the mean value of β over all subjects. Note that at the ROI boundary P(0) = 0.5.

The implicit PET and MRI functions corresponding to the same subject and structure were fused as follows:

P_{FUSED}(\alpha) = \alpha P_{MRI} + (1 - \alpha) P_{PET}, \quad \alpha \in [0, 1]   (6.2)

where α is a free parameter representing the relative weight of the MRI component. The mixed PET-MRI ROIs were obtained by choosing the voxels with a value of P_FUSED above 0.5: ROI_MIX(α) = {(x, y, z) : P_FUSED(α, x, y, z) > 0.5}. The geometric shape of ROI_MIX could be adjusted by changing the α parameter. The values α = 0 and α = 1 correspond to the single-modality PET and MRI ROIs, respectively: ROI_MIX(0) = ROI_PET (DTBZ or RAC), ROI_MIX(1) = ROI_MRI.

Figure 6.1: Flowchart of the algorithm employed to generate mixed PET-MRI ROIs. The main processing steps (segmentation, implicit function computation, fusion, thresholding) are shown using transaxial slices through representative PET/MRI volume images.

Two mixed ROI spaces were established using Eq. 6.2: DTBZ-MRI and RAC-MRI. The putamen and caudate mixed ROIs were generated automatically for all subjects. The mixed ROIs corresponding to α = [0, 0.1, ..., 1.0] were used to compute the investigated image metrics and to evaluate their correlation with DD and UPDRS scores.
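The fusion and thresholding steps of Eq. 6.2 are then a direct weighted average of the two implicit functions; an illustrative Python sketch continuing the one above:

```python
import numpy as np

def mixed_roi(p_pet, p_mri, alpha):
    """Fuse the PET/MRI implicit functions (Eq. 6.2) and threshold at 0.5."""
    p_fused = alpha * p_mri + (1.0 - alpha) * p_pet
    return p_fused > 0.5

# the series of mixed ROIs used throughout the analysis:
# rois = {a: mixed_roi(p_pet, p_mri, a) for a in np.arange(0.0, 1.01, 0.1)}
```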
6.2.4 Image Metrics

The image metrics chosen for the analysis included the value metrics described in Section 5.4.1, the shape metrics described in Section 5.4.2, and the MI metrics described in Section 5.4.3. The shape metrics were evaluated for the α-dependent regions ROI_MIX(α), which were used in place of R; ROI_MRI were used in place of R_REF. Since α entered the shape metric equations, the shape metrics themselves were treated as functions of α.

The value and MI metrics were computed using the PET voxel values (BPND or activity concentration) inside ROI_MIX(α). The mean BPND of DTBZ and RAC were computed from the corresponding parametric images — this was the only metric that was based on KM, and the performance of the other metrics (in terms of the correlation with the clinical data) was evaluated in comparison to the mean BPND. The metrics IOD and STD were computed from the activity concentration images. The MI metrics were also computed from the activity concentration images.

Functions to compute the image metrics were implemented in Matlab. The ROIs were represented and stored as binary volume images (voxel grids) with the same size as the analyzed DTBZ and RAC images.

6.2.5 Metric Evaluation

The values of the image metrics were obtained using the DTBZ-MRI and RAC-MRI mixed ROIs that were generated using Eq. 6.2 with α = [0.0, 0.1, ..., 1.0]. To account for the asymmetric nature of PD, the metrics were computed separately for the two sides of the brain, using either putamen or caudate mixed ROIs. For each value of α, the metric values corresponding to the clinically better (or worse) side in each subject were regressed against the corresponding lateralized UPDRS scores and DD.

Since the control subjects represent a different population compared to the PD subjects, they should be excluded from the regression analysis in order to avoid biasing the measured correlation coefficients and significance levels. On the other hand, including the control subjects enables the examination of the metric behavior when transitioning from the healthy to the disease state. In addition, control subjects provide a reference for tracking the change in metric value with disease progression. Thus, the regression analysis was performed with (N = 19) and without (N = 16) the inclusion of the control subjects in the test sample.

The main measure of correlation was the square of the correlation coefficient, R², obtained by fitting the data to two-term models of the form f(x) = b + ax and f(x) = b exp(ax), where f is the analyzed metric, and x is DD or the UPDRS score. A detailed investigation of more complex functions was outside the scope of this work, as a higher number of subjects would have been required to establish the proper functional form for each of the investigated metrics. The exception was an additional fit of BPND to the three-term function f(x) = c + b exp(ax), since it has been previously determined that such a function most appropriately describes the relationship between DTBZ BPND and DD [195]. Bootstrapping with replacement was used to obtain the mean R²(α), with the corresponding standard deviation and 95% confidence intervals. In addition, Spearman's correlation coefficient ρ was computed to measure the statistical dependence between the image and clinical metrics in a way that does not imply any specific type of functional relationship. Since the results were not corrected for multiple comparisons [171, 196], the absolute p-values associated with the correlations need to be interpreted with caution; nevertheless, the relative values between different metrics are expected to be relevant. To investigate the degree of dependency between image metrics, the goodness of fit to a bivariate linear model was measured, expressed as the adjusted R²_adj, with two image metrics as the independent variables and DD as the dependent variable.

The analysis procedure was performed separately for the DTBZ-derived metrics in the DTBZ-MRI ROIs and the RAC-derived metrics in the RAC-MRI ROIs, and for the clinically worse and better sides of the striatum. In addition, the relationship between the clinical data and the DTBZ-derived metrics was evaluated in the RAC-MRI space. This provided an assessment of the relative metric sensitivity to the anatomic fidelity of the ROI, i.e. whether the metrics remain suitable for analysis with RAC-based ROIs used as surrogate anatomical guidelines [176, 197]. Here an assumption was made that there are no spatial differences between the pre-synaptic and post-synaptic binding targets.
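The bootstrap procedure corresponding to this evaluation scheme — resample subjects with replacement, fit the two-term linear model, and accumulate the distribution of R² — is sketched below (Python/NumPy/SciPy; illustrative, with hypothetical variable names):

```python
import numpy as np
from scipy.stats import spearmanr

def bootstrap_r2(x, y, n_boot=1000, seed=0):
    """Bootstrap (with replacement) the R^2 of the two-term fit y = b + a*x."""
    rng = np.random.default_rng(seed)
    r2 = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(x), len(x))
        a, intercept = np.polyfit(x[idx], y[idx], 1)
        resid = y[idx] - (a * x[idx] + intercept)
        r2[b] = 1.0 - resid.var() / y[idx].var()
    return r2.mean(), r2.std(), np.percentile(r2, [2.5, 97.5])

# rank correlation, free of any assumed functional form:
# rho, pval = spearmanr(metric_values, disease_duration)
```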
6.3 Results

Results obtained with the DTBZ-derived metrics in the DTBZ-MRI ROI space are presented in detail first, including the variability of the metrics with α and the R²(α) plots. The value metric group is discussed first, followed by the shape and MI metrics. Results obtained with the RAC- and DTBZ-derived metrics in the RAC-MRI ROI space are summarized in a separate section.

The correlations were found to be stronger when the clinically better side was evaluated, consistent with previous studies (Nandhagopal et al. 2009). Therefore, the presented results are for the metric values and correlations that were evaluated on the less affected (better) side; using the more affected side yielded the same general outcomes. Additionally, the MI metrics J3, B3, and B4 produced poor correlation results with multiple outliers (consistent with the earlier results reported in [198]). Thus, they were excluded from the analysis at the early stages of the work.

6.3.1 Metric Values and Variability

Examples of ROIs from the DTBZ-MRI ROI space used for metric evaluation are shown in Fig. 6.2. Greater values of α produced ROIs with finer spatial detail (particularly in the areas of the interior capsule and posterior caudate) and lower surface irregularity. With PD subjects, the activity-defined PET regions were smaller than the corresponding MRI regions, as expected. Figure 6.3 further demonstrates the degree of alignment between the activity- and anatomy-defined regions: even in areas presumably unaffected by the disease, the MRI ROIs did not always align well with the functionally active regions (indicated by arrows). This misalignment was greater with the caudate ROIs, where the ROI shape differences between DTBZ and MRI were more significant (likely due to the more pronounced partial volume effect and registration imperfection). With control subjects, the mean VOE was 0.64 ± 0.08 for the caudate, and 0.33 ± 0.05 for the putamen; with PD subjects, the mean VOE was 0.67 ± 0.08 and 0.82 ± 0.11 for the caudate and putamen regions, respectively.

Figure 6.2: Surface renderings of ROI_MIX(α), α = 0.0, 0.2, ..., 1.0, for one control subject and one PD subject (UPDRS 9.0, DD 6, moderate severity) in the DTBZ-MRI ROI space.

Several representative metrics (VOL, BPND, CMP, and J1) are plotted in Fig. 6.4 as functions of α. The VOL values reveal various degrees of functional atrophy in the PD subjects; different metric behavior between the less affected and more affected subjects is observed. The values of BPND were highest with α ~ 0 for PD subjects, reflecting the spatially non-uniform dopaminergic denervation typical of PD. The CMP graphs revealed that the PET ROIs were on average more compact but substantially less consistent compared to the MRI ROIs. The J1 graphs demonstrate that the spatial variance of voxel values was highest with MRI ROIs and lowest with PET ROIs. Longer DD generally corresponded to higher J1, lower VOL and lower BPND.

6.3.2 Correlation Between Image and Clinical Metrics

The maximum values of R² and ρ obtained for the investigated metrics in the DTBZ-MRI and RAC-MRI ROI spaces are summarized in Table 6.1, along with the values of α that maximized the correlation with the clinical data (α_max). The data are shown only for the less affected side of the putamen; the correlation obtained using the caudate ROIs was weak for most metrics, as expected, given the known spatio-temporal progression pattern of PD.

Among the value group, STD had the strongest correlation with the clinical measures, similar to that of BPND and followed by IOD. The corresponding values of ρ generally followed the same pattern. The R²(α) plot and representative scatter plots for log(BPND) are shown in Fig. 6.5. The R²(α) graph demonstrates a trend toward stronger correlation between BPND and DD around α ~ 0.5 (also observed with ρ). The scatter plots revealed that a three-term exponential function of the form f(x) = c + b exp(ax) was a better fit for BPND (R²_DD = 0.94, R²_UPDRS = 0.62) compared to the two-term function (R²_DD = 0.82, R²_UPDRS = 0.61), as known from the literature [195]. Using the linear and exponential two-term fits resulted in nearly identical correlation coefficients with all metrics except BPND.
Nevertheless, since the two-term linear function was used with the other metrics, for BPND both the two-term and three-term R² values were retained as the reference. The use of the Spearman correlation coefficient ρ may be more robust in this regard, since it does not imply any specific type of functional dependence.

Table 6.1: Maximum values of R² and ρ (given in parentheses) between the image metrics and the clinical metrics, obtained in the DTBZ-MRI ROI space (using the less affected side of the putamen). All subjects were included in the analysis. The R²(α) values were obtained by fitting the image metrics with two-term linear functions of UPDRS and DD; a two-term exponential function was used with BPND. An absent α_max indicates that no trend in the correlation strength was observed. ** p<0.01; * p<0.05; no glyph for p>0.05. † Value obtained with the three-term exponential fit (BPND only).

Image metric        α_max   DD                          UPDRS
Value
  Log(BPND)         0.4     0.82** (-0.90**); 0.94**†   0.61** (-0.86**); 0.62**†
  STD               1.0     0.88** (-0.91**)            0.63** (-0.84**)
  IOD               1.0     0.72** (-0.83**)            0.39** (-0.65**)
Shape
  VOL               0.2     0.57** (-0.69**)            0.48 (-0.68**)
  SAR               0.2     0.56** (-0.68**)            0.51** (-0.70**)
  RVD               0.3     0.53** (-0.70**)            0.46** (-0.69**)
  VOE               0.2     0.60** (0.76**)             0.52** (0.77**)
  RCM               0.3     0.48** (0.64**)             0.38** (0.60**)
  ECM               0.0     0.65* (-0.75**)             0.59* (-0.74**)
  CMP               0.5     0.32* (-0.49*)              0.36* (-0.53*)
  EXT               0.5     0.31* (-0.51*)              0.36* (-0.69**)
  MBR               0.3     0.51** (-0.62**)            0.53** (-0.68**)
Moment invariants
  J1                1.0     0.94** (0.94**)             0.79** (0.90**)
  J2                1.0     0.91** (0.93**)             0.77** (0.91**)

Table 6.2: Maximum values of R² and ρ (given in parentheses) between the image metrics and the clinical metrics, obtained in the RAC-MRI ROI space (using the less affected side of the putamen). All subjects were included in the analysis. The R²(α) values were obtained by fitting the image metrics with two-term linear functions of UPDRS and DD; a two-term exponential function was used with BPND. An absent α_max indicates that no trend in the correlation strength was observed. ** p<0.01; * p<0.05; no glyph for p>0.05.

Image metric        α_max   DD                UPDRS
Value
  Log(BPND)         -       0.06 (0.21)       0.06 (0.22)
  STD               -       0.07 (0.23)       0.05 (0.24)
  IOD               -       0.07 (0.26)       0.09 (0.30)
Shape
  VOL               -       0.10 (0.38)       0.06 (0.29)
  SAR               -       0.09 (0.32)       0.06 (0.24)
  RVD               -       0.11 (0.32)       0.17 (0.29)
  VOE               -       0.12 (0.30)       0.06 (0.28)
  RCM               0.3     0.29* (0.51*)     0.21 (0.39)
  ECM               -       0.16 (0.32)       0.09 (0.27)
  CMP               -       0.13 (0.28)       0.12 (0.31)
  EXT               0.3     0.36* (0.51*)     0.40** (0.50*)
  MBR               -       0.09 (0.26)       0.07 (0.25)
Moment invariants
  J1                -       0.07 (0.24)       0.08 (0.26)
  J2                -       0.08 (0.25)       0.07 (0.23)

Figure 6.3: Contours of ROI_MIX(α) overlaid on transaxial slices of DTBZ BPND images, for two representative PD subjects and three values of α (0.0, 0.5, 1.0). Arrows point out areas of misalignment between ROI_MIX(α = 1) and the regions of high activity concentration.

The correlation between the clinical data and the image metrics in the shape group was statistically significant but lower compared to the value group. The correlation was statistically significant only with PET ROIs or mixed ROIs, and negligible with MRI ROIs. Metrics related to the size of the region (VOL, VOE, RVD, SAR) had the highest values of R² and ρ, and the maximum correlation was most often observed around α ~ 0.3, as shown in the R²(α) graph for RVD in Fig. 6.5. With metrics that captured the shape properties (CMP, EXT, MBR, ECM), the R² and ρ plots generally had a maximum around α = 0.5. An example of such a trend for CMP is shown in Fig. 6.5.
The scatter plots for CMP demonstrate the correlation pattern observed with α = 0.5: subjects with low DD and UPDRS scores generally had higher CMP values. The trend of increased correlation in the region of mid-range α values was observed in both the R² and ρ functions with most geometry-based metrics.

Figure 6.4: Graphs of VOL, BPND, CMP, and J1 in the DTBZ-MRI ROI space for all subjects, evaluated using mixed ROIs of the putamen (less affected side). Higher DD generally corresponded to more significant metric variability with α. The three subjects with a DD of zero correspond to control subjects.

Figure 6.5: A. Mean bootstrapped values of R² with standard deviation (error bars) and 95% confidence intervals (filled regions), plotted against α for BPND (left), RVD (middle) and CMP (right). The correlation with DD (blue) and UPDRS (green) was evaluated using putamen mixed ROIs. B. Representative scatter plots of log(BPND), RVD and CMP (evaluated at α = 1.0, 0.3 and 0.5, respectively) against DD and UPDRS. Non-bootstrapped values of R² are shown for the cases where the control subjects were included (control+PD) and excluded (PD) from the analysis.

The MI metrics J1 and J2 had the strongest correlation with the clinical data among all metrics; the highest values of R² and ρ were obtained with the MRI-based ROIs. The R²(α) graphs and scatter plots for J1 and J2 are shown in Fig. 6.6. With α < 0.5, the mean correlation coefficients were substantially reduced compared to α ~ 1.0; the reduction was on the order of 54% for DD and 74% for UPDRS (compared to ~20% and ~32% with BPND, respectively). The scatter plots strongly suggest a linear relationship between DD and J1 (J2), as opposed to the exponential trend observed with BPND (compare Figures 6.5 and 6.6). Interestingly, J1 and J2 were the only metrics that had relatively high R² values when evaluated using caudate ROIs: R²_DD(J1) = 0.71 ± 0.09 [α = 1], R²_DD(J2) = 0.70 ± 0.10 [α = 1], with p<0.01.

When the control subjects were excluded from the regression analysis, the correlation analysis for BPND produced values R²_DD = 0.85 and ρ = 0.84, comparable to those previously obtained with the control subjects included in the regression.
The correlation between the MI metrics and the clinical data also did not change appreciably: the value of R²_DD was 0.91 (0.94 with control subjects included) for J1, and 0.89 (0.91 with controls) for J2. On the other hand, the correlation strength for the shape metrics became lower. For example, the corresponding values of R²_DD were 0.39 (0.57 with controls) for VOL, 0.34 (0.53 with controls) for RVD, 0.49 (0.60 with controls) for VOE, and 0.52 (0.65 with controls) for ECM. The correlation values for CMP and EXT were reduced by approximately 20%.

6.3.3 Metric Combinations

When metrics of similar character were combined in a bivariate model (e.g. VOL and RVD), the value of R²_adj expectedly did not increase compared to the respective univariate models. However, the correlation did improve when qualitatively different metrics were combined, indicating that the metrics carried some complementary information. For example, combining IOD (for α = 1) with any ROI size metric (for α = 0) increased the R²_adj by approximately 20%. The greatest increase was observed with [IOD, VOE]: R²_adj = 0.69 (IOD), 0.64 (VOE), 0.89 (IOD, VOE). For reference, the R²_adj with univariate regression was 0.92 for J1 and 0.84 for STD.

Figure 6.6: Mean bootstrapped values of R² with standard deviation (error bars) and 95% confidence intervals (filled regions), plotted against α for J1 (top) and J2 (bottom). The representative scatter plots of the metric values against DD (blue) and UPDRS (green) are shown for MRI-based putamen ROIs (α = 1.0). Non-bootstrapped values of R² are shown for the cases where the control subjects were included (control+PD) and excluded (PD) from the analysis.

A related observation is that different image metrics appeared to have different functional dependence on DD. For example, BPND had an exponential dependence on DD, while J1 and J2 clearly had a linear relationship with DD within the studied range (compare Figures 6.5 and 6.6). This observation additionally indicates that different metrics may capture different aspects of disease progression, which may become even more evident when a larger range of disease severity/duration is considered.
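A bivariate model of this kind and its adjusted R² can be obtained with an ordinary least-squares fit; a minimal sketch (Python/NumPy, with illustrative names):

```python
import numpy as np

def adjusted_r2_bivariate(m1, m2, dd):
    """Adjusted R^2 of the bivariate linear model dd ~ 1 + m1 + m2."""
    X = np.column_stack([np.ones_like(m1), m1, m2])
    coef, *_ = np.linalg.lstsq(X, dd, rcond=None)
    resid = dd - X @ coef
    r2 = 1.0 - resid.var() / dd.var()
    n, k = len(dd), 2                     # k = number of predictors
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
```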
Figure 6.6: Mean bootstrapped values of R² with standard deviation (error bars) and 95% confidence intervals (filled regions), plotted against α for J1 (top) and J2 (bottom). The representative scatter plots of metric values against DD (blue) and UPDRS (green) are shown for MRI-based putamen ROIs. Non-bootstrapped values of R² are shown for the cases where the control subjects were included (control+PD) and excluded (PD) from the analysis.

A related observation is that different image metrics appeared to have different functional dependences on DD. For example, BPND had an exponential dependence on DD, while J1 and J2 clearly had a linear relationship with DD within the studied range (compare Figures 6.5 and 6.6). This observation additionally indicates that different metrics may capture different aspects of disease progression, which may become even more evident when a larger range of disease severity/duration is considered.

6.3.4 Metric Correlation in the RAC-MRI ROI Space

Examples of mixed RAC-MRI ROIs are shown in Fig. 6.7A. The RAC ROIs had approximately the same size as the MRI ROIs, and the VOL metric averaged across subjects was approximately constant with α. The RAC ROIs were noisier and not as anatomically accurate as the MRI ROIs. Even with co-registration, and with the tracer binding unaffected by the disease, it was found that the MRI ROIs did not accurately encompass the regions of high RAC uptake. The mean VOE between RRAC and RMRI was 0.58 ± 0.05 for the caudate, and 0.36 ± 0.04 for the putamen. These VOE values are similar to those observed between RDTBZ and RMRI with the control subjects, indicating that such mismatch is likely a reflection of the difference in resolution between the MRI and PET images.

When the metrics were computed using RAC-MRI ROIs and RAC image data (Table 6.2), the R² and ρ for all metrics were low regardless of the α value, as expected. In this way, the RAC-MRI ROI space served as a negative control for the patterns observed in the DTBZ-MRI ROI space. However, weak trends were observed with RCM and EXT: the correlation between these metrics and clinical data tended to increase with α → 0.

When the value and MI metrics were computed in the RAC-MRI space using DTBZ image data, the high correlation of those metrics with DD and UPDRS was preserved. The representative R²(α) plots for BPND and J1 are shown in Fig. 6.7B. With putamen ROIs, the correlation between DTBZ BPND and clinical metrics did not depend on α (Fig. 6.7B, left).

With J1, a gradual decrease of R² was observed with α → 0 (Fig. 6.7B, middle), which here indicates going from MRI- to RAC-defined ROIs. The reduction was more evident with regression against DD, with R²DD = 0.86 ± 0.06 [α = 0] and R²DD = 0.94 ± 0.02 [α = 1].

On the other hand, when J1 was evaluated for the caudate regions, the opposite trend was observed: R²DD = 0.81 ± 0.06 [α = 0] and R²DD = 0.73 ± 0.10 [α = 1] (Fig. 6.7B, right). Since the mean values of R² for α = 0 and α = 1 agree within error bars, the trend is considered statistically insignificant.

Figure 6.7: A. The shape of ROIMIX(α) for one of the PD subjects (UPDRS 9.0, DD 6, moderate severity) in the RAC-MRI ROI space. B. Mean bootstrapped values of R² plotted against α in the RAC-MRI ROI space, with standard deviation (error bars) and 95% confidence intervals (filled regions). Left – DTBZ BPND computed in the putamen; Middle – DTBZ J1 computed in the putamen; Right – DTBZ J1 computed in the caudate.

6.4 Conclusions

6.4.1 Relative Metric Performance

Among the investigated metrics, the MI J1 (R²DD = 0.94) and J2 (R²DD = 0.91) had the strongest correlation with the clinical data, and the values of the correlation coefficient were similar to those obtained with DTBZ BPND when using a three-term exponential fit (R²DD = 0.94). The correlation was maximized when the MRI-based ROIs were used. Importantly, the MI were fit well with a two-term linear model, in contrast to DTBZ BPND. This indicates that the MI and BPND may relate to different aspects of disease progression and may be most sensitive at different stages of the disease. To test this hypothesis, studies on a wider cohort of subjects are required.
Compared to the value and MI metrics, the shape metrics had moderate-to-low correlation with the clinical metrics and performed worse than BPND. Nevertheless, the measured values of the correlation coefficient between the clinical assessments and these metrics evaluated for DTBZ-MRI ROIs were statistically significant, unlike the shape metrics evaluated for RAC-MRI ROIs. This observation strengthens the conclusion that the moderate levels of correlation observed in the DTBZ-MRI ROI space were indeed meaningful and informative of the neurochemical changes associated with PD. The low correlation values indicate that, in the context examined, shape metrics may be of limited value by themselves; they become more useful if combined with complementary metrics (e.g. they can be combined with value-type metrics to improve the predictive strength of the corresponding multivariate model).

It should be pointed out that the obtained p-values were not corrected for multiple comparisons (several metrics were tested using a relatively limited number of subjects). This could lead to worse than expected generalization outside of the studied sample. However, conceptually similar metrics produced similar correlation values, providing an additional indication that the analysis results were robust. The optimal form of the functions relating the outcomes of the imaging metrics to the clinical data was not explored in detail; the limited number of data points did not allow for such an exhaustive comparison. The trends observed with the Spearman's correlation coefficient ρ replicated those obtained with R², which at least in part indicates that the results were not specific to the used linear (or exponential) fits. Although a search for an optimal functional form would likely tweak the rank order of the correlations in terms of R² (but not ρ), its absence does not detract from one of the main messages of this work: that there is indeed clinically-relevant quantitative information in the spatial distribution of the tracer, and that such information can be captured using shape and MI descriptors.

6.4.2 The Use of Mixed ROIs in Image Analysis

The employed method to generate and use the mixed ROIs represents a new tool for quantitative image analysis that is based on the gradual transition between the ROIs defined using different imaging modalities. Starting with a single ROI set per subject per imaging modality, a multitude of ROIs can be generated and investigated systematically, thus increasing the generality and robustness of the analysis. For example, in PET/MRI image analysis, the mixed ROIs can be chosen to better align with the high activity regions, or to have higher anatomic fidelity and spatial definition, thus helping to investigate the dependence of quantitative data on the ROI shape and to make a more informed, task-specific choice of the ROI placement. Although PET and MRI of the striatum were used to develop the method, the problem of ROI selection is not specific to these two modalities or to studies of PD. In fact, since different modalities reveal anatomical structures differently, a similar problem may be present in any multi-modality imaging study. Therefore, the proposed approach is believed to be generalizable to other modalities and imaging objectives, beyond the striatum, with the caveat that methodological details (such as the calculation of the structure localization probability maps) might need to be optimized for each specific modality.
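The construction of the mixed ROIs from the structure localization probability maps is described earlier in this chapter; as an illustration of the general idea only, a minimal Matlab sketch is given below. The array names, the linear blending rule and the 0.5 threshold are simplifying assumptions, not the exact procedure used in this work:

    % Illustrative mixed-ROI generation: blend two structure-localization
    % probability maps with weight alpha, then threshold to a binary ROI.
    % pPET, pMRI: 3D arrays with values in [0,1]; by the convention used
    % in this chapter, alpha = 0 yields the PET ROI, alpha = 1 the MRI ROI.
    function roi = mixedROI(pPET, pMRI, alpha)
        pMix = alpha * pMRI + (1 - alpha) * pPET;   % blended probability map
        roi  = pMix > 0.5;                          % binary mixed ROI
    end

Sweeping alpha over [0, 1] then generates the family of ROIs over which the metric and correlation functions of α were evaluated.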
The use of mixed ROIs in this study enabled the inspection of metric values simultaneously for a large set of regions: instead of looking at metric values associated with a particular region, whole ranges of metric and correlation values were analyzed. Thus, the data were effectively put into greater perspective. The study confirmed that the relationship between the image-derived and clinical data may indeed depend on the ROI definition method: the correlation between the investigated metrics and clinical data was found to depend on the relative contribution of each modality to the ROI definition. For example, comparing the DTBZ-MRI and RAC-MRI ROI spaces, the correlation between DTBZ BPND and clinical metrics remained unchanged regardless of which ROI space was used, indicating that regions of high RAC uptake could be used as a substitute for the accurate anatomical (MRI-based) reference regions. On the other hand, the correlation between DTBZ J1 and clinical metrics was highest with MRI ROIs, and degraded by ~10% with RAC ROIs. This implies that the J1 and J2 metrics have a higher sensitivity than BPND to the region definition method, and that a fairly accurate anatomical reference may be required to achieve maximum correlation when using these metrics. With BPND, the resolution of the anatomy-revealing scan could be reduced without significantly affecting the correlation; e.g. RAC images can be used for the ROI definition if MRI data are not available.

The single-modality PET or MRI ROIs were not always optimal in terms of maximizing the R² values. With region size metrics such as VOL, RVD and SAR, the correlation strength was maximized when PET ROIs were combined with a small MRI contribution (the highest R² was achieved with α ~ 0.3). These patterns (also observed with the Spearman's correlation coefficient) can likely be explained by the insensitivity of the MRI ROIs to the disease on the one hand, and noise in the shape of the PET ROIs on the other. It can be hypothesized that the regularization of the mixed ROIs introduced by the MRI component reduced noise in the ROI shape, which in turn positively affected the correlation strength. On the other hand, the majority of pure MRI ROIs underestimated the mean BPND values for control subjects due to misalignment and the partial volume effect in PET images. This implies that caution should be used when using MRI-based ROIs to obtain quantitative PET data. The mixed ROIs were better aligned with the high activity regions than the pure MRI ROIs.

With regard to using mixed ROIs in image analysis, it is important to emphasize the multiple applicability of the approach. For example, focus could be placed only on finding the optimal ROIs for use in a particular study. In this case, the same value of α (or any other shape-defining parameter) should be used for all subjects in the study (altering the free parameters on a per-subject basis would introduce a selection bias). The choice of the α value could be based on a number of different criteria. For example, in correlation studies that investigate the relationships between the clinical and image-based manifestations of the disease, the value of α that maximizes the correlation could be learned from a training set of images/subjects.

Alternatively, the approach could be utilized for classification and analysis tasks. The behavior of the metric functions BPND(α), VOL(α), CMP(α) (Fig. 6.4) and others was noticeably different for the control and PD subjects, thus making these data suitable for automatic PD/control discrimination. Analyzing the shapes of these functions may also prove useful for gaining additional information on the mechanisms of disease progression, i.e. larger slopes can be indicative of greater functional degeneration.

Another potential application is to check the images for possible misregistration errors and outliers. The graphs of the functions BPND(α) and VOL(α) were remarkably different for one of the subjects in the study compared to the rest.
A subsequent detailed examination of the corresponding MRI and PET images revealed sub-optimal image registration: the corresponding single-modality putamen ROIs were displaced axially by ~6 voxels (~6 mm), compared to 1–3 voxels for the other control subjects (Fig. 6.8). This method of registration quality control should be especially useful in large studies where direct image inspection may be hindered.

Figure 6.8: 3D renderings of the single-modality PET and MRI ROIs for subjects CL01 (left) and CL02 (right). The region of ROI mismatch due to misregistration is indicated by the arrow.

Chapter 7

Analysis of Regions with Specific Tracer Uptake Using Texture Descriptors

7.1 Introduction

In this chapter texture-based analysis of PET images is explored. The HF metrics described in Section 5.4.4 were employed to characterize the DTBZ image texture in healthy controls and PD subjects. The correlation between the HF metrics and the clinical DD was evaluated. The HF metrics are computed from GLCMs that can be defined for different image directions and distances. A thorough investigation requires that a range of possible directions and distances be explored. Thus, these parameters were evaluated from the standpoint of their effect on the correlation strength between the HF metrics and DD. The measured correlation coefficients were compared to those obtained with the MI and the mean BPND.

The ROI definition criteria influence the descriptive strength of metrics, as demonstrated by the results of Chapter 6. With the MI metrics (which, similarly to the HF metrics, reflect the spatial distribution of the voxel intensities in the ROI) the MRI-based ROIs were found to produce the highest values of R² and ρ. Therefore, in this chapter MRI-based ROIs are treated as the gold standard, and PET-based ROIs are not used (since in Chapter 6 with the MI metrics they produced similar-or-worse results compared to the MRI-based ROIs).

One must consider that nuclear imaging studies may not necessarily include MRI for all subjects, and therefore anatomical reference images may not always be available. Additionally, the resolution of the acquired images may not be sufficient to reliably differentiate between closely located brain structures. In such cases, it is still desirable that texture-based analysis could be applied. Therefore, in this chapter two types of ROIs were considered: the MRI-derived ROIs of the putamen, caudate and striatum, and simple bounding box (BB) ROIs that encompass the same structures. The BB ROIs represent a simplified method of ROI definition that can be used when an accurate anatomic reference is not available. While the primary aim of this chapter was to investigate whether the HF metrics computed from PET images correlate with the progression of PD, the secondary aim was to test the utility of the BB ROIs, including the influence of the PET/MRI registration inaccuracy observed in Section 6.3.1.

The methodology of investigation adopted in this chapter was developed based on the results presented in Chapter 6. Only DD was used as the clinical measure of progression, and the Spearman's coefficient ρ was used as the sole measure of correlation between the image metrics and DD. Control subjects were removed from the correlation analysis, and instead the control/PD discriminative power was examined.
Additional methodological details are provided in Section 7.2, the results are presented in Section 7.3, and the discussion of the results is presented in Section 7.4.

7.2 Methods

7.2.1 Clinical and Image Data

Analysis was performed using a sample of 37 PD subjects (mean age 61.4±8.0 y, range 40 to 79 y) and 10 control subjects (mean age 48.4±18.0 y, range 24 to 80 y), a superset of the subjects used in Chapter 6. Since in Chapter 6 almost all image-derived metrics were better correlated with DD than with the UPDRS score, for the analysis performed in this chapter DD was used as the primary clinical measure of disease. The mean DD for the expanded sample of subjects was 5.7±4.2 years (range 0 to 13 years). The PET and MRI image acquisition, reconstruction and registration protocols were the same as those described in Section 6.2.1. The 30-minute DTBZ activity concentration images and parametric BPND images were computed and rigidly co-registered with the MR images using the SPM software package (www.fil.ion.ucl.ac.uk/spm/). A mutual-information cost function was used. RAC images were not used in the investigation, since in Chapter 6 it was determined that, compared to DTBZ, metrics evaluated on RAC images were not strongly affected by the disease progression.

7.2.2 Evaluated Metrics

Texture in the DTBZ activity concentration images of PD and control subjects was quantified using the HF metrics defined in Section 5.4.4. The GLCMs were computed from the DTBZ gray value images. The activity bin range was set between the 1st percentile and the 99th percentile of the activity values in the ROI. The gray value of a voxel was set as the bin index that corresponded to that voxel's activity value. The MI metrics J1 and J2, as well as the ROI-mean DTBZ BPND, were computed as a reference for the correlation strength comparison. The MI metrics were computed from the DTBZ activity concentration images, and the mean BPND was computed from the parametric images.

7.2.3 Investigated Brain Structures and ROIs

Three brain structures were considered: the putamen, the caudate, and the striatum (comprised of the putamen and caudate). The left and right sides of the brain were analyzed separately. With each structure, two types of ROIs were used (Fig. 7.1):

• anatomical ROIs (PUT for putamen, CAU for caudate, STR for striatum) that were obtained by automated brain segmentation using Freesurfer [199]; the STR ROIs were defined as the union of the PUT and CAU ROIs;

• BB ROIs (PBB for putamen, CBB for caudate, SBB for striatum) that were defined using the corresponding MRI ROIs, as sketched below. The faces of the BB were parallel to the image planes. The size and position of the BB were set to tightly bound the MRI ROIs, i.e. the size of the box was equal to the extent of the MRI ROIs along the image dimensions. The orientation of the head in the MR images was such that the image dimensions approximately corresponded to the anatomic anteroposterior, mediolateral and inferosuperior directions.

For those subjects that had manually-defined ROIs available, using the Freesurfer ROIs produced metric values similar to those of the manual ROIs.
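A tight bounding box of this kind is straightforward to compute from a binary MRI ROI; a minimal Matlab sketch (the function name is illustrative):

    % Tight bounding-box (BB) ROI around a binary MRI-defined ROI; the box
    % faces are parallel to the image planes and span the ROI extent.
    % roiMRI: 3D logical array; roiBB: logical array of the same size.
    function roiBB = boundingBoxROI(roiMRI)
        [ix, iy, iz] = ind2sub(size(roiMRI), find(roiMRI));
        roiBB = false(size(roiMRI));
        roiBB(min(ix):max(ix), min(iy):max(iy), min(iz):max(iz)) = true;
    end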
7.2.4 GLCM Computation

The HF metrics were computed from GLCMs that were obtained using different directions ĝ and distances D (as defined in Section 5.4.4). The impact of these parameters on the correlation between the HF metrics and DD was analyzed. The explored directions included:

1. static directions (the same for all subjects) that were defined according to the image dimensions. The static directions were defined on a 3×3×3 voxel grid as vectors pointing from the central voxel to the periphery voxels. Included in this set were the anteroposterior, mediolateral, and inferosuperior directions. The mediolateral directions for the left and right sides were taken to be opposite;

2. dynamic directions that were defined according to the same rule but varied between the subjects. The dynamic directions included (Fig. 7.2):

(a) the MRI-defined direction: along the longitudinal axis of the analyzed brain structure; this direction was determined as the first principal component resulting from the PCA of the voxel coordinates comprising the MRI-defined ROI, and was only examined with the MRI-based ROIs;

(b) the PET-defined direction: along the mean activity gradient computed within the analyzed ROI; to reduce the influence of the high activity gradients at the edges of the putamen and caudate, the images were smoothed with a coarse Gaussian filter (FWHM 10 voxels) prior to computing the gradient.

Additionally, direction-averaged GLCMs were computed by averaging the three GLCMs computed in the anteroposterior, mediolateral, and inferosuperior directions.

The GLCMs were direction-symmetric, i.e. the co-occurrence counts were measured using the direction vectors ĝ and −ĝ. The investigated distances D ranged from 1 to 5 voxels (voxel size 1.219 mm). The number of gray level bins was always 16. The GLCMs were normalized by the total number of samples (co-occurrence counts).

Functions to compute the GLCMs and HF metrics were implemented in Matlab. The ROIs were represented by binary volume images, with the same voxel size and dimensions as the analyzed DTBZ images. Traversing thousands of voxels in an ROI is computationally intensive; therefore, the Matlab code to compute the GLCM was converted to C code using Matlab Coder, and compiled to a mex file that could be called directly from the Matlab environment.
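The following sketch illustrates the computation just described: percentile-based gray-level binning followed by symmetric co-occurrence counting for one direction and distance. It is a readable (slow) reference version rather than the compiled mex implementation; the boundary handling and masking details may differ from the code used in the thesis.

    % GLCM of a 3D image over an ROI for one direction/distance pair.
    % img: 3D image; roi: 3D logical mask; g: unit offset, e.g. [0 1 0];
    % D: distance in voxels; nBins: number of gray levels (16 in this work).
    function glcm = roiGLCM(img, roi, g, D, nBins)
        vals = sort(img(roi));
        lo = vals(max(1, round(0.01 * numel(vals))));    % ~1st percentile
        hi = vals(round(0.99 * numel(vals)));            % ~99th percentile
        gray = 1 + floor((img - lo) / (hi - lo) * nBins);
        gray = min(max(gray, 1), nBins);                 % clamp to [1, nBins]
        glcm = zeros(nBins);
        off  = D * g(:)';                                % voxel offset
        [ix, iy, iz] = ind2sub(size(img), find(roi));
        for k = 1:numel(ix)
            p = [ix(k), iy(k), iz(k)] + off;             % neighbor voxel
            if all(p >= 1) && all(p <= size(img)) && roi(p(1), p(2), p(3))
                a = gray(ix(k), iy(k), iz(k));
                b = gray(p(1), p(2), p(3));
                glcm(a, b) = glcm(a, b) + 1;             % count for +g
                glcm(b, a) = glcm(b, a) + 1;             % and for -g (symmetric)
            end
        end
        glcm = glcm / sum(glcm(:));                      % normalize by total counts
    end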
7.2.5 Methodology of Correlation and Discrimination Analysis

Based on the outcome of the analysis performed in Chapter 6, the Spearman's rank correlation coefficient ρ was used as the measure of correlation between the image-derived metrics and DD. Indeed, the values of ρ measured between the different metrics were ranked similarly to R², while having the advantage of being evaluated in a non-parametric test. The correlation was evaluated for the image metrics obtained from the less affected side of the brain using the MRI and BB ROIs. Control subjects were not included in the correlation analysis to avoid biasing the ρ values. The sensitivity of the measured ρ values to the PET/MRI registration accuracy was assessed by rotating the MRI and BB ROIs by a random angle in 3D, ranging from 0 to 15 degrees from the initial (registered) orientation. This test also provided an assessment of the robustness of the measured ρ values.

In Chapter 6, the relative ability of the image metrics to discriminate between healthy and disease states was assessed by including the control subjects in the regression analysis. While a relatively small number of control subjects (3) was not expected to bias the results substantially, such an approach becomes problematic with a larger number of subjects. Therefore, in this part of the study the discrimination between the control and PD subjects was evaluated using the separability index (SI) [200], which was computed between the PD and control subject groups. In control subjects, the metric values used to compute the separability index were taken from the side of the striatum with the higher mean BPND in the PUT ROI.

The SI is defined as the fraction of data points whose class labels are the same as those of their nearest neighbors. It represents a measure of how data points with different classes tend to cluster together. The advantages of using the SI in this context are that a) it is a non-parametric measure, b) it is related to simple proximity-based classifiers, and c) it is not sensitive to the width of the PD or control data distribution. The latter property is desirable if one aims to distinguish PD subjects from healthy controls regardless of DD. More traditional measures such as the two-sample t-test cannot be applied in this case, since the PD subject group contains subjects with different disease severities. The consistency of the metric values for the controls was evaluated separately.
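A minimal sketch of the SI computation follows directly from this definition (the Euclidean distance measure and the handling of ties are assumptions; x holds one or more metric values per subject and labels the control/PD class):

    % Separability index: fraction of points whose nearest neighbor
    % (smallest Euclidean distance, excluding the point itself) has the
    % same class label.
    % x: n-by-d matrix of metric values; labels: n-by-1 class vector.
    function si = separabilityIndex(x, labels)
        n = size(x, 1);
        match = false(n, 1);
        for i = 1:n
            d = sum(bsxfun(@minus, x, x(i, :)).^2, 2);  % squared distances
            d(i) = inf;                                 % exclude self
            [~, j] = min(d);                            % nearest neighbor
            match(i) = (labels(j) == labels(i));
        end
        si = mean(match);
    end

An SI of 1.0 indicates that every subject's nearest neighbor belongs to the same group, while an SI near 0.5 indicates that two groups of similar size are thoroughly mixed.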
7.3 Results

7.3.1 Correlation Analysis Between HF and DD

In all analyzed structures and ROIs, significant correlations were measured between the HF metrics and DD. Examples of scatter plots between the HF metrics and DD are shown in Fig. 7.3 for the PUT and PBB ROIs. Inspection of the scatter plots for all metrics revealed no significant outliers that could skew the measured ρ values. Similar scatter patterns were observed with the metrics not shown in the figure. There was a prominent difference between the PUT and PBB scatter plots corresponding to the same metric. For example, the INF1 data appeared to be distributed randomly in PUT, with no difference between the control and PD groups; in PBB, there was a clear separation between the groups and an upward trend in the metric value with DD. The converse was observed with ACRL and other metrics. The plots for CLS had an upward trend in PUT and a downward trend in PBB. The MI metrics and the mean BPND had greater scatter/variability and worse control/PD separation in PBB.

Figure 7.1: The MRI-based ROIs of the caudate (CAU), putamen (PUT), and striatum (STR), and the corresponding BB ROIs (CBB, PBB, SBB) for a control subject.

Figure 7.2: Examples of the PET-defined and MRI-defined directions used in the GLCM computation for one of the PD subjects. The color of the scatter points represents the relative voxel value in the PUT and CAU ROIs.

The absolute values of ρ measured between the image metrics and DD are plotted in Fig. 7.4. In different brain structures, different HF metrics had significant correlations with DD. The type of the used ROIs also affected the correlation strengths and significance levels. The most consistent HF metric in terms of the correlation significance was CLS (p<0.01 with most structures/ROIs except CAU), although the value of ρ for CLS was positive in PUT and negative in PBB. Significant correlations with relatively high ρ values were measured for INF1 and INF2 in PBB. The values of ρ for BPND and the MI were lower in the BB ROIs than in the MRI ROIs, consistent with the higher data variability observed in the scatter plots. In CBB and SBB the values of ρ for BPND were comparable to those measured with the HF metrics CLS, ENR and ENT.

The variability of the measured ρ values that resulted from the random rotational perturbation of the PUT ROI is plotted in Fig. 7.5. For the HF metrics the variability of ρ was lower in PBB than in PUT. For the MI and BPND, the variability of ρ in PUT and PBB was approximately equal. The MI had the lowest variability of ρ among all tested metrics.

The examination of the GLCMs revealed that the MRI and BB ROIs produced gray value distributions that were considerably different. The GLCMs computed for all PD subjects and averaged are shown in Fig. 7.6. In all structures and ROIs, the peaks of the gray value distributions were located in the lower half of the gray levels (top left matrix quarter). Compared to the MRI ROIs, the BB ROIs had gray value distributions that were narrower along the main diagonal, and their maxima were shifted in the direction of lower gray levels.

Figure 7.3: Image metrics computed in PUT and PBB plotted against DD. Solid horizontal lines represent control subjects, and dots represent PD subjects. The GLCMs were computed in the anteroposterior direction using the GLCM distance equal to 3 voxels.

7.3.2 Analysis of Discrimination Between Control and PD Subjects

The SI values measured between the PD and control subject groups are shown in Table 7.1. A relatively high discrimination was measured with the HF metrics in the PBB, STR and SBB ROIs, similar to that of the mean BPND. The HF metrics that are not included in the table had low SI in all structures and ROIs.

The scatter plots in Fig. 7.3 demonstrate the variable consistency of the metric values in the control subject group. In general, the correlation between the metric and DD and the SI between the controls and PD were not indicative of the consistency of the metric value for the control subjects. For example, J1 in PUT had a higher correlation with DD than CLP in PBB, but the latter better separated the control and PD groups.

7.3.3 Effect of GLCM Direction on Measured Correlation Values

The ρ values did not depend on the GLCM direction in a consistent manner. However, the data relating the GLCM direction to ρ in different ROIs contained two prominent patterns. First, the correlations were in some cases found to be weaker in the mediolateral direction, compared to the other directions. Second, the direction had a greater impact on the metrics for which the values of ρ were below approximately 0.5.

Figure 7.4: The absolute values of ρ measured between the image metrics and DD.

Figure 7.5: Box plots of the ρ value distributions obtained by rotating the PUT ROIs by a random angle. Each box plot represents 50 independent data realizations.

Figure 7.6: Visualization of the average GLCMs computed from the DTBZ images of PD subjects. Top row corresponds to the MRI-based ROIs, bottom row corresponds to the BB ROIs.
       ACRL  CRL   CLP   CLS   DIS   ENR   INF1  INF2  J1    J2    BPND
CAU    0.81  0.66  0.57  0.79  0.57  0.64  0.74  0.74  0.79  0.79  0.85
PUT    0.83  0.74  0.70  0.91  0.68  0.60  0.66  0.68  0.83  0.89  1.00
CBB    0.66  0.70  0.85  0.87  0.72  0.77  0.64  0.79  0.79  0.70  0.83
PBB    0.77  0.94  1.00  0.96  0.89  0.66  0.98  0.96  0.81  0.81  0.96
STR    0.98  0.70  0.72  0.91  0.72  0.89  0.72  0.74  0.85  0.85  0.96
SBB    0.77  0.91  1.00  0.98  0.74  0.91  0.87  0.72  0.87  0.89  0.89

Table 7.1: SI measured between the control and PD subject groups.

The most substantial differences in ρ between different directions were found in PUT and SBB. The values of ρ measured in PUT are given in Table 7.2, for the six metrics with the greatest variability of ρ between the anteroposterior, mediolateral and inferosuperior directions. The metrics that were affected the most were ENT and SENT: the mediolateral and PET-defined directions produced insignificant correlations, while the anteroposterior, inferosuperior, MRI-based and averaged directions produced significant correlations.

The values of ρ measured in SBB are given in Table 7.3. The greatest effect of the GLCM direction on ρ was observed with CRL, where the mediolateral direction produced a significantly lower correlation compared to the other directions. The mediolateral direction also produced weaker correlations with the other metrics shown in the table.

With other ROI/metric combinations, there were generally no pronounced differences in ρ between different directions. For example, in PBB ρ changed by at most 0.07 (INF1). In CBB, the metrics CLP, DIS and NHOM had relatively weaker correlations in the mediolateral direction; in STR, the metrics ENR and SENT also had relatively weaker correlations in the mediolateral direction.

Direction  ACRL     CLS     ENR     ENT      MPR     SENT
AP         -0.65**  0.48**  0.55**  -0.44*   0.49**  -0.48**
ML         -0.57**  0.56**  0.45**  -0.23    0.58**  -0.21
IS         -0.63**  0.49**  0.54**  -0.48**  0.58**  -0.42*
AVER       -0.63**  0.51**  0.57**  -0.46**  0.60**  -0.42*
MRI        -0.65**  0.47**  0.50**  -0.37*   0.50**  -0.44*
PET        -0.62**  0.56**  0.43*   -0.27    0.54**  -0.15

* p < 0.05; ** p < 0.01. AP – anteroposterior; ML – mediolateral; IS – inferosuperior; AVER – direction-averaged GLCM.

Table 7.2: Values of ρ measured using different GLCM directions in PUT.

Direction  CTR     CRL      CLP      DIS     INF1    NHOM
AP         0.38*   -0.47**  -0.54**  0.42*   0.29    -0.42*
ML         -0.06   -0.23    -0.51**  0.22    0.21    -0.27
IS         0.36*   -0.53**  -0.58**  0.42*   0.40*   -0.42*
AVER       0.28    -0.41*   -0.53**  0.37*   0.29    -0.38*
PET        0.34*   -0.51**  -0.59**  0.40*   0.48**  -0.42*

* p < 0.05; ** p < 0.01. AP – anteroposterior; ML – mediolateral; IS – inferosuperior; AVER – direction-averaged GLCM.

Table 7.3: Values of ρ measured using different GLCM directions in SBB.

It was expected that the PET-based direction would be similar to the anteroposterior direction. Contrary to the expectation, in most subjects the average activity gradient pointed approximately in the mediolateral direction. The MRI-defined direction was similar to the anteroposterior direction, as expected.

The GLCMs computed in PUT along the anteroposterior, mediolateral and inferosuperior directions are shown in Fig. 7.7. Compared to the anteroposterior and inferosuperior directions, the mediolateral GLCM had a higher fraction of non-diagonal counts, and the distribution of counts along the diagonal was more uniform.

7.3.4 Effect of GLCM Distance on Measured Correlation Values

The effect of the GLCM distance on ρ was marginal with most metrics and ROIs, with the exception of ENR, ENT, MPR and SENT computed in PUT. The corresponding plots of ρ against the GLCM distance are shown in Fig. 7.8.
The most impacted metric was ENT, for which the correlation with DD was significantly stronger with the greater GLCM distances (4, 5 voxels).

Figure 7.7: The PD subject-averaged GLCMs computed in PUT along different directions. AP – anteroposterior, ML – mediolateral, IS – inferosuperior. The used GLCM distance was 3 voxels.

The average (across PD subjects) PUT GLCMs computed using distances of 1 and 5 voxels are shown in Fig. 7.9. In the 1-voxel GLCM, the majority of the gray value counts were along the diagonal, and the total number of counts was approximately 7000. In the 5-voxel GLCM, the counts were distributed more widely among the non-diagonal elements, and the total number of counts was approximately 4000.

7.4 Discussion

The results demonstrate that the HF metrics evaluated from the DTBZ images of the striatum correlate significantly (p<0.01) with the clinical PD duration. The highest correlation values were obtained with ACRL (ρ=0.64 in PUT), ENR (ρ=0.57 in PUT), SAVG (ρ=0.58 in PUT), INF2 (ρ=0.59 in PBB), CLP (ρ=0.55 in PBB), and CLS (ρ=0.62 in SBB). Since the HF metrics were computed from the normalized gray value images, the results imply that in PD subjects the spatial pattern of the tracer distribution alone is a statistically significant predictor of disease progression. Although the strongest correlation was measured with the ROI-mean BPND (ρ=0.79 in PUT), the correlations measured with the HF metrics are relevant since, compared to BPND, they capture a different type of information from the images. A good discrimination between the control and PD subjects was achieved using the HF metrics computed in the BB ROIs. This demonstrates that the HF metrics computed from 30-minute activity concentration images can potentially be used in the PET-based diagnosis of the disease, without the need to perform MRI imaging, prolonged dynamic scanning, or measurement of the arterial input function. While this outcome was achieved with DTBZ, the results warrant similar types of investigation with other tracers.

Figure 7.8: Plots of ρ against the GLCM distance for HF metrics computed in PUT and PBB. Direction-averaged GLCMs were used.

Figure 7.9: Direction-averaged GLCMs computed in PUT using GLCM distances equal to 1 and 5 voxels. The shown GLCMs were computed by averaging the GLCMs of all PD subjects.

One of the major findings of this chapter is that different HF metrics correlated with DD depending on the ROI type used for a given structure. On the one hand, this mirrors the findings of Chapter 6 in that the correlation between the image-derived and clinical metrics strongly depends on the ROI definition method for a particular anatomical structure. On the other hand, this finding indicates that different disease-related characteristics of the spatial pattern were captured by the MRI-based and BB ROIs.
From the application point of view, when using BB ROIs one has to choose the appropriate metrics (such as INF1 and INF2) to track disease progression. The MI metrics and the mean BPND also had significant (p<0.01) correlations with DD when evaluated using the BB ROIs, albeit the values of the correlation coefficient were lower compared to the MRI-based ROIs. Overall, these observations point to the conclusion that simpler BB ROIs that encompass the regions of specific tracer uptake can be used for certain investigative tasks when an anatomic reference is not available. An additional benefit of using this type of ROI may be the greater robustness of the metric values with respect to variations in the ROI orientation, as demonstrated by the box plots in Fig. 7.5.

The BB ROIs are expected to be relatively easy to define in the DTBZ images even without using an anatomic reference. The BB ROIs used in this work were defined using the MRI ROIs. However, when anatomy images are not available, it should be relatively straightforward to place box-like ROIs algorithmically, at least with tracers that have a localized binding pattern. Since the adult brain has a relatively consistent size among individuals, the box ROIs can be set to have the same size for different subjects. For example, an ROI can be placed at the location of the maximum of the convolution between the image and a binary 3D window. Additional investigation performed but not reported in this thesis indeed showed that the ROIs placed using this method produced results very similar to the MRI-guided BB ROIs.
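A sketch of this placement rule (the window size and function name are illustrative):

    % Place a fixed-size box ROI where the sum of image values inside the
    % box is maximal, i.e. at the maximum of the convolution of the image
    % with a binary 3D window.
    % img: 3D image; boxSize: [n1 n2 n3] window size in voxels.
    function roi = placeBoxROI(img, boxSize)
        score = convn(img, ones(boxSize), 'valid');    % sliding-window sums
        [~, imax] = max(score(:));
        [c1, c2, c3] = ind2sub(size(score), imax);     % corner of the best box
        roi = false(size(img));
        roi(c1:c1+boxSize(1)-1, c2:c2+boxSize(2)-1, c3:c3+boxSize(3)-1) = true;
    end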
In addition to the ROI definition method, the GLCM direction and distance also influenced the measured correlation values, albeit to a lesser extent. In terms of choosing the optimal GLCM distance to maximize the correlation, there exists a trade-off between using shorter and longer distances. On the one hand, the results imply that greater distances may improve the correlation for ENT and other HF metrics, at the cost of fewer GLCM counts. On the other hand, using shorter distances may be preferred for the analysis of small regions; here one must take into account that in PET images the neighboring voxels are strongly correlated, and using shorter distances may therefore reduce the sensitivity of the GLCM to disease manifestations. As a compromise, a GLCM distance equal to 3 voxels (3.66 mm) seems to be appropriate for future investigations. This figure must be adjusted according to the image resolution and voxel size.

With respect to the GLCM direction, the HF metrics tended to have stronger correlations with DD in the anteroposterior and inferosuperior directions, and weaker correlations in the mediolateral direction (this pattern was not consistent across all ROIs). While the mean gradient in the PET images was approximately along the mediolateral direction, the anteroposterior and inferosuperior directions are the ones related to the PD-associated dopaminergic function loss in the putamen. This suggests that at least some HF metrics captured the spatial pattern related to disease progression. The correlation values were not reduced appreciably when direction-averaged GLCMs were used. Therefore, in future studies it seems appropriate to use either the direction-averaged GLCM, or the direction along which the greatest functional changes are expected or were previously observed. Using the subject-specific MRI-defined directions did not provide additional benefit in terms of the correlation with DD.

While the correlation between the HF metrics and DD was found to be significant, the trends observed in the data require explanation. For example, it is not clear why CLS had a positive correlation with DD in PUT and a negative correlation in PBB, or why INF1 and INF2 only had a significant correlation with DD in PBB. Although a tentative interpretation of the trends can be made based on the knowledge of the gross neurodegeneration pattern in PD, a more thorough analysis of the metric behavior with respect to DD requires the simultaneous consideration of the image properties and the terms of the equations that define the HF; image histograms and GLCMs need to be compared between different subjects and ROI types. This analysis is performed in the next chapter.

Chapter 8

Analysis of the Metric Behavior with Disease Progression

8.1 Introduction

The results of Chapter 7 revealed a discrepancy between the expected and measured dependencies of the metric values on DD. According to the definitions given in Sections 5.4.4 and 5.4.3, the values of the HF and MI metrics are expected to reflect the heterogeneity of the images. The heterogeneity of the DTBZ binding in the putamen increases in the early stages of PD, as the rostro-caudal gradient in the tracer binding becomes more pronounced with time. On the other hand, the heterogeneity decreases in the advanced stages of PD, as the tracer binding levels in the putamen approach those of the background (non-specific binding). Therefore, the HF and MI metrics were expected to either increase or decrease with low DD, and do the opposite with high DD. However, the measurement of the DTBZ-derived metric values at different DDs did not reveal such patterns.

The primary goal of this chapter is to investigate the reason why the measured metric behavior with disease progression differs from the expected behavior. The secondary goal is to explain the measured differences in the metric (and ρ) values between the MRI and BB ROIs. Analysis of the relationship between the voxel values and image metrics could in principle be performed based solely on a careful consideration of the acquired DTBZ images. However, there are three factors that may hinder such an approach:

• the limited range of the represented disease phenotypes, i.e. the lack of images that represent the pre-symptomatic stage and of images that correspond to DD greater than 13 years;

• the natural variability in the courses of disease progression between subjects;

• image noise that may propagate into the metric values.

A novel approach to the analysis of the image metric behavior with disease progression is taken in this chapter. An analytic model is constructed that describes the spatio-temporal pattern of the progressive dopaminergic function loss in the putamens of PD subjects. The model is data-driven: its parameters are determined by fitting the model to line-profiles of the DTBZ activity ratio (AR) measured in the putamens of PD subjects. The fitted model is used to generate a temporal sequence of "synthetic" AR images, which are intended to model the most relevant properties of the acquired DTBZ images. The synthetic images are used to predict the image metric behavior with disease progression, with zero natural variability, an extended range of (simulated) DD, and controlled image noise.
The predicted (simulated) metric behavior is compared to the measured behavior in order to estimate the influence of various confounding factors, and to explain the lack of "U"-shaped trends in the measured data. Additionally, the model is used to predict and explain the differences in the metric values between the MRI-based and BB ROIs.

In the first part of the chapter, the analytic model of the dopaminergic function loss is established and the synthetic AR images are constructed: in Section 8.2.1, the measurement of the AR line-profiles is described; in Section 8.2.2, the model is fitted to the profile data; in Section 8.2.3, the model is used to generate synthetic AR images; Section 8.2.4 describes how ROIs are defined in the synthetic images. The second part of the chapter focuses on the analysis of the measured and simulated HF and MI metric values: in Sections 8.3.1 and 8.3.2, simulated and measured image histograms and GLCMs are compared; in Sections 8.4 and 8.5, the trends in the metric value change with DD are compared between the acquired and synthetic images. A detailed analysis of the trends is performed with several chosen metrics. The results are summarized and discussed in Section 8.6.

8.2 Development of a Model for Tracer Binding Loss

8.2.1 Measurement of AR Profiles in the Putamen

Profiles of the AR in the putamens of control and PD subjects were measured along a path that approximately corresponded to the anteroposterior anatomical axis of the putamen, as derived from the MR images. The coordinates of the vertices of the path were obtained from the topological skeleton (medial axis) of the putamen, separately for each subject and side. To account for possible mismatch (in registration or shape) between the path and the high activity regions (e.g. due to imperfect PET/MRI registration), the x (mediolateral) and z (inferosuperior) coordinates of the path vertices were re-adjusted (weighted) in the inferosuperior and mediolateral directions according to the local AR distribution. In other words, the profiles were defined to pass through the maximum of the AR in the vicinity of the MRI-derived medial axis of the putamen.

The measured AR profiles are shown in Fig. 8.1. In control subjects, the AR ranged between 4 and 6 and was approximately uniform along the putamen. In PD subjects with DD up to 4 years, the AR profiles on the less affected side appeared to decrease linearly in the anteroposterior direction, and the AR values were above background along the entire length of the putamen. On the other hand, with DD between 5 and 10 years, the AR profiles resembled exponential functions, with values close to that of the background (AR = 1) in the posterior putamen. Beyond 10 years, the AR profiles were close to the background.

The profiles demonstrate that by the time the first clinical PD symptoms appear, there is already a substantial reduction of the dopaminergic function in the putamen, as revealed by the DTBZ binding. Thus, the spatio-temporal pattern of neurodegeneration in the pre-symptomatic stages of the disease is unclear.
The profiles suggest that prior to the symptoms/diagnosis, the reduction of the dopaminergic function (DTBZ binding) occurs everywhere in the putamen, with a constant gradient in the anteroposterior direction (greatest reduction in the posterior putamen). This assumption is used to establish a mathematical model of the spatio-temporal dopaminergic function loss that is described in the following section.

8.2.2 Analytical Model Fitting

It was previously reported, based on the analysis of both longitudinal and cross-sectional PD subject data [195], that the disease-associated change of the striatal ROI-mean BPND with time progresses according to an exponential law. The profiles in Fig. 8.1 suggest that an exponential function may also be an appropriate choice for modeling the spatial component of the pattern, i.e. the observed anteroposterior gradient. One of the original propositions of this thesis is to combine the spatial and temporal components of neurodegeneration in a single functional form.

Figure 8.1: Examples of AR profiles measured in the putamens of control and PD subjects. The profiles are sorted according to DD. Zero corresponds to the anterior side of the putamen, and the background AR is equal to 1.

The simplest general model that combines the temporal and spatial components (coordinates) in the exponent is given by the expression

    A_m(x, t_d) = A_0 e^{-a(x+b)(t_d+c)} + B_0    (8.1)

where A_m(x, t_d) is the modeled AR (ARm) along the longitudinal axis of the putamen, x is the distance along the axis, t_d is the DD since clinical onset, A_0 is the AR in the healthy state, B_0 is the background (non-specific) AR, and a, b and c are the fitting terms. The model of the dopaminergic function loss given by this equation makes the following assumptions:

• the dopaminergic function first becomes affected at time t_m = t_d + c = 0, where t_m is the modeled DD (DDm); note that DDm may be different from DD;

• when the disease first manifests in the striatum (DDm = 0), the dopaminergic function is reduced in all parts of the putamen, with a greater reduction on the posterior side.

With respect to the second assumption, there have been no conclusive studies so far that would indicate whether the early manifestation of the disease occurs throughout the putamen or in specific sub-regions. Additionally, it is not known whether the disease manifests on the left and right sides of the striatum simultaneously and progresses at different rates, or if there is a delay in disease progression on the less affected side. Therefore, the difference in the AR between the less and more affected sides of the striatum is not modeled in this work.

The parameters a, b and c were computed by fitting the surface given by Eq. 8.1 to the combined AR data from all PD subjects. The scatter plot of the data is shown in Fig. 8.2A. The data points represent the combined AR profiles measured in the less affected putamen. The first 4 data points in each profile were removed to account for the reduction of the AR near the anterior edge of the putamen, and the remaining data were re-scaled to 24 voxels (the average putamen size in the anteroposterior direction). The values of the parameters a, b and c were determined using non-linear least squares fitting with the trust region algorithm, with the constraints A_0 = 4, B_0 = 1, 0 < c < 20.
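A sketch of such a fit, assuming the pooled profile data are stored as an m×2 array xt = [x, t_d] with the corresponding AR values in ar (lsqcurvefit with bounds uses a trust-region-reflective algorithm; the exact fitting routine used in the thesis is not specified beyond this):

    % Least-squares fit of the model of Eq. 8.1 with A0 = 4 and B0 = 1 fixed.
    A0 = 4; B0 = 1;
    model = @(p, xt) A0 * exp(-p(1) * (xt(:,1) + p(2)) .* (xt(:,2) + p(3))) + B0;
    p0 = [0.005, 10, 10];              % initial guess for [a, b, c]
    lb = [-Inf, -Inf, 0];              % only c was constrained in the text:
    ub = [ Inf,  Inf, 20];             % 0 < c < 20
    p  = lsqcurvefit(model, p0, xt, ar, lb, ub);   % Optimization Toolbox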
The optimal values of the parameters (with 95% confidence intervals) were a = 5.84×10⁻³ (5.04×10⁻³, 6.65×10⁻³), b = 8.91 (7.62, 10.2), and c = 7.66 (6.55, 8.76). The R² of the fit was 0.6, and the RMS error was 0.47. Thus, the fitted model was defined by the equation

    A(x, t_d) = 4 e^{-5.8×10⁻³ (x+8.9)(t_d+7.6)} + 1    (8.2)

The surface given by Eq. 8.2 is plotted in Fig. 8.2B, and the residuals of the fit are plotted in Fig. 8.2C. In general, the residuals were distributed evenly around zero in the considered domain, with a small bias towards negative residuals in the central region (DD between 5 and 10 years and distances between 10 and 20 voxels). The marginalized distribution of the residuals had mean 4.7×10⁻³, median 3.6×10⁻², standard deviation 0.46, skewness 0.41, and kurtosis 1.28. These figures demonstrate that the residual distribution was close to the normal distribution.

The plots of ARm given by Eq. 8.2 that correspond to fixed values of DDm are shown in Fig. 8.2D. According to the fit, the difference between the modeled and clinically measured DD is 7.6 years. Thus, the fit suggests that the first dopaminergic changes in the brain begin to take place on average 7.6 years prior to the clinical symptoms (based on the analysis of the less affected side of the striatum). The range of DD between 0 and 13 years defined by the PD subject sample approximately corresponds to the range of DDm between 7.6 and 20.6 years; the graphs of ARm that correspond to this range are shown in the middle row of Fig. 8.2D, and can be compared to the experimentally measured AR profiles in Fig. 8.1. The simulated profiles and those measured on the less affected side were qualitatively and quantitatively similar.

Figure 8.2: A. Scatter plot of the AR profile data combined from 37 PD subjects. B. Surface given by Eq. 8.2 with respect to the data. C. Scatter plot and marginalized histogram of the residuals, with an overlaid normal distribution of equivalent variance. D. Plots of ARm for different values of DDm; the middle row approximately corresponds to the clinical DD.

8.2.3 Procedure to Generate Synthetic AR Images

A temporal sequence of synthetic ARm images was generated using Eq. 8.2 that aimed to simulate the loss of the dopaminergic function in the putamen, while replicating the characteristics of the acquired DTBZ PET images. The size of the synthetic images was 72×24×48 voxels, and the voxel size was set to 1.0 mm (the same as the acquired images).

To determine the average size of the putamen, the MRI-derived ROIs of the left and right putamens in the control subjects were rigidly co-registered to a chosen template (the left putamen of one of the control subjects). The right putamens were mirrored in the mediolateral direction prior to the registration.
The binary average ROI was computed by thresholding the mean of the co-registered putamen ROIs:

    PUT_{AVG} = \frac{\sum_{n=1}^{N_{HC}} LP_n + \sum_{n=1}^{N_{HC}} RP_n}{2 N_{HC}} > 0.5    (8.3)

where PUT_AVG is the binary image of the average putamen ROI, LP_n and RP_n are the binary ROI images of the left and right putamens, respectively, for the n-th control subject (registered to a common template), and N_HC = 10 is the total number of control subjects. The average putamen ROI is shown in Fig. 8.3. The average putamen volume was 2940±398 voxels. Taking into consideration the volume and the shape of the average putamen ROI, the linear dimensions of the average putamen were taken to be 8×16×24 voxels in the mediolateral, inferosuperior, and anteroposterior directions, respectively (total volume 3072 voxels).

Figure 8.3: Visualization of the average putamen ROI surface from different directions.

In the synthetic images, the modeled rectangular region of functionally active tissue (the region of specific tracer binding) was set to have size 8×16×24 voxels and was located at the center of the image. This region represented (modeled) the putamen. Voxels outside of the region were modeled to represent regions of non-specific binding, i.e. to represent the background. The value of ARm assigned to the background was 1.0. Within the region of specific binding, the ARm values were assigned using Eq. 8.2, with x equal to the voxel number starting at the anterior boundary of the region. The resulting images are shown in Fig. 8.4 (top). The total number of images in the sequence was 100, covering the range of DDm from 0 to 30 years. The synthetic images that correspond to DDm < 7.6 years (pre-symptomatic disease) and DDm > 20.6 years (advanced disease) extrapolate the range of clinical DD in the studied sample of PD subjects. For comparison, examples of DTBZ BPND images of subjects with different DD are shown in Fig. 5.1. It is important to emphasize that the acquired images represent cross-sectional data, while the synthetic images attempt to model the longitudinal dopaminergic function loss in the putamen of a single subject (on the less affected side of the brain).

Poisson noise was added to the synthetic images to match the SNR (ROI mean divided by standard deviation) of the acquired DTBZ images. The mean background SNR in the acquired images of PD subjects was 3.0±0.2. In the synthetic images, the background voxel values were randomly drawn from the Poisson distribution

    p(k, \lambda) = \frac{\lambda^k e^{-\lambda}}{k!}    (8.4)

where λ was set to the voxel's ARm value (Fig. 8.4, middle); the resulting SNR of the background was 1.0 (this follows from the SNR being equal to √λ). After smoothing the noisy images with a 1.5-voxel (FWHM) Gaussian filter (filter size 3×3×3 voxels), the SNR of the background was ~3.0 (Fig. 8.4, bottom). The noise in the region of specific tracer binding was simulated using the same method. The synthetic images with added noise and smoothing were used to compute the texture metric values and to analyze the behavior of the texture metrics with different types of ROIs.
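The generation of one noisy, smoothed synthetic image can be sketched as follows (the assignment of image axes and the exact Gaussian kernel are assumptions; poissrnd requires the Statistics Toolbox):

    % One synthetic ARm image for a modeled disease duration DDm = td + 7.6.
    ddm = 10;                                   % modeled disease duration (years)
    img = ones(72, 24, 48);                     % background ARm = 1
    x   = (1:24)';                              % position along the anteroposterior axis
    ar  = 4 * exp(-5.8e-3 * (x + 8.9) * ddm) + 1;   % Eq. 8.2 with td + 7.6 = DDm
    for k = 1:24                                % 24x16x8-voxel region at image center
        img(24 + k, 5:20, 21:28) = ar(k);       % axis assignment is an assumption
    end
    img = poissrnd(img);                        % Poisson noise, lambda = ARm (Eq. 8.4)
    sig = 1.5 / 2.3548;                         % 1.5-voxel FWHM -> sigma
    [g1, g2, g3] = ndgrid(-1:1);                % 3x3x3 Gaussian kernel support
    g = exp(-(g1.^2 + g2.^2 + g3.^2) / (2 * sig^2));
    img = convn(img, g / sum(g(:)), 'same');    % smoothed noisy synthetic image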
8.2.4 ROI Definition in Synthetic AR Images

The image metrics were computed from the synthetic ARm images using two ROIs. The first ROI, denoted PUTm, modeled the PUT ROI and was defined tightly around the region of functionally active voxels (8×16×24 voxels) (Fig. 8.4). The second ROI, denoted PBBm, modeled the PBB ROIs. The PBBm ROI was obtained by uniformly expanding the PUTm ROI and had size 15×23×31 voxels (volume 10695 voxels). The choice of this size was based on the measured (average) ratio of the PBB ROI volume to the volume of the PUT ROIs, which was 4.6. However, one must also take into account that the PBB ROIs included part of the caudate, where the dopaminergic function is relatively unaffected by the disease. The average value of the adjusted ratio |PBB| / |STR ∩ PBB| (where STR = PUT ∪ CAU) was 3.3. The ratio of the chosen PBBm and PUTm volumes was 3.48.

Figure 8.4: Temporal sequence of synthetic images that model the dopaminergic function loss, as revealed by imaging with DTBZ, in the less affected putamen. The images were generated using the model given by Eq. 8.2. Top – images without added noise, middle – images with simulated Poisson noise, bottom – noisy images smoothed using a Gaussian filter.

8.3 Model Validation

Values of the MI and HF metrics, as well as the distribution of counts in the GLCM, depend on the overall distribution of voxel intensities in the analyzed image regions. Therefore, the model and the method to generate synthetic AR images were validated by a) computing image histograms and GLCMs from the synthetic images (using the PUTm and PBBm ROIs), and b) comparing them to the image histograms and GLCMs computed from the acquired AR images (using the PUT and PBB ROIs).

8.3.1 Comparison of Measured and Simulated Image Histograms

Box plots of the measured AR values for all control and PD subjects are shown in Fig. 8.5A, comparing the PUT and PBB ROIs. With the PUT ROI, the AR distributions in the control and PD subjects differed mainly in the median values and the widths. With the PBB ROIs, the most prominent difference between the PD and control distributions was in the width and skewness, while the median values were approximately the same. With the PD subjects, the minima of the distributions remained approximately the same with different DD, and the maxima diminished with DD.

Detailed AR histograms for a representative set of control and PD subjects are plotted in Fig. 8.5B. The histograms obtained using the PUT and PBB ROIs were considerably different. In the PUT ROIs, the mean values of the distributions diminished with DD, and the distributions were generally even-tailed. On the contrary, in the PBB ROIs, the distributions were positively skewed (long-tailed in the direction of higher values), and the skewness reduced with DD. The medians and the modes of the distributions were approximately the same with different DD.

The histograms obtained from the synthetic ARm images are shown in Fig. 8.6A. The figure demonstrates that the distributions of ARm at different DDm were qualitatively similar to those obtained experimentally. The simulated histograms for PUTm were even-tailed with low and high DDm, and had positive skewness in the mid-range of DDm — this pattern was not clearly visible in the measured AR histograms.

8.3.2 Comparison of Measured and Simulated GLCMs

The average PUT and PBB GLCMs for the control and two groups of PD subjects are shown in Fig. 8.7.

Figure 8.5: A. Box plots of the AR values for all control and PD subjects, obtained with the PUT and PBB ROIs. The box plots for PD subjects are arranged according to the DD. B. Histograms of the ARs for a representative set of control and PD subjects. The histograms for PD subjects are arranged according to the DD.
8.3.2 Comparison of Measured and Simulated GLCMs

The average PUT and PBB GLCMs for control and two groups of PD subjects are shown in Fig. 8.7. The most prominent difference in PUT GLCMs between the control and PD subjects was in the location of the peak of the distribution along the diagonal; the widths of the distributions were approximately the same. On the other hand, between the two PD groups, the gray value distribution was narrower and less (positively) skewed with higher DD.

In PBB, the main difference between the GLCMs for the different groups was in the width of the gray value distributions: the width increased with DD, and the location of the peak remained approximately the same. This observation may appear counter-intuitive at first, given the fact that with disease progression the distribution of DTBZ AR should become more uniform as voxel values approach the background. However, the maxima of the AR distributions in PD subjects reduced with DD, as demonstrated in Fig. 8.5B. The GLCMs were computed from the [min,max]-normalized gray value images. The preservation of the minimum AR, and the reduction of the maximum AR, resulted in a wider GLCM distribution for PD subjects compared to control subjects, although on the absolute scale the AR values were lower in PD subjects.

Figure 8.6: A. Histograms of ARm obtained from the synthetic images using PUTm and PBBm ROIs. B. GLCMs obtained from the synthetic images using PUTm and PBBm ROIs and used to compute the HF metrics.

Figure 8.7: Images of the average GLCMs computed from the images of control subjects, PD subjects with DD 0–6 years, and PD subjects with DD 7–13 years. Top row corresponds to PUT, bottom row to PBB.

The GLCMs computed from the synthetic images are shown in Fig. 8.6B. The PUTm-based GLCM changed with DDm according to two different trends. In the range 0<DDm<19.6 the peak of the GLCM shifted towards lower gray levels with higher DDm, and in the range 19.6<DDm<30 the peak shifted towards higher gray levels and the width of the distribution broadened with higher DDm. The PBBm GLCMs had broader distributions with greater DDm, and the location of the peak was closer to the middle gray levels.

The simulated GLCMs changed with DDm similarly to the GLCMs computed from the acquired images, at least in terms of the major visual trends. This provides grounds to use the simulated GLCMs, and the HF metrics derived from them, as a reference in the analysis of the measured HF metric values at different DDs.
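A minimal sketch of one way such a GLCM can be computed from a [min,max]-normalized 3D ROI is given below. The 16 gray levels match the GLCMs shown in Fig. 8.7, while the single-offset accumulation, the symmetrization, and the function name are assumptions rather than the exact implementation used in this work.

```python
import numpy as np

def glcm_3d(image, roi_mask, offset=(0, 0, 1), n_gray=16):
    """Gray-level co-occurrence matrix for a 3D ROI.

    The image is [min,max]-normalized over the ROI and quantized to
    n_gray levels (0..n_gray-1 internally); co-occurrences are
    accumulated for one voxel offset and symmetrized.
    """
    vals = image[roi_mask > 0]
    lo, hi = vals.min(), vals.max()
    q = np.zeros(image.shape, dtype=int)
    # [min,max] normalization (Eq. 8.6), then quantization
    q[roi_mask > 0] = np.minimum(
        (n_gray * (image[roi_mask > 0] - lo) / (hi - lo)).astype(int),
        n_gray - 1)

    glcm = np.zeros((n_gray, n_gray))
    dz, dy, dx = offset
    for z, y, x in np.argwhere(roi_mask > 0):
        z2, y2, x2 = z + dz, y + dy, x + dx
        if (0 <= z2 < image.shape[0] and 0 <= y2 < image.shape[1]
                and 0 <= x2 < image.shape[2] and roi_mask[z2, y2, x2] > 0):
            glcm[q[z, y, x], q[z2, y2, x2]] += 1
    glcm += glcm.T                # symmetric GLCM
    return glcm / glcm.sum()      # normalize counts to probabilities
```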
8.4 Comparison of Measured and Model-predicted Metric Values

Image metrics computed from the synthetic images using the PUTm ROI are plotted in Fig. 8.8, in comparison to the metrics computed from the DTBZ AR images using PUT ROIs. The simulated plots were computed from one particular instance of the simulated image sequence (a single noise realization). The range of clinical DD in the PD subjects (0–13 years) approximately corresponds to the range of DDm between 7.6 and 20.6 years (as suggested by the fit). Thus, the experimental scatter plots for PD subjects must be compared to the simulated metric values within this range (marked by the dashed lines). With control subjects, the measured metric values correspond to the simulated values at DDm=0.

All simulated and experimental metric values agreed at least in the order of magnitude. Additionally, in the following cases there was agreement in the upward or downward trend (with DD and DDm):

• ACRL, SAVG, mean AR (downward trend);
• SENT (possible downward trend);
• CLS, J1 (upward trend).

The simulated and measured behaviors of the remaining metrics are more difficult to compare; a detailed analysis of the metric behavior is provided in the next section. The measured separation between the PD and control subject groups did not agree with the simulated data for the metrics CRL, CLP, INF2, SENT, and COV: the metric values for control subjects were similar to those of PD subjects.

Metrics computed from the synthetic images using the PBBm ROI are plotted in Fig. 8.9, in comparison to the metrics computed from the DTBZ AR images using PBB ROIs. Compared to the PUTm ROI, fewer metrics had the "U"-type pattern. The ROI-associated change in the metric correlation with DD shown by the bar plots in Fig. 7.4 was reflected in the simulated metric values. Specifically,

• the simulated graphs for the metrics CRL, CLP, HOM, INF1, INF2, and SENT were "U"-shaped with PUTm and nearly monotonic with PBBm ROIs. For these metrics, the correlation coefficient ρ measured with the acquired data was higher with PBB ROIs;
• conversely, the simulated graphs for the metrics ACRL and CTR were approximately monotonic with PUTm and were "U"-shaped with PBBm ROIs. For these metrics, the values of ρ were higher with PUT ROIs;
• the simulated graph of CLS had an upward trend with PUTm and a downward trend with PBBm, consistent with the change of the sign of ρ in the acquired data.

With PBBm and PBB ROIs, the simulated metric values for control subjects agreed with the measured data within error. Compared to the PUTm and PUT ROIs, there was a better match between the simulated and measured metric values for control subjects.

8.5 Model-based Analysis of the Metric Behavior with Disease Progression

Comparison of Trends in Measured and Simulated Data

Based on the known spatio-temporal pattern of neurodegeneration, it was expected that metrics that capture the variance of the AR values, including the HF metrics, would have an upward trend followed by a downward trend with respect to the clinical disease severity or duration. However, such "U-shaped" behavior was not conclusively observed in the measured graphs. The simulated graphs in Figs. 8.8 and 8.9 provide an explanation for this discrepancy.

The simulated plots indeed demonstrate the non-linear behavior of the image metrics with respect to DDm. Several HF metrics had the expected "U-shaped" graphs. However, according to the employed model, the range of clinical DDs represented in the PD subject group corresponds to only a sub-range of DDm (7.6 to 20.6 years), and in this sub-range the "U"-type behavior was much less pronounced. This may explain the lack of a similar pattern in the experimental data.
On the other hand, a pronounced (compared to the noise) monotonic upward or downward trend in the simulated data in the range 7.6<DDm<20.6 was indicative of a significant correlation between the measured metric values and DD.

The simulated graphs for PUTm contain regions where the metric values increase, decrease, and stay relatively constant with respect to DDm. Depending on which regime of the metric behavior is captured by the choice of the ROI and the range of disease severity in the studied subject group, different trends may be observed in the measured data. For example, using the PBBm ROI approximately corresponds to the right sides of the graphs obtained with PUTm ROIs.

The shapes of the simulated graphs for CTR, DIS and HOM (NHOM) with PUTm hint at the possibility that the corresponding measured scatter plots contain unimodal trends obscured by the noise. Additional subjects or lower noise are required for a more conclusive analysis. On the other hand, the unimodal trends suggested by the scatter plots of ENR, ENT and MPR likely represent random noise, since the corresponding simulated plots were constant (relative to the simulated noise in the metric values).

Figure 8.8: The simulated plots of the image metrics versus DDm in PUTm (blue graphs, x axis is DDm), in comparison to the corresponding experimental scatter plots in PUT (x axis is DD). Horizontal lines represent control subjects. Dashed vertical lines mark the range of DDm that corresponds to the clinical DD. HF metrics that had similar graphs to the ones shown are omitted for clarity. Panels: ACRL (similar to SAVG), correlation (CRL), cluster prominence (CLP), cluster shade (CLS), DIS (similar to CTR), ENR (similar to ENT), HOM (similar to NHOM), INF1 (similar to INF2), maximum probability (MPR), sum entropy (SENT), J1, and ROI-mean AR.

Figure 8.9: The simulated plots of the image metrics versus DDm in PBBm (blue graphs, x axis is DDm), in comparison to the corresponding experimental scatter plots in PBB (x axis is DD). Horizontal lines represent control subjects. Dashed vertical lines mark the range of DDm that corresponds to the clinical DD. HF metrics that had similar graphs to the ones shown are omitted for clarity.
A good discrimination between the control and PD subjects was observed under the following conditions applied to the simulated metric behavior with DDm:

• the metric changed nearly monotonically with DDm;
• the difference in the metric values between DDm=0 y and DDm=7.6 y was several-fold;
• the simulated noise in the metric value was relatively low in the range 7.6<DDm<20.6.

For example, the simulated plot of INF2 (and CRL) has a relatively low slope in the range 7.6<DDm<20.6, with low noise relative to the slope. The corresponding measured values of INF2 for the control subjects are relatively more consistent compared to the other metrics (the horizontal lines in the plots are tightly grouped together), and a significant correlation is observed with DD.

To analyze the trends in the metric behavior more rigorously, the distributions of the AR values and the GLCMs corresponding to the synthetic (Fig. 8.6) and acquired (Figs. 8.5 and 8.7) images must be brought into context. In the next several sections, the trends in the simulated graphs are analyzed individually for each metric. The GLCMs computed from the acquired DTBZ AR images are referred to as "measured" GLCMs.

The ordering of the simulated and measured histograms according to DD (DDm) revealed two trends in the acquired and simulated data that are important for the analysis that follows: 1) the high tails (and the maxima) of the PBB distributions diminished with DD, while the mode and the minima remained relatively constant; 2) with higher DD, the PUT distributions resembled the PBB distributions, and both were approximately bell-shaped.

ACRL

The dependence of ACRL on DDm with different ROI types can be understood from the analysis of the simulated GLCMs plotted in Fig. 8.6B and the measured GLCMs plotted in Fig. 8.7. In the PUT ROI, the peak of the gray value distribution shifted towards lower gray levels with higher DD. Driven by the term (ij) in Eq. 5.13, the value of ACRL diminished with DD.

In PBB, the location of the peak of the gray value distribution remained approximately constant in the simulated GLCM with DDm<19.7 y and between the control and PD groups in Fig. 8.7. Thus, the measured correlation between ACRL and DD was insignificant with PBB ROIs.

CRL

The terms σ_x and σ_y in Eq. 5.15 are the gray level variances corresponding to the columns and rows of the GLCM. Since the measured and simulated GLCMs are symmetric, a power analysis of the defining equation shows that the value of CRL is driven by the ratio 1/σ, i.e. the inverse of the second moment along x or y in the GLCM. In PUT, the widths of the gray value distributions are visually similar in the control and PD groups; the correlation between CRL and DD was insignificant.

On the other hand, in PBB ROIs, the widths of the gray value distributions increase with disease in the simulated and measured GLCMs; thus, the value of CRL diminished with DD.
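For reference, the sketch below computes ACRL and CRL, together with the cluster shade (CLS) and cluster prominence (CLP) metrics analyzed in the following subsections, from a normalized symmetric GLCM. The formulas follow the standard Haralick-style definitions that Eqs. 5.13–5.17 refer to; since those equations are not reproduced here, the exact normalization conventions are assumptions.

```python
import numpy as np

def glcm_metrics(p):
    """Selected HF metrics from a normalized, symmetric GLCM p.

    Gray levels are indexed 1..Ng as in the text; p must sum to 1.
    """
    ng = p.shape[0]
    i, j = np.meshgrid(np.arange(1, ng + 1), np.arange(1, ng + 1),
                       indexing="ij")
    px = p.sum(axis=1)                       # marginal gray level distribution
    levels = np.arange(1, ng + 1)
    mu = (levels * px).sum()                 # marginal mean
    sigma2 = ((levels - mu) ** 2 * px).sum() # marginal variance

    acrl = (i * j * p).sum()                                  # autocorrelation
    crl = ((i - mu) * (j - mu) * p).sum() / sigma2            # correlation
    cls_ = (((i + j - 2 * mu) ** 3) * p).sum()                # cluster shade
    clp = (((i + j - 2 * mu) ** 4) * p).sum()                 # cluster prominence
    return {"ACRL": acrl, "CRL": crl, "CLS": cls_, "CLP": clp}
```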
CLS

CLS quantifies the third central moment of the gray value distribution in the GLCM (Eq. 5.17). The third moment is a measure of the skewness of the distribution. In PUT, the plots of the simulated GLCMs demonstrate an increase of positive skewness with DDm from 0 to 19.7 y, and a minor reduction of skewness with higher DDm. This is consistent with the shape of the simulated plot of CLS against DDm. The plots of the measured GLCMs had negative skewness for control subjects, and positive skewness for PD subjects; thus, a significant positive correlation was observed between the measured CLS and DD.

In PBB, the distribution of gray values in the measured GLCMs was positively skewed for both control and PD subjects; however, the skewness was lower for PD subjects. The simulated GLCMs also showed a gradual reduction of skewness with higher DDm. The simulated value of CLS approached zero as the GLCMs became visually more uniform. Consistent with the modeled behavior, the measured values of CLS had a significant negative correlation with DD.

CLP

CLP quantifies the fourth central moment of the gray value distribution in the GLCM (Eq. 5.16). The fourth moment (not normalized) measures the "peakedness" of the distribution, or the heaviness of the tails (the normalized fourth moment, called kurtosis, measures these parameters relative to the normal distribution).

In PUT, the peakedness of the measured GLCM was similar between the control and PD subjects. The simulated graph of CLP had a peak around DDm=15 y. However, in the clinically-relevant range of DDm the metric values were constant relative to the noise. Thus, there was no correlation between the measured CLP and DD.

In PBB, the measured GLCM was more sharply peaked for control subjects than for PD subjects. The peakedness of the simulated GLCMs decreased with higher DDm. Consequently, there was a significant negative correlation between the measured CLP and DD, and a perfect separation was achieved between the control and PD subjects.

HOM and NHOM

The metrics HOM and NHOM quantify the gray value distribution in the non-diagonal elements of the GLCM. The ratio 1/(1 + |i − j|) in Equations 5.21 and 5.24 diminishes away from the diagonal. An image filled with values drawn from a random uniform probability distribution will produce a spatially uniform GLCM, regardless of the direction and distance used. On the other hand, a gradient image (or an image filled with values drawn from a normal probability distribution) will produce a GLCM with higher density closer to the diagonal. In this case, the GLCM distance will play a role: a larger distance will result in a greater number of non-diagonal co-occurrences and thus lower values of HOM and NHOM; this was indeed shown experimentally in Fig. 7.9.

In PUT, the simulated graphs of HOM and NHOM increase with DDm in the range between 0 and ~13 y. This matches the expected behavior: the synthetic images within PUTm were uniform at DDm=0, and had maximum average gradient around DDm=13 (Fig. 8.2C). The gradient in the synthetic images diminished with DDm>13 as the values of ARm became closer to the background; the corresponding simulated values of HOM and NHOM reduced.

In other words, the synthetic images were least uniform around DDm=13, and this corresponds to the maxima in the simulated graphs of HOM and NHOM in PUTm ROIs. The scatter plots of the acquired data may also suggest a peak around DD=15; however, the data are too noisy to determine this conclusively.

In PBB, the trends observed in the simulated graphs can be explained using the same reasoning. One notable difference is that the NHOM graph had a lower slope than HOM with DDm<10 y.
This can be explained by the fact that HOM is a coarser metric than NHOM (since the term 1 + |i − j| changes more rapidly than 1 + |i − j|/Ng), and in the PBBm ROI (which is larger than PUTm and contains a larger fraction of background voxels) it is likely to lose sensitivity to small changes in the image gradient. The measured scatter plots of HOM and NHOM suggest a downward trend, with p<0.05.

SAVG

The term p_{x+y}(k) in the expression for SAVG (Eq. 5.26) represents the sum of GLCM elements along the k-th antidiagonal, k = 2, ..., 2Ng. Therefore, the value of SAVG depends on the location of the GLCM peak along the diagonal, similar to ACRL.

In PUT, the shape of the simulated graph of SAVG was almost identical to that of ACRL (up to an offset and scale) over the entire range of DDm. The metric diminished with disease as the center of the gray value distribution shifted from higher to lower gray levels, in the simulated and measured data.

In PBB, the value of SAVG increased with disease, as the gray value distribution in the simulated and measured GLCMs shifted towards higher gray levels. There was a difference between the simulated graphs of SAVG and ACRL with DDm<10 y, likely caused by the different weights that the metrics place on the non-diagonal GLCM elements.

COV

In PUT, the COV measured from the synthetic images increased with DDm in the range between 0 and 15 y, and plateaued with higher DDm. This implies that the mean and standard deviation of ARm changed with DDm at the same rate, which may explain the lack of significant correlation in the measured scatter plots.

In PBB, the simulated COV decreased monotonically. There was a significant negative correlation between the measured COV and DD.

J1 and J2

Since the MI metrics J1 and J2 can be interpreted as measures of spatial image variance and covariance, one expects the simulated graphs of these metrics to have well-defined extrema on the range 0<DDm<30 y. Contrary to this expectation, the simulated graphs for J1 and J2 monotonically increased with DDm in PUTm and PBBm. The measured data confirmed the simulated behavior. The reason for this can be understood from analyzing the MI-defining Equations 5.3 through 5.8. If we set f' = αf(x, y, z), where α is a real non-zero scalar, from Eq. 5.6 it follows that

\eta_{pqr}(f') = \frac{1}{\alpha^{(p+q+r)/3}} \, \eta_{pqr}(f)        (8.5)

and the values of J1 and J2 increase with diminishing α, or with diminishing mean AR in the ROI. The values of the moments J1 and J2 therefore reflect the image variance and covariance, as well as the magnitude (zeroth, first, and second image moments).

Since with texture metrics we are interested in quantifying the spatial distribution of the AR values rather than their magnitude, the global magnitude information can be removed from the images through the normalization

f_{norm} = \frac{f - \min(f)}{\max(f) - \min(f)}        (8.6)

where f is the original AR image and fnorm is the normalized AR image, simulated or acquired.
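A small numerical check of Eqs. 8.5 and 8.6 can be performed as below. The sketch assumes that η_pqr are central moments normalized as μ_pqr / μ_000^{1+(p+q+r)/3}, which reproduces the scaling of Eq. 8.5; the actual definition is given by Eq. 5.6, which is not reproduced here.

```python
import numpy as np

def central_moment(f, p, q, r):
    """Intensity-weighted central moment mu_pqr of a 3D image f."""
    z, y, x = np.indices(f.shape)
    m000 = f.sum()
    zc, yc, xc = (z * f).sum() / m000, (y * f).sum() / m000, (x * f).sum() / m000
    return ((z - zc) ** p * (y - yc) ** q * (x - xc) ** r * f).sum()

def eta(f, p, q, r):
    """Normalized moment; assumed normalization reproducing Eq. 8.5."""
    return (central_moment(f, p, q, r)
            / central_moment(f, 0, 0, 0) ** (1 + (p + q + r) / 3.0))

def minmax_normalize(f):
    """[min,max] normalization of Eq. 8.6."""
    return (f - f.min()) / (f.max() - f.min())

rng = np.random.default_rng(0)
f = rng.random((8, 8, 8)) + 1.0
alpha = 0.5
# Scaling the intensity by alpha should scale eta by alpha**(-(p+q+r)/3):
ratio = eta(alpha * f, 2, 0, 0) / eta(f, 2, 0, 0)
print(ratio, alpha ** (-2.0 / 3.0))   # the two printed values should agree
```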
The simulated and measured graphs of J1 and J2 computed from the normalized images in PUT ROIs are shown in Fig. 8.10A. The simulated graphs demonstrate the expected increase in the values of J1 and J2 with DDm, followed by a decrease. The corresponding measured scatter plots demonstrate a positive correlation between the metric values and DD. As expected, the correlation was weaker with the normalized AR images compared to the non-normalized images.

The simulated and measured graphs of J1 and J2 computed from the normalized images in PBB ROIs are shown in Fig. 8.10B. The extrema in the simulated graphs were more pronounced compared to the PUT ROIs, and were located in the clinically-relevant range of DDm. The measured scatter plots did not contain linear trends, and the measured correlation values were insignificant. On the other hand, the scatter plots suggested a unimodal trend, indicated by the dashed line in the figure. A greater number of subjects is required to verify this observation more rigorously.

Figure 8.10: Simulated graphs of J1 and J2 that were computed from the ARm images normalized using Eq. 8.6, and the corresponding measured scatter plots. A. PUT and PUTm ROIs. B. PBB and PBBm ROIs. The unimodal trend suggested by the data is indicated by the dashed line.

8.6 Discussion

8.6.1 Utility of the Proposed Model

In this chapter, an analytic spatio-temporal model was employed that described the PD-associated change in the dopaminergic function, as revealed by the specific binding of DTBZ on the less affected side of the putamen. The model was built based on the data obtained from the analysis of DTBZ AR image profiles. This places bounds on the generality of the conclusions, with regard to the usefulness of the investigated metrics and their behavior, that can be made from the obtained results. For example, the HF metrics ENR and ENT were found to be poorly correlated with DD in this work; however, they may become more practical with different diseases and tracers. To evaluate the usefulness of texture metrics for the analysis of other tracers, anatomical regions and diseases, the model-based analysis must be repeated in the appropriate context and under appropriate assumptions. The advantage of modeling the dopaminergic function/DTBZ binding in PD is that this tracer binds predominantly in the striatum, and thus the distribution of activity in the images is rather localized. Reduction of the tracer binding due to the disease occurs predominantly in the putamen, and follows a relatively well-known rostro-caudal gradient. Therefore, the generated synthetic images were expected to capture the most prominent aspects of that pattern. With other tracers, especially those that have a distributed binding pattern that may or may not be affected by a disease, the construction of realistic image models of the corresponding neuronal function may be difficult or impossible.

The synthetic images of ARm were generated by adding Poisson noise and applying Gaussian smoothing to the images with voxel values determined by the equation of the fitted model (Eq. 8.2). The voxel size, resolution and noise in the synthetic images approximately matched those in the acquired images. However, this method ignores several factors that determine the appearance and quality of PET images:

• the image reconstruction algorithm used,
• the number of acquired projections and angles,
• non-uniform image resolution and Gibbs-like artifacts that may be present due to resolution modeling,
• possible contributions of randoms and scatter,
• degradation of contrast due to motion,

and other factors.
A more realistic approach to generating the simulated AR images would be to construct a dynamic digital phantom with a spatio-temporal activity distribution governed by Eq. 8.2, and to simulate the coincidence data acquisition and reconstruction on a scanner of given geometry (the activity distribution corresponding to the chosen DDm may be set to be constant during the simulated acquisition). Such an approach would allow one to differentially investigate the influence of various aspects of PET imaging on the behavior of the investigated image metrics. One problem that may arise with this type of analysis is the large number of parameters that determine the image quality. The investigation must be performed with a specific research aim and in a hypothesis-driven manner to reduce the size of the explored parameter space.

Nevertheless, the more direct method used in this work to generate the synthetic ARm images was able to provide a ground reference for the expected metric behavior. Analysis of the metric behavior based on the measured data alone would be difficult due to the relatively high variability and the limited range of clinical DD in the PD subject group. By comparing the differences between the simulated and measured metric behavior with respect to DD, several relevant insights into the measured data were gained. For example, it was expected that many of the investigated HF and MI metrics would have a "U"-shaped dependence on DD, since the gradient of AR values was known to increase in early disease and decrease in advanced disease. However, the measured scatter only contained an upward or a downward trend. The simulated graphs of the metric values did indeed have the expected "U"-shape, which suggests two possible reasons for the discrepancy: the insufficiently wide range of clinical DD in the PD subject group, or the relatively high variability in the clinical and image-derived data that obscures the observed trend. The simulated metric graphs also provided a lower bound on the noise/variability that can be expected in the measured metric values. Using the modeled graphs as a reference, it may be easier to distinguish between random patterns and systematic behavior in the measured scatter plots. For example, the measured scatter plots of CTR, DIS, and HOM in PUT ROIs suggest alternating upward and downward trends, the shape that is predicted by the model. On the other hand, the patterns observed in the scatter plots of ENR, ENT and MPR are likely due to noise/variability.

Although the model-based metric analysis focuses on a particular disease, tracer and brain region, the better understanding of the texture metric behavior gained in this work with respect to the ROI definition and image characteristics may be useful to guide the future choice of image metrics for the analysis of localized tracer distributions. The sections below focus on the discussion and analysis of results specific to the undertaken PD imaging study.

8.6.2 Information Captured by Texture Metrics

The data presented in Chapter 7, combined with the analysis performed in this chapter, suggest two conclusions with respect to the HF-based image analysis: a) the spatial pattern of the dopaminergic function loss was not reflected in the measured values of the HF metrics, and b) the observed changes in the HF metric values between different disease severities (i.e. DD) and ROI types stemmed from the corresponding changes in the AR histograms.
These two conclusions are most strongly supported by the following two observations:

• The shapes of the simulated HF metric graphs against DDm were explained in Section 8.5 based only on the analysis of image histograms and the corresponding GLCMs, without the need to invoke the simulated spatial gradient in the synthetic images.

• Whenever the HF values are governed by the spatial arrangement of gray levels in the image, those values are expected to be relatively independent of the size (or the shape) of the used ROIs, as long as the pattern of interest is inside the ROI. At the least, the correlation between the metric values and disease is expected to be preserved when using larger ROIs that contain no additional information. The results demonstrate the opposite: the size of the ROI had a profound effect on the HF metric values and their correlation with DD. The only difference in the image content between the PUTm and PBBm ROIs was additional background voxels, i.e. no new information was present in the larger ROIs.

Based on the above arguments, it is reasonable to conclude that the measured values of the HF metrics were indeed predominantly driven by the PD-induced changes in the AR histograms (i.e. not by spatial patterns, but by marginalized voxel value distributions). On the one hand, this calls into question the propriety of using the HF metrics for the analysis of PET images with a localized tracer binding pattern, when simpler metrics that directly quantify the shape of (normalized) image histograms are available. On the other hand, this demonstrates that in previous studies the HF metric values could have been determined solely by the image histograms, rather than by statistically robust spatial patterns. Other methods and metrics, for example PCA- or ICA-based methods, may be more appropriate for the quantification and visualization of spatial patterns when tracer binding is highly localized. The HF metrics may be more suitable for the analysis of PET images with distributed functional/tracer binding patterns.

8.6.3 Importance of the ROI Definition

The dependence of most HF metrics on DD was different with different ROIs, sometimes changing from a positive correlation with p<0.01 to a significant negative correlation. This underscores the importance of consistent ROI definition and placement in imaging studies. The performed analysis demonstrates that the main mechanism by which the ROI definition affected the values of the HF metrics was the inclusion of additional background voxels. The size of the used ROIs played a critical role, while the shape of the ROIs appeared to be irrelevant for the HF-based analysis (ROI shape may have a greater impact on the behavior of the MI metrics).

The image model demonstrated that the HF metrics measured using the larger PBBm ROIs had reduced sensitivity to subtle differences in the voxel value distributions. This can be observed by comparing the simulated graphs in Figs. 8.8 and 8.9: in PUTm, many HF metrics had an extremum in the modeled range of DDm. On the other hand, in PBBm, most HF metrics either monotonically increased or decreased. This difference can be understood by considering the example of measuring the kurtosis of the AR distribution.
While the PUT ROIs, which are more region-specific, may capture the disease-related change of kurtosis, in the PBB ROIs the kurtosis may be largely defined by the inclusion of a large fraction of background, and the subtle disease-related changes of kurtosis would be lost in the noise.

The imperfect registration between the MRI and PET images, as well as the differences in resolution between the modalities, may contribute to the variability of the image metrics through the inclusion of a random fraction of background voxels in the ROI (in addition to potentially missing the voxels with high tracer concentration, as was demonstrated in Chapter 6). The relative fraction of the included background voxels is expected to vary more with the PUT ROIs than with the PBB ROIs, since in the latter the baseline background fraction is considerably larger. This expectation was confirmed experimentally: Fig. 7.5 demonstrates that the variability of the HF metrics with respect to the ROI orientation was indeed greater with the PUT ROIs.

8.6.4 Data Variability and Noise

There are several factors that may contribute to the variability and noise in the measured data:

• variability of the image metric values due to the noise in the acquired images;
• uncertainty in the measurement of DD and other clinical metrics;
• differences in the courses of disease progression between different subjects;
• influence of confounding clinical factors such as subject age or age at disease onset;
• inaccuracy of PET/MRI image registration and MRI image segmentation.

The synthetic images were generated to have approximately the same level of noise that was present in the acquired DTBZ images. However, the measured scatter plots revealed a much greater variability in the acquired data than was predicted by the simulated metric graphs. This suggests that factors other than the image noise contributed substantially to the data variability. In particular, a large degree of uncertainty is expected to exist in the measurement of clinical DD. Therefore, smoothing or denoising the images is expected to have a relatively minor effect on the correlation between the clinical and image metrics. It may be of interest to explore using denoising to reduce some of the variability in the scatter plots of the HF metrics that suggested unimodal trends with DD, such as CTR, DIS and HOM measured with PUT ROIs. Nevertheless, the results suggest that a greater reduction of variability could be achieved by using more robust clinical measures of disease progression.

Chapter 9

Conclusions and Future Work

In this thesis, two aspects of quantitative PET imaging were considered: motion correction and image analysis.

In Chapters 3 and 4, an iterative image reconstruction method was developed based on using unorganized point clouds, which can incorporate efficient correction for non-cyclic rigid and deformable motion. The method takes advantage of the high temporal resolution offered by list-mode reconstruction. To the author's best knowledge, no previous methods have been reported that have similar characteristics. The quantitative accuracy and stability of the proposed method were validated by reconstructing noise-free and noisy projection data from digital and physical phantoms.

In addition to the ability to handle complex motion types, the proposed point-cloud-based approach to image reconstruction provides the following important advantages over conventional reconstruction methods:
1. the approach can handle multiple objects in the FOV moving independently, as well as object splitting and merging;

2. with incomplete motion data, the point trajectory inside (and outside) the FOV can be interpolated/extrapolated based on probabilistic or physical motion/deformation models [201]; point clouds provide a framework for combining the motion tracking data from multiple sensors;

3. variable point density can be used in different object regions to reduce the number of unknowns in the image reconstruction problem. For example, in pre-clinical brain imaging, a lower sampling rate can be used in the body compared to the head;

4. compared to rigid event-by-event motion correction in the projection space, events need not be re-mapped to different detectors, eliminating the situation when motion-corrected events correspond to non-existent detector pairs.

The developed method is expected to be particularly useful in the imaging of unrestrained or partially restrained awake animals, alleviating the need for anesthesia that alters the brain function. To this end, a digital phantom of an unrestrained mouse moving freely inside a virtual chamber was developed and used to validate the reconstruction method. The geometry of the phantom was derived from the Digimouse atlas, and motion was generated manually by using an animation rig that incorporated skeletal and harmonic coordinate-based deformation modifiers. The simulated motion was compared to the motion of a live mouse that was imaged using a depth-sensing camera. While the modeling of some poses exhibited by the live mouse proved to be a challenge, a good match was obtained between the simulated and observed motion in terms of the general kinematic parameters. Thus, the phantom incorporated realistic motion parameters derived from a live animal, and no other such phantom was available at the time of writing this thesis. The phantom and the reconstruction method were validated by reconstructing the simulated emission data affected by continuous, non-periodic deformable motion.

Although the phantom and the image reconstruction method can be used on their own, combined with Monte-Carlo emission simulation they represent a previously unavailable unified framework that can be employed to simulate various physical aspects of awake rodent imaging. Using the anatomical labels, the activity distribution in the phantom can be modified to model different tracer distributions. Tracer kinetics can be modeled by imposing time dependency on the activity values in the phantom.

It is expected that this framework will be particularly well suited to address various aspects of the development of awake rodent imaging methodologies:

1. the influence of the motion tracking accuracy on the reconstructed images can be estimated. For this investigation, the ground truth (simulated) motion data can be corrupted according to the physics and the expected quality of motion tracking during the imaging experiment, and the corresponding change in the reconstructed images can be observed. Alternatively, the acquisition of motion-tracking data can be simulated using the animated mesh model of the animal;

2. similarly, the effect of the motion tracking frame rate on the reconstructed images can be estimated. For example, in simultaneous PET-MRI scanners, a fast MR acquisition sequence could be used to obtain a snapshot of the animal's pose and position in the FOV.
The developed framework can be used to estimate whether the MRI-based motion correction can yield quantitatively accurate PET images. The minimum required repetition time can be estimated to make the MRI-based motion correction practical for awake rodent imaging;

3. the benefit of using motion interpolation between the acquired motion frames can be assessed. For example, if the position of the animal in the FOV is measured (known) at times t1 and t2, using the point-cloud reconstruction framework it is possible to interpolate the position of the animal at time t3 = (t1 + t2)/2. Using the interpolated motion data in the reconstruction process may produce images with higher accuracy compared to the non-interpolated motion data (a minimal interpolation sketch is given after this list);

4. the framework can be used to evaluate the contributions of the scattered and random coincidences as functions of the rodent position inside the chamber. These contributions are expected to change with different animal positions inside the FOV, as well as with different chamber geometries, animal body sizes, and amounts of injected activity. The impact of these variables on the reconstructed images can be assessed in order to optimize the imaging methodology in the simulations, prior to developing the physical hardware;

5. finally, the framework, and the phantom in particular, can be used to validate and compare various motion correction and image reconstruction methods. The motion data can be utilized either in the point cloud trajectory form, or it can be represented in the more traditional form of image transformation matrices.
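As an illustration of item 3 above, the sketch below linearly interpolates point-cloud coordinates at the midpoint t3 = (t1 + t2)/2 between two measured motion frames. Linear interpolation with known point correspondence is an assumption made for simplicity; the framework also admits probabilistic or physically motivated deformation models.

```python
import numpy as np

def interpolate_points(points_t1, points_t2, t1, t2, t3):
    """Linearly interpolate point-cloud coordinates at time t3, t1 <= t3 <= t2.

    points_t1, points_t2: (N, 3) arrays of the same N points (known
    correspondence) at times t1 and t2.
    """
    w = (t3 - t1) / (t2 - t1)
    return (1.0 - w) * points_t1 + w * points_t2

# Midpoint pose, t3 = (t1 + t2)/2, for a toy 3-point cloud:
p1 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
p2 = p1 + np.array([0.5, 0.2, 0.0])   # rigid shift between the two frames
p_mid = interpolate_points(p1, p2, t1=0.0, t2=1.0, t3=0.5)
```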
The exploration of these applications was left beyond the investigative scope of this thesis, since each of them requires a separate investigation.

In Chapters 6-8, several image metrics that quantified the value, shape, and texture of regions with high tracer uptake were investigated in terms of their ability to capture disease-related information from high-resolution PET images. The metrics chosen for the analysis have not been previously explored in brain PET imaging, particularly for tracers that have a localized binding pattern. The analysis was performed using DTBZ and RAC images of subjects suffering from PD, with tracer binding sites primarily located in the striatum. The analysis methodology was based on the use of different methods of ROI definition, including mixed PET-MRI ROIs that were obtained using a controlled region fusion method. A detailed correlation analysis was performed between the metric values and the clinical disease severity and duration, and statistically significant correlations were found with multiple metrics. It was demonstrated that a) combining image metrics that convey different types of information may improve the correlation with clinical measures of the disease, and b) the results of metric analysis may be highly sensitive to variations in the ROI definition. In the texture-based analysis, it was shown that the HF metrics that were deemed promising in previous studies change non-linearly with PD progression, and that the metric behavior with disease progression was strongly affected by the used ROI type. A model of PD-related changes in the spatio-temporal binding of DTBZ was developed and used to analyze the observed relationship between the metric values and the clinical disease severity.

The most important finding of this work is that quantifying the activity distribution pattern using descriptors of shape and texture can be a useful approach in the analysis of some tracers that explore neurodegenerative diseases. Such descriptors present the advantage of not requiring dynamic scanning with a known plasma or tissue input function. It may be difficult to directly relate such descriptors to physiological parameters that characterize the underlying neurochemistry (such as the binding potential or kinetic rates). Instead, metrics of shape and texture quantify the spatial distribution of the tissue function. Thus, since various neurological functions are known to be affected in distinct spatial patterns, quantifying the spatial distribution of the tracer uptake in addition to the mean BPND is expected to convey interpretable information on the pathways and mechanisms of disease progression.

The results of this study provide a guide for the selection of image analysis methodologies in future studies of neurodegeneration that aim to understand the mechanisms of disease progression and evaluate possible intervention/prevention strategies. For example, the performed analysis suggests that the strongest correlation with disease progression (and thus the strongest predictive strength) should be achieved with those metrics that a) change linearly with respect to the image manifestations of the disease, and b) are robust with respect to image noise. The results also suggest that for better disease prediction and discrimination, focus should be placed on methods that incorporate multiple metrics. The image metrics that were investigated should be used in the context of specific ROI definition criteria, possibly tailored to the specific function or structure under consideration. An examination of a metric's behavior under different ROI definitions/perturbations should become a routine part of metric characterization, particularly in clinical practice.

The study also underscored the importance of using robust methodologies to evaluate the predictive performance of image metrics and their out-of-sample generalization. The traditional statistical approach that is based on computing correlation coefficients and significance levels has two limitations. Firstly, it was shown that the dependence between the image-derived and clinical metrics may be non-linear, with alternating positive and negative trends. Thus, measures of correlation that assume either a linear or a monotonic relationship between the data may not be adequate for the analysis. On the other hand, fitting the data using non-linear functions may be difficult due to noise and to not knowing the optimal functional form to be used for the fit. Secondly, testing multiple metrics using the same image data is prone to multiple comparison bias, which results in under-estimated p-values. Although there are methods that take into account multiple comparisons, they are generally not accepted as standard in the field. It is advised that the robustness of the relationship between the clinical and image metrics be evaluated using cross-validation. Although cross-validation typically requires a relatively large number of subjects, it also allows for greater flexibility in choosing the appropriate model for the relationship between clinical and image data.
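A minimal sketch of such a cross-validated evaluation is given below, assuming scikit-learn is available. The linear model combining several image metrics to predict DD, and the toy data, are illustrative assumptions rather than the procedure used in this study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# X: one row per subject, columns are image metrics (e.g., ACRL, CLS, J1);
# y: clinical disease duration (DD). Toy data stand in for the real study.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.0, -0.5, 0.3]) + rng.normal(scale=0.5, size=30)

# Out-of-sample R^2 from 5-fold cross-validation: the combined-metric model
# is judged by held-out prediction rather than in-sample p-values.
scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
print(scores.mean(), scores.std())
```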
A promising direction of future research is the investigation of machine learning-inspired metrics that can be trained to have high sensitivity to a specific disease-induced pattern or structure. Trained metrics may be designed to incorporate automated selection of the voxels that are determined to be most relevant for the analysis. Then, the ROI definition method essentially becomes incorporated into the metric computation algorithm. The encoding of the optimal region selection during the training procedure would eliminate the need to search for ROIs that could improve the correlation values. Trained metrics that are tailored to characterize a particular tracer binding pattern are expected to be highly sensitive to small alterations in disease manifestations between different subject populations. Therefore, in neurodegenerative diseases such metrics may prove to be particularly useful in the investigation of early intervention therapies.

Compared to other medical imaging fields, a frequent problem in nuclear emission imaging that limits the applicability of state-of-the-art machine learning methods is the relatively small sample size. A possible solution to this limitation is to use simulated images of different brain regions at different disease stages, similar to the simulated DTBZ images of the putamen that were used in Chapter 8. The process of metric training can be "bootstrapped" to include both simulated and acquired images. For example, a large convolutional neural net consisting of many layers can be trained on thousands of simulated tracer binding images that model Poisson noise, variability in the shape and size of the analyzed structure, as well as other sources of variability that can be estimated from real data. After pre-training the net on the simulated images, the last layer, which represents high-level image features, can be re-trained on the acquired image data. The last layer may also include inputs for additional relevant variables, such as the subject age, genetic profile, and possible risk factors. This approach is typically referred to as "transfer learning" in the literature. Since deep neural nets have shown excellent performance in image classification tasks, it is of interest to investigate the predictive strength of this method in application to imaging of PD subjects.
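The pre-train/re-train idea can be sketched as below using PyTorch; the architecture, layer sizes, and the two extra clinical inputs are illustrative assumptions, not a validated design.

```python
import torch
import torch.nn as nn

# A small 3D CNN: convolutional feature extractor plus a final linear layer.
class PatternNet(nn.Module):
    def __init__(self, n_extra=0):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(4), nn.Flatten())    # -> 8*4*4*4 = 512
        # Final layer also accepts extra clinical variables (age, risk factors).
        self.head = nn.Linear(512 + n_extra, 1)

    def forward(self, img, extra):
        f = self.features(img)
        return self.head(torch.cat([f, extra], dim=1))

net = PatternNet(n_extra=2)
# Stage 1: pre-train all weights on simulated tracer binding images (not shown).
# Stage 2: freeze the feature extractor and re-train only the final layer
# on the (small) set of acquired images.
for p in net.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(net.head.parameters(), lr=1e-3)
```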
The spatio-temporal model of putaminal DTBZ binding developed in this work can be improved by adding other a priori known information. For example, both sides of the brain can be incorporated into the model, and combined cross-sectional and longitudinal data may be used to improve the accuracy of the model's fitting coefficients. The development of similar models for other tracers and structures should be considered.

In conclusion, it must be pointed out that the development of better data correction and image analysis techniques are closely linked. As PET technology continues to advance, images with progressively higher quality are expected to become available. Novel PET tracers and simultaneous multi-modality imaging techniques will provide images that convey new types of information. Refined methods of multi-dimensional image analysis can be developed that take advantage of the high image quality. However, in this case the correction of images for motion and other quantification-degrading effects becomes very important. A small contribution to quantitative correction techniques and image analysis methodologies was made in this work. It is expected that further development of quantitative PET imaging techniques in the coming years will contribute significantly to more accurate diagnosis and understanding of neurological and other debilitating disorders.

Bibliography

[1] C. S. Levin and E. J. Hoffman, "Calculation of positron range and its effect on the fundamental limit of positron emission tomography system spatial resolution," Physics in Medicine and Biology, vol. 44, no. 3, pp. 781–799, 1999.

[2] L. Sokoloff, M. Reivich, C. Kennedy, M. H. Des Rosiers, C. S. Patlak, K. D. Pettigrew, O. Sakurada, and M. Shinohara, "The [14C]deoxyglucose method for the measurement of local cerebral glucose utilization: theory, procedure, and normal values in the conscious and anesthetized albino rat," Journal of Neurochemistry, vol. 28, pp. 897–916, May 1977.

[3] J. P. B. O'Connor, A. Jackson, M.-C. Asselin, D. L. Buckley, G. J. M. Parker, and G. C. Jayson, "Quantitative imaging biomarkers in the clinical development of targeted therapeutics: current and future perspectives," The Lancet Oncology, vol. 9, pp. 766–76, Aug. 2008.

[4] S. S. Kelkar and T. M. Reineke, "Theranostics: combining imaging and therapy," Bioconjugate Chemistry, vol. 22, pp. 1879–903, Oct. 2011.

[5] A. D. Nunn, "The cost of bringing a radiopharmaceutical to the patient's bedside," Journal of Nuclear Medicine: Official Publication, Society of Nuclear Medicine, vol. 48, p. 169, Feb. 2007.

[6] M. R. Palmer, X. Zhu, and J. A. Parker, "Modeling and simulation of positron range effects for high resolution PET imaging," IEEE Transactions on Nuclear Science, vol. 52, pp. 1391–1395, Oct. 2005.

[7] H. E. Johns and J. R. Cunningham, Physics of Radiology. Charles C Thomas, 4th edition, 1983.

[8] K. Mercurio, P. Zerkel, R. Laforest, L. G. Sobotka, and R. J. Charity, "The three-photon yield from e+ annihilation in various fluids," Physics in Medicine and Biology, vol. 51, pp. N323–9, Sep. 2006.

[9] S. Berko and F. L. Hereford, "Experimental studies of positron interactions in solids and liquids," Reviews of Modern Physics, vol. 28, pp. 299–307, Jul. 1956.

[10] R. Nutt and J. S. Karp, "Is LSO the future of PET?," European Journal of Nuclear Medicine, vol. 29, pp. 1523–1528, Nov. 2002.

[11] C. W. E. Van Eijk, "Inorganic scintillators in medical imaging detectors," in Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 509, pp. 17–25, Apr. 2003.

[12] M. E. Casey and R. Nutt, "A multicrystal two dimensional BGO detector system for positron emission tomography," IEEE Transactions on Nuclear Science, vol. 33, no. 1, pp. 460–463, 1986.

[13] C. Catana, Y. Wu, M. S. Judenhofer, J. Qi, B. J. Pichler, and S. R. Cherry, "Simultaneous acquisition of multislice PET and MR images: initial results with a MR-compatible PET scanner," Journal of Nuclear Medicine, vol. 47, no. 12, pp. 1968–1976, 2006.

[14] V. C. Spanoudaki and C. S. Levin, "Photo-detectors for time of flight positron emission tomography (ToF-PET)," 2010.

[15] B. H. Peng and C. S. Levin, "Recent development in PET instrumentation," Current Pharmaceutical Biotechnology, vol. 11, pp. 555–71, Sep. 2010.

[16] C. L. Kim, G. C. Wang, and S. Dolinsky, "Multi-pixel photon counters for TOF PET detector and its challenges," IEEE Transactions on Nuclear Science, vol. 56, no. 5, pp. 2580–2585, 2009.

[17] S. R. Cherry, M. Dahlbom, and E. J. Hoffman, "3D PET using a conventional multislice tomograph without septa," Journal of Computer Assisted Tomography, vol. 15, no. 4, pp. 655–668, 1991.

[18] F. H. Fahey, "Data acquisition in PET imaging," Journal of Nuclear Medicine Technology, vol. 30, pp. 39–49, Jun. 2002.

[19] S. E. Derenzo, "Mathematical removal of positron range blurring in high resolution tomography," IEEE Transactions on Nuclear Science, vol. 33, no. 1, pp. 565–569, 1986.
[20] P. Kinahan and J. Rogers, "Analytic 3D image reconstruction using all detected events," IEEE Transactions on Nuclear Science, vol. 36, no. 1, pp. 964–968, 1989.

[21] M. E. Daube-Witherspoon and G. Muehllehner, "Treatment of axial data in three-dimensional PET," Journal of Nuclear Medicine: Official Publication, Society of Nuclear Medicine, vol. 28, pp. 1717–1724, Nov. 1987.

[22] M. Defrise, P. E. Kinahan, D. W. Townsend, C. Michel, M. Sibomana, and D. F. Newport, "Exact and approximate rebinning algorithms for 3-D PET data," IEEE Transactions on Medical Imaging, vol. 16, pp. 145–58, Apr. 1997.

[23] L. A. Shepp and Y. Vardi, "Maximum likelihood reconstruction for emission tomography," IEEE Transactions on Medical Imaging, vol. 1, pp. 113–22, Jan. 1982.

[24] K. Lange and R. Carson, "EM reconstruction algorithms for emission and transmission tomography," Journal of Computer Assisted Tomography, vol. 8, pp. 306–316, Apr. 1984.

[25] H. Hudson and R. Larkin, "Accelerated image reconstruction using ordered subsets of projection data," IEEE Transactions on Medical Imaging, vol. 13, no