Evaluation of new iterative and rebinning reconstruction algorithms in fully three-dimensional positron… Krzywinski, Martin 1998

EVALUATION OF NEW ITERATIVE AND REBINNING RECONSTRUCTION ALGORITHMS IN FULLY THREE-DIMENSIONAL POSITRON EMISSION TOMOGRAPHY

By Martin Krzywinski

B.Sc. (Physics-Mathematics), University of Ottawa, Ottawa, Ontario, 1996

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF PHYSICS

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA

August 1998

© Martin Krzywinski, 1998

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Physics
The University of British Columbia
2075 Wesbrook Place
Vancouver, Canada V6T 1W5

Date:

Abstract

An evaluation of new reconstruction algorithms for 3D positron emission tomography is presented. Using phantom and patient data, the standard 3D filtered backprojection algorithm is used to assess the merits of faster alternatives to full 3D reconstruction: FORE + 2D FBP, FORE + OSEM and FORE + SAGE. The phantom figures of merit used to evaluate these new algorithms are: resolution, noise, contrast recovery, signal-to-noise ratio, image uniformity and time activity curve variability.
Using patient scans, the images are compared to 3D FBP reconstructions by examining the differences in radial profiles, region of interest activity, image root-mean-square, Logan slopes and computational demands. Both phantom and patient data analysis indicate that FORE + 2D FBP performs as well as 3D FBP in terms of all figures of merit examined, with the exception that image uniformity and time activity curve variability are better than for 3D FBP. One clear benefit of using FORE + 2D FBP is that reconstruction is 10 times faster than with 3D FBP. Based on the results obtained, 3D FBP can be replaced by FORE + 2D FBP with no significant impact on subsequent analysis. Distribution volume ratios obtained with FORE + 2D FBP differed from 3D FBP results by 0.2 ± 0.4% for the cortical input function and −1.5 ± 0.4% for the cerebellar input function. Both OSEM and SAGE performed well, with increased resolution and distribution volume differences being < 3% for either input function.
Contents

Abstract
List of Tables
List of Figures

1 Introduction
1.1 Goal of Thesis
1.2 Thesis Outline
1.3 Motivation
1.4 Previous Work
1.5 Positron Emission Tomography
1.5.1 Physics of PET
1.5.2 Acquisition Mode: 2D and 3D PET
1.6 Image Reconstruction Methods
1.6.1 Transform Methods
1.6.2 Algebraic Methods
1.6.3 Statistical Methods
1.6.4 Rebinning Methods

2 Iterative Algorithms
2.1 EM Algorithm
2.2 OSEM Algorithm
2.3 SAGE Algorithm
2.3.1 SAGE1
2.3.2 SAGE3
2.3.3 Regularized SAGE3
2.3.4 Notation

3 Figures of Merit for Algorithm Evaluation
3.1 Evaluation Philosophy
3.1.1 Phantom Studies
3.1.2 Digital Phantoms
3.1.3 Human Subject Brain Scans
3.1.4 Scanning Hardware
3.2 Phantom Evaluation Criteria
3.2.1 Resolution
3.2.2 Noise
3.2.3 Contrast
3.2.4 Signal-to-Noise Ratio (SNR)
3.2.5 Axial Uniformity
3.2.6 Radial Uniformity
3.2.7 In-plane Uniformity
3.2.8 Time Activity Curve Comparison
3.2.9 Calibration Factors
3.3 Human Subject Data Evaluation Criteria
3.3.1 Radial Profile Difference
3.3.2 Region-of-Interest Difference
3.3.3 Image Root-Mean-Square (RMS)
3.3.4 Kinetic Parameter Comparison
3.3.5 Computational Demands

4 Experimental Results
4.1 Choice of Algorithm Parameters
4.1.1 FORE
4.1.2 2D FBP
4.1.3 OSEM
4.1.4 SAGE3
4.2 Phantom Studies
4.2.1 Resolution
4.2.2 Noise
4.2.3 Contrast
4.2.4 Signal-to-Noise Ratio
4.2.5 Axial Uniformity
4.2.6 Radial Uniformity
4.2.7 NEMA Uniformity
4.2.8 Time Activity Curve Comparison
4.2.9 Summary of Phantom Studies
4.3 Human Subject Studies
4.3.1 Radial Profile Difference
4.3.2 Region-of-Interest Difference
4.3.3 Image RMS
4.3.4 Kinetic Parameter Comparison
4.3.5 Computational Demands

5 Discussion and Conclusion
5.1 Algorithm Performance
5.1.1 FORE and 2D FBP
5.1.2 FORE and OSEM
5.1.3 FORE and SAGE
5.2 Algorithm Selection

Bibliography

List of Tables

4.1 Resolution for Various Reconstructions at the Center of the FOV. The parameter f represents the spatial filter and β is the regularization parameter for SAGE.
5.1 Comparison of Phantom FOMs for 3D FBP and FORE + 2D FBP. Speed is given in minutes per frame.
5.2 Comparison of Phantom FOMs for 3D FBP and FORE + OSEM. Δ denotes the percent improvement in going from 3D FBP to FORE + 2D FBP. Negative (positive) values of Δ signify a decrease (increase) in quality of the particular FOM. Speed is given in minutes per frame.
5.3 Comparison of Difference in Phantom FOMs between 3D FBP and FORE + OSEM. Negative (positive) values of Δ signify a decrease (increase) in quality of the particular FOM.
5.4 Comparison of Phantom FOMs for 3D FBP and FORE + SAGE. Speed is given in minutes per frame.
5.5 Comparison of Difference in Phantom FOMs between 3D FBP and FORE + SAGE. Negative (positive) values of Δ signify a decrease (increase) in quality of the particular FOM.
5.6 Comparison of difference in human subject study quantitation FOMs between 3D FBP and FORE + 2D FBP/OSEM/SAGE.
5.7 Comparison of difference in distribution volume ratios between 3D FBP and FORE + 2D FBP/OSEM/SAGE.

List of Figures

1.1 The event is registered as originating from along a line of response. The details of the decay and annihilation are shown in the figure inset.
1.2 A point source gives rise to a sinusoidal sinogram.
1.3 Scatter, randoms and attenuation all lead to a decrease in the number of true events detected.
1.4 For each frame in a dynamic scan, the same ROI is placed on a given image plane. The activity within this region is found and plotted against time.
1.5 Positioning terminology in PET.
1.6 The difference between 2D and 3D acquisition.
The black star represents the actual decay location. The light star is the location as seen by the scanner after the event has been assigned to a plane.
1.7 For sinograms beyond a certain azimuthal angle, θ > Θ (see text), some projections are missing, or incomplete. Here φ is the standard polar angle.
1.8 (1) Only the complete sinograms are used to reconstruct the image. (2) The missing projections are calculated based on the image. (3) All the sinograms are used to reconstruct the final image.
1.9 The sinogram is a set of projections of the activity within the FOV taken at different angles.
2.1 The effect of iteration number (here 3, 25 and 100 iterations are shown) on reconstructed image quality with the EM algorithm. After a certain number of iterations, the algorithm's performance is degraded by the presence of noise in the projections.
2.2 The effect of number of OSEM iterations and subsets on image reconstruction.
2.3 The effect of various values of β on the reconstruction of a uniform cylinder with SAGE.
2.4 The effect of number of SAGE iterations and regularization on final image quality.
3.1 The ROIs used for Logan analysis and ROI difference analysis.
4.1 RMS of a single brain image plane between SAGE and 3D FBP.
4.2 Variation in radial resolution with radial distance in the FOV.
4.3 Variation in radial resolution with axial distance in the FOV.
4.4 Variation of axial resolution with radial distance in the FOV.
4.5 Variation of axial resolution with axial distance in the FOV.
4.6 Image noise for various reconstructions at two count rates. Note the scale difference in the two plots.
4.7 Hot spot contrast for various reconstruction algorithms.
4.8 Axial uniformity for various reconstructions at high count rates.
4.9 Radial uniformity for various algorithms.
4.10 NEMA uniformity for various reconstructions in two count regimes.
4.11 Variability in the TAC for various reconstructions.
4.12 Radial profiles taken vertically through a raclopride brain image. Several reconstructions are shown. The solid line represents 3D FBP and the scatter points are the algorithm indicated above the plot.
4.13 Average radial profile difference for various reconstruction methods.
4.14 Average normalized ROI differences for three ROI groups.
4.15 Image RMS and signed difference for various reconstructions averaged over 10 human subject scans.
4.16 Average difference in Logan slopes (DVR) between 3D FBP Hann 0.4 and various reconstruction methods. The error bars represent the standard deviation of the DVR among the 10 human subjects.
4.17 Average Logan slope dispersion of the difference between 3D FBP Hann 0.4 and various reconstruction methods.
4.18 Reconstruction times for standard 3D FBP and FORE reconstructions. The time is significantly reduced because 2D reconstruction methods can be utilized after rebinning with FORE.

Chapter 1

Introduction

1.1 Goal of Thesis

This thesis will present an evaluation and comparison of image reconstruction methods for 3D positron emission tomography (PET) data. The reconstruction schemes under study are based on rebinning a 3D data set to a 2D data set with Fourier Rebinning (FORE), followed by 2D versions of one of the following algorithms: Ordered-Subsets Expectation Maximization (OSEM), Space-Alternating Generalized Expectation Maximization (SAGE) and 2D Filtered Backprojection (2D FBP).
Based on the performance of these methods, as tested with phantom and human subject studies, it will be possible to decide whether the current 3D reconstruction methodology, 3D Filtered Backprojection (3D FBP), can be replaced. 3D FBP is at a disadvantage because it is computationally very costly. To motivate the replacement of the current reconstruction standard, any new method should yield images of at least equivalent quality, where quality is defined by a series of figures of merit. Factors such as speed, image quality and implementation should all be considered.

1.2 Thesis Outline

To understand the new algorithms and the significance of the tests applied to them, some background in PET is required. A short summary of this imaging modality is presented in this chapter, followed by a brief exposition of the general types of reconstruction algorithms available. Subsequent to the background, the candidate algorithms are presented. A detailed understanding of them is not required to understand the results, and the presentation is therefore kept short; the reader is again referred to detailed references. The evaluation methodology is described next: the phantom and human subject figures of merit used to gauge performance are listed and described, and the use of human subject studies is presented. The results section then presents the performance of each method. The discussion section examines the results and proposes which of the tested methods provides optimal reconstructed images.

1.3 Motivation

The work described in this thesis is motivated by the fact that (a) the current reconstruction standard in PET is very computationally demanding and (b) new and still clinically untested algorithms exist which are both faster and have better noise characteristics. Reconstruction speed is of great importance and a fast algorithm is extremely desirable.
Even with significant computer resources, the current reconstruction speed makes the processing of several studies per day at a typical PET center difficult to accommodate. For example, our center can currently process 48 frames per day, assuming constant computer uptime. It takes 10 minutes per frame to process the data for reconstruction and 20 minutes per frame to reconstruct the data to obtain an image. Scans are typically 16 frames long and it is not unusual to perform 3 such scans in a single day. This taxes the computers maximally, and if a reconstruction needs to be redone, a backlog can occur. Some scans have 25 frames, and if two of these were done per day the computational requirements would exceed available resources. In addition to speed, decreased noise, or better resolution at the same noise level, would be highly valued, particularly for brain scans where the region of interest is anatomically small and contains relatively few counts. The algorithms which will be examined are both faster and offer lower noise images at equivalent resolution compared to the current standard.

In order to replace the standard reconstruction algorithm, 3D filtered backprojection, with confidence, any new method must be fully tested to ensure that the images can be used. Any such analysis of algorithms necessarily requires bringing to bear a full battery of phantom tests to establish the absence of any systematic artifacts, followed by human subject studies to ensure that the algorithms yield consistent results from biologically significant data. From the point of view of image analysis, the switch in reconstruction, or for that matter in any data analysis methods, should be as transparent as possible, with no systematic impact on the data.

1.4 Previous Work

During the period of 1997-1998, new algorithms have been developed and refined for use in PET reconstruction.
Their acceptance into clinical and research centers as viable reconstruction methods has been slow, largely due to the overwhelming use of a single reconstruction standard and a lack of comparisons in realistic scanning situations. Some studies exist [1] [2] [3] [4] [5] in which these algorithms have been used and compared, but they are sparse and none includes both phantom and clinical data. This thesis will provide the information required to answer the question of whether these methods can be immediately used.

1.5 Positron Emission Tomography

1.5.1 Physics of PET

PET, unlike imaging modalities such as MRI and X-rays, collects functional, rather than anatomical, information which can be used to probe the biochemical function of specific cells or to extract information about kinetic parameters of metabolic pathways. Several excellent review articles exist [6] [7] [8] and the reader is encouraged to use them to supplement the introduction presented in this thesis.

The physical basis for PET is fundamentally simple [9] [10] ([11], chapter 6). The human subject is injected with a chemical called a tracer which has been labeled with a positron-emitting radioactive isotope. The tracer is either a close analogue of the endogenous species present in the body, and can be used to study a metabolic pathway, or has a high specificity for a particular binding site whose population is of interest. For example, glucose metabolism in the body is studied with ¹⁸F-deoxyglucose. This compound is similar to glucose except that one of the protons has been replaced with the ¹⁸F isotope and one of the -OH groups is missing. The ¹⁸F allows the detection of the tracer in the body through the use of PET and the missing -OH traps the tracer in a specific metabolic step.
Once the tracer has had time to diffuse to the organ of interest and trapping has reached equilibrium, the human subject is placed in the detector and the tracer activity distribution is monitored with the scanner. Another common radioisotope is ¹¹C, which has a half-life of 20 minutes, compared to 110 minutes for ¹⁸F. A comprehensive list of labels and tracers can be found in chapter 9 of [11].

Figure 1.1: The event is registered as originating from along a line of response. The details of the decay and annihilation are shown in the figure inset.

Figure 1.1 illustrates how the radioactivity, λ(x, y), within the field of view (FOV) is detected. A PET scanner is constructed from stacked rings of detectors. Figure 1.1 shows only one such ring, and for illustration we will consider only those events coming from the plane of the ring. How events spanning more than a single ring are handled depends on the acquisition mode (vide infra). While in the body, the decay of the radiotracer nucleus is described by

p → n + e⁺ + ν_e   (1.1)

where a proton, p, decays into a neutron, n, with the emission of a positron, e⁺ (the anti-matter equivalent of an electron). An electron neutrino, ν_e, is also emitted in the process [12]. The emitted positron subsequently annihilates with a nearby electron, producing two photons. Because the positron is nearly at rest immediately before the annihilation, the total linear momentum of the e⁺e⁻ pair is almost zero and the two photons are emitted nearly at 180° to each other. There is, on average, about a 2.5° offset from this angle, shown as θ in Figure 1.1. In addition, the decay and annihilation are not exactly spatially coincident, because the positron travels about 2 mm, in the case of ¹⁸F, from the decay site before annihilating¹. The two photons travel through the body and, if unhindered, are detected.
The scanner logs an annihilation when two of its detectors register photons within a narrow coincidence time window. The position of the two detectors is the only information we have about the decay, and therefore we only know the line along which the original decay must lie. This is called the line of response (LOR) and is parametrized as shown in Figure 1.1, where r is the distance of the LOR from the origin and φ is the angle that the LOR makes with the y-axis (the orientation of the definition of φ is arbitrary). Because there is a finite number of detectors in the ring, the number of LORs is also finite. The number of events registered for a given LOR is histogrammed into a structure called the sinogram. For example, if we had only a point source in our field of view at (x, y), the only LORs with events in them would be described by the relationship

r = x cos φ + y sin φ   (1.2)

The sinogram would therefore contain binned events along a single sinusoidal path, shown in Figure 1.2.

Figure 1.2: A point source gives rise to a sinusoidal sinogram.

Compton scatter within the FOV is primarily responsible for scattering the photons away from their original LORs. Either the scattered photon is detected and the LOR is misregistered, or multiple scattering, or possibly a single large-angle scatter, reduces the photon energy to below the set detector limit and the event is not registered at all. Typically as much as 30% of the detected events are scatters in a typical brain scan acquired without septa (vide infra). Unregistered events are broadly categorized as being due to attenuation. Figure 1.3 illustrates the principle of scatter. In addition to missing some events, the scanner can register events which did not actually take place.

¹This distance depends on the positron energy. The higher the energy, the further the positron travels from the decay site before annihilation.
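To make eq. (1.2) concrete, the sinogram trace of a point source can be computed directly. The sketch below is an illustration only (the angular sampling and units are made up, not the scanner's actual geometry): it evaluates the LOR offset r = x cos φ + y sin φ over a set of projection angles, tracing out one period of the sinusoid of Figure 1.2.

```python
import math

def sinogram_trace(x, y, n_angles=8):
    """Return (phi, r) pairs for the LORs through a point source at (x, y),
    following eq. (1.2): r = x*cos(phi) + y*sin(phi)."""
    trace = []
    for k in range(n_angles):
        phi = k * math.pi / n_angles          # angles sampled over [0, pi)
        r = x * math.cos(phi) + y * math.sin(phi)
        trace.append((phi, r))
    return trace

# A point source at (3, 4): at phi = 0 the LOR offset is x = 3,
# and at phi = pi/2 it is y = 4.
trace = sinogram_trace(3.0, 4.0)
print(trace[0])  # (0.0, 3.0)
```

Binning each r into a finite set of radial bins, one row per angle, would fill exactly the sinusoidal path of Figure 1.2.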
This can happen when two photons from two separate events in the scanner are detected within the coincidence time window. Such events are termed randoms and are pictured in Figure 1.3.

Figure 1.3: Scatter, randoms and attenuation all lead to a decrease in the number of true events detected.

Processing the sinogram involves correcting for randoms, deadtime, scatter and detector non-uniformities, as well as attenuation. Once the sinogram has been corrected so that it best reflects only the actual decays that took place in the scanner, it is ready for reconstruction, which is the next step in data processing.

Reconstructed data contain information about the tracer distribution in the object of interest. When the kinetics of metabolic processes are of interest, the fundamental object used for this analysis is the time activity curve (TAC), shown in Figure 1.4. In order to construct a TAC one needs to perform a dynamic scan. Dynamic scans are composed of sequential frames, with each frame being a single scan. Frames vary in number and duration depending on the tracer used and the kinetics of the biological process under investigation. For example, the first frames might be short, to sample rapid changes in the TAC, and the last few frames might be longer, as is often the case when the tracer distribution changes slowly at later times. A typical 16-frame study acquired in 3D mode (vide infra) contains about 16N² sinograms, where N is the number of detector rings in the scanner, and the reconstruction time of such a scan is very long, typically longer than the scan duration.

Figure 1.4: For each frame in a dynamic scan, the same ROI is placed on a given image plane. The activity within this region is found and plotted against time.

1.5.2 Acquisition Mode: 2D and 3D PET

The example using Figure 1.1 was simplified in that it considered a single detector ring and decays occurring only in the ring plane. Indeed, one could perform a scan in this way, but many events would not be registered. At the decay site, photons are emitted isotropically and the chance that both will be detected by detectors in the same plane is small. Sacrificing events in this way yields low statistics images which are noisy and carry very little interpretable information for anatomical structures which are small and contain few counts, as is the case in many brain scans. Therefore, scanners are typically built as a stack of detector rings.

Originally, PET data were acquired in so-called septa-in or 2D mode. Between the detector rings were positioned heavy metal (high Z) septa which effectively absorbed any photons along a LOR spanning more than several detector planes. For a scanner with N detector rings, the detector LORs were assigned to one of 2N − 1 unique planes. N of these planes corresponded to direct planes, which were exactly the detector ring planes. There were N − 1 indirect planes, which were taken to lie between the direct planes. In this way the FOV was segmented axially into slices.

Figure 1.5: Positioning terminology in PET.

Events were binned into the direct planes if they were registered by detectors with a ring difference of either Δ = 0 or Δ = 2. Events sequestered into indirect planes were those from detectors with a ring difference of Δ = 1 or Δ = 3. The binned events are collected into sinograms, one for each of the 2N − 1 image planes. Some of this terminology is shown in Figure 1.5. The 2D acquisition method is shown in Figure 1.6. It can be seen that in 2D mode the scanner moves the axial position of any event associated with Δ > 0. The advantage of this is that all the data are segregated by the scanner hardware into 2N − 1 independent slices which can then be reconstructed with any 2D reconstruction method.
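The plane-assignment rule just described can be written down directly. The sketch below is an illustration only: it uses the ring differences quoted in the text (Δ = 0, 2 direct; Δ = 1, 3 indirect; more oblique LORs absorbed by the septa) and the convention, assumed here for illustration, that the plane index is the ring-index sum, which naturally enumerates the 2N − 1 axial slices.

```python
def assign_plane(ring_a, ring_b):
    """Assign an event seen by rings (ring_a, ring_b) to a 2D-mode plane.

    Ring differences 0 and 2 fall on direct planes, 1 and 3 on indirect
    planes; anything more oblique is absorbed by the septa (rejected).
    The index ring_a + ring_b runs over the 2N - 1 axial slices of an
    N-ring scanner (even index: direct plane, odd index: indirect plane).
    """
    delta = abs(ring_a - ring_b)
    if delta in (0, 2):
        return ring_a + ring_b, "direct"
    if delta in (1, 3):
        return ring_a + ring_b, "indirect"
    return None, "rejected"

print(assign_plane(5, 5))   # (10, 'direct'): in-plane event
print(assign_plane(4, 6))   # (10, 'direct'): delta = 2, same direct plane
print(assign_plane(5, 6))   # (11, 'indirect'): between rings 5 and 6
print(assign_plane(2, 9))   # (None, 'rejected'): too oblique for 2D mode
```

Note how the events with Δ = 2 land on the same direct plane as the in-plane events between them: this is exactly the axial repositioning of Δ > 0 events shown in Figure 1.6.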
More importantly, however, the number of scattered events is greatly reduced, because a large fraction of scattered events, being likely to span more than 3 detector rings, will be stopped by the septa. However, there is one main disadvantage to 2D PET: a significant number of events are rejected (or not detected at all), and sensitivity is therefore sacrificed.

Not long after 2D PET came into routine use, the motivation to increase the number of acquired counts by increasing sensitivity urged the development of 3D, or septa-less, PET.

Figure 1.6: The difference between 2D and 3D acquisition. The black star represents the actual decay location. The light star is the location as seen by the scanner after the event has been assigned to a plane.

In terms of scanner hardware, all that was required was to remove the septa and accept events with ring differences Δ ≥ 4. In terms of reconstruction, an entirely new method had to be devised. The relatively slow transition from 2D PET to 3D PET was limited by the reconstruction issues which followed the removal of the septa. The difficulty stemmed from the presence of not 2N − 1 but N² individual image planes. Normally the scanner may be configured not to accept some extremely oblique image planes, such as those with Δ = N − 1.

What compounded the problem of 3D reconstruction most of all is that some image planes had incomplete, also termed missing, projections. This is caused by the fact that the detector does not afford full coverage of the FOV, but rather has an axial acceptance angle of about α = 10°, depending on the machine. This means that there are no sinograms for angles closer than π/2 − α to the normal vector of the detector ring plane. However, even sinograms at smaller angles may have some of their projection bins unfilled due to insufficient coverage of the FOV. Figure 1.7 shows this phenomenon, where sinograms for angles Θ < θ < α have missing projections.

Figure 1.7: For sinograms beyond a certain azimuthal angle, θ > Θ (see text), some projections are missing, or incomplete. Here φ is the standard polar angle.

The reason why missing projections are problematic is that it is not possible to reconstruct data which have a spatially variant point spread function (PSF) using Fourier methods. The extension of 2D FBP to 3D FBP was made difficult by this fact and, until a reasonable way to solve this problem could be found, fully 3D reconstruction would have to wait.

To solve the spatial variance problem one could simply disregard all of the sinograms for which θ > Θ and accept only those without missing projections for reconstruction. Doing this, however, sacrifices sensitivity, since valuable events are dismissed, and nullifies the very benefit of 3D PET. The methodology which was developed [13] centers around estimating the missing data using the complete data available for θ < Θ. One does this by reconstructing a low statistics image from only the complete sinograms and then forward projecting this image to fill in the missing data for sinograms with θ > Θ. Then, once all the sinograms are complete, the image is reconstructed for the second and final time. This is illustrated in Figure 1.8.

Figure 1.8: (1) Only the complete sinograms are used to reconstruct the image. (2) The missing projections are calculated based on the image. (3) All the sinograms are used to reconstruct the final image.

Even without considering the implementation, this is clearly a tedious process in which nearly two complete reconstructions have to be performed for a single image.
The computational effort is made entirely so that sensitivity is not sacrificed, by making use of as many detected events as possible. Because of the increase in sensitivity, 3D PET images have less noise than 2D PET images for the same amount of activity in the FOV. The standard reconstruction method for 3D PET is 3D FBP. A good overall description of 3D PET is given in [14]. Because this algorithm has heavy computational demands, a faster method to process a 3D data set has long been sought. In the next section, a description of various algorithms is presented to contrast different philosophies in reconstruction. Immediately following that, a more detailed examination of the algorithms studied in this thesis is given.

CHAPTER 1. INTRODUCTION

1.6 Image Reconstruction Methods

Image reconstruction is the process of converting the projection data in the sinogram into a viewable image which represents the tracer activity distribution, λ(x, y, z), in the FOV. PET does not collect direct information about λ(x, y, z), but only the values of line integrals of this quantity²,

    p_φ(x_r) = ∫ λ(x_r cos φ − y_r sin φ, x_r sin φ + y_r cos φ) dy_r    (1.3)

so some method must be devised to invert the projection process (generally called backprojection). Ordinarily, recovering the image from a single projection is not possible because an entire dimension has been collapsed during the projection step. This is shown in Figure 1.9. However, because projections from many different angles exist, each projection is correlated with all the others and it is possible to invert PET scan data into an image.

Reconstruction methods can be broadly categorized into transform methods, algebraic methods, and statistical methods. Depending on the implementation, an algorithm can be either deterministic or iterative, the latter in cases where it is not possible to formulate a purely deterministic approach. In addition, data may be manipulated with rebinning methods before undergoing reconstruction. Each of these will now be briefly described, before the focus is put on the algorithms tested in this thesis.

²This is the so-called Radon transform.

Figure 1.9: The sinogram is a set of projections of the activity within the FOV taken at different angles.

1.6.1 Transform Methods

2D Filtered Backprojection

The Central Section Theorem states that the Fourier transform of the projection p_φ(x_r) is equal to the 2D Fourier transform of the image sampled along the ν_{x_r} axis, which lies in the rotated frequency coordinate system. With this theorem it is possible to write an analytical expression for the image in terms of an integral over the projections. For a given sinogram the expression is

    f(x, y) = ∫₀^π p*_φ(x_r) dφ,  with  p*_φ(x_r) = ∫ P_φ(ω_{x_r}) H(ω_{x_r}) e^{iω_{x_r} x_r} dω_{x_r}    (1.4)

where H(ω_{x_r}) is a filter applied to the transformed projections. The projections are first filtered, with the filter chosen to balance noise and resolution, and then backprojected to obtain each image pixel (picture element). 2D FBP is applied to sinograms which contain data from transaxial planes only and is regularly used to reconstruct images from 2D PET scans. Relative to any other 2D reconstruction technique, 2D FBP is the fastest.

3D Filtered Backprojection

The method used in 3D PET reconstruction is a natural extension of 2D FBP and is described by the equation

    f(x, y, z) = ∫ dθ ∫₀^π p*_{θφ}(x_r, y_r) dφ    (1.5)

where p*_{θφ} represents the filtered projection (i.e. the Fourier transformed projections have been multiplied by a filter and the inverse transform has been taken), similar to what was used in eq. (1.4). As mentioned above, 3D FBP first backprojects all the complete sinograms to estimate the image, then calculates the missing projections by forward-projecting this low statistics image, and finally reconstructs the image from all the sinograms in another backprojection step [13]. Consequently, the limit on the integral ∫dθ in eq. (1.5) depends on the step taken. Initially, the image is reconstructed from complete projections using the 2D FBP method. Once the missing projections have been calculated, one uses the entire acceptance range of the scanner, θ ∈ (−α, α).

1.6.2 Algebraic Methods

The PET reconstruction problem can be formulated in at least two ways. The first way to think about the sinogram data is as a series of projections obtained by integrating over projection lines passing through the object in the FOV. Transform methods attempt to solve the problem by inverting the projection step with deterministic integral expressions. Another way to view the relationship between the projection and image data is through a generalized linear relationship. If f and p are the image and projections, stored as vectors, the expression

    p = L f    (1.6)

describes the general dependence of p on f by way of a matrix operator which, in this case, is the forward-projection operator³. Specifically, isolating a particular projection bin yields

    p_i = Σ_{j=1}^{F} L_ij f_j,  i = 1…P

in which L_ij can be interpreted as the contribution made by the jth pixel to the ith projection bin. The matrix L is calculated from geometrical considerations. Each LOR in the scanner can be represented by a projection tube of finite size, and the area of intersection of the tube and a given pixel gives that pixel's contribution to the tube. Considering the geometry of the problem, it is clear that L is (a) very large and (b) very sparse. Furthermore, the problem cannot be solved by inverting L because the system, while overdetermined, is nearly never consistent because of noise in the projections from scatter and random events. Because a deterministic inversion of L cannot be performed, algebraic methods are iterative.
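The linear model p = L f in eq. (1.6) can be illustrated with a toy system matrix; the sizes and the 0/1 intersection weights below are invented for illustration (a real L holds tube-pixel intersection areas and is stored sparsely):

```python
import numpy as np

# Toy illustration of p = L f (eq. 1.6): a 4x4 image (F = 16 pixels) and
# P = 8 projection bins (4 row sums and 4 column sums). The weights here
# are simply 0/1; a real system matrix holds intersection areas.
F_pixels, P_bins = 16, 8
L = np.zeros((P_bins, F_pixels))
for r in range(4):                 # horizontal LORs: sum over each row
    L[r, 4 * r: 4 * r + 4] = 1.0
for c in range(4):                 # vertical LORs: sum over each column
    L[4 + c, c::4] = 1.0

f = np.arange(16, dtype=float)     # arbitrary image, stored as a vector
p = L @ f                          # forward projection

# Backprojection is the adjoint operation: smear each bin back along its tube.
b = L.T @ p
```

Note that L.T backprojects but does not invert the projection: b is a blurred version of f, which is why the filtering or iterative schemes described in this section are needed.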
One such method is ART (algebraic reconstruction technique), which is based on the Kaczmarz method of solving large sets of linear equations [15]. The prescription [16] is

    f_j^(k+1) = f_j^(k) + λ L_ij (p_i − Σ_{n=1}^{F} L_in f_n^(k)) / (Σ_{n=1}^{F} L_in²),  k = 1, 2, …    (1.7)

where λ is a relaxation parameter and the new image at iteration k + 1, f^(k+1), is obtained from the image found as the result of the last iteration, f^(k). This equation represents a series of projections of the current image guess onto hyperplanes defined by the rows of the linear system. A full iteration of ART cycles over all of the hyperplanes, indexed by i in eq. (1.7). As mentioned, because the system is normally inconsistent, a unique answer does not exist. For this reason the relationship p = Lf itself strictly cannot be used and some form of relaxation is required [17]. This is achieved by allowing

    Σ_{j=1}^{F} L_ij f_j − ε_i ≤ p_i ≤ Σ_{j=1}^{F} L_ij f_j + ε_i    (1.8)

Relaxation acknowledges that p contains noisy projections and that L may not be exactly accurate.

³The term forward-projection describes finding the projections of an image. Backprojection is the reverse, describing the process of assigning the value of a projection bin to all pixels lying along that projection path.

1.6.3 Statistical Methods

Statistical methods do not attempt to directly solve eq. (1.6). Instead, using the statistical properties of radioactive decay, they attempt to maximize the probability that the reconstructed image is the one most compatible with the observed projections. The likelihood function L(f|p) can be formulated using the Poisson nature of decay and represents the probability linking the observed projections, p, with the image, f. Through an iterative process, the function l(f|p) = log L(f|p) is maximized over f and the image which maximizes this function is used. It is generally easier to maximize l(f|p) rather than L(f|p) because all
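The hyperplane-projection character of the ART pass in eq. (1.7) can be sketched as follows; the tiny consistent system is invented for illustration and `relaxation` plays the role of λ:

```python
import numpy as np

# Sketch of one ART (Kaczmarz) pass over eq. (1.7). Each step projects the
# current image guess onto the hyperplane defined by one row of L.
def art_pass(L, p, f, relaxation=1.0):
    f = f.copy()
    for i in range(L.shape[0]):
        row = L[i]
        norm = row @ row                 # sum of squared row entries
        if norm == 0.0:
            continue
        residual = p[i] - row @ f        # mismatch for this projection bin
        f += relaxation * residual / norm * row
    return f

# A consistent 2x2 example: the iterates converge to the solution of L f = p.
L = np.array([[1.0, 1.0], [1.0, -1.0]])
p = np.array([3.0, 1.0])                 # generated by f = (2, 1)
f = np.zeros(2)
for _ in range(50):
    f = art_pass(L, p, f)
```

Because the two rows here happen to be orthogonal, the iteration lands on the solution after the very first pass; for general rows the convergence is gradual and, with inconsistent data, λ < 1 damps the oscillation between hyperplanes.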
the products in L(f|p) are transformed into sums in l(f|p) by the log function [18] [19]. Almost all statistical methods in PET reconstruction are based on the classic Expectation-Maximization (EM) technique [20]. This is a general scheme for maximizing functions which can be applied to many problems, including PET. The main difference between the various statistical algorithms is in how they update the image during iteration and in the data space over which the maximization is done. These two differences are themselves related and will be described shortly. Statistical methods tend to yield better resolution images at the same noise level than transform methods such as FBP [21]. Alterations to the original EM algorithm are made mainly to increase its convergence rate and to allow the analytical inclusion of regularization (vide infra) [22] [23] [24]. Statistical methods are generally not linear: the sum of the reconstructions of two sinograms is not necessarily the same as the reconstruction of the sum of the sinograms. For this reason either filtering or regularization is required.

1.6.4 Rebinning Methods

There are compelling reasons for the use of statistical methods for image reconstruction. They are based on a physical model of the system and they generally provide images with less noise than FBP methods. Practically, however, their implementation in 3D PET has always been hindered by the size of the matrix L in eq. (1.6). The reconstruction times for fully 3D statistical methods are exceedingly high, and switching from 3D FBP is simply computationally too costly. The advent of rebinning methods has made statistical methods usable on 3D data. Specifically, an intermediate algorithm is executed that converts a 3D data set into a 2D data set, which can then be reconstructed with any 2D method, such as 2D FBP or statistical methods.
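As a small illustration of why the logarithm is convenient, the Poisson log-likelihood of a few independent bins turns the product in the likelihood into a sum, and it is maximized when the model means match the observed counts (the counts and means below are invented numbers):

```python
import math

# Poisson log-likelihood of independent projection bins:
# l = sum_i [ p_i * log(mu_i) - mu_i - log(p_i!) ]
def log_likelihood(counts, means):
    return sum(p * math.log(mu) - mu - math.lgamma(p + 1)
               for p, mu in zip(counts, means))

counts = [4, 7, 2]
best = log_likelihood(counts, [4.0, 7.0, 2.0])    # means equal to the counts
worse = log_likelihood(counts, [5.0, 6.0, 3.0])   # perturbed means
```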
In essence, one can take advantage of the higher sensitivity of the 3D acquisition mode while still enjoying the short reconstruction times of 2D data sets.

Single Slice Rebinning Algorithm (SSRB)

The simplest of these rebinning methods is the single-slice rebinning algorithm (SSRB) [25]. SSRB rebins any event detected by rings i and j into the plane formed by ring (i + j)/2. It is obvious that this method does not preserve the axial position of events in oblique sinograms which fall off the axial axis of the scanner. SSRB performs well in regions of interest close to the center of the FOV, such as those typically used for striatal⁴ studies, but its performance flags at larger radial distances; overall the performance of SSRB in a clinical setting is found lacking and SSRB is not generally used [26] [27] [28].

Fourier Rebinning Algorithm (FORE)

Recently, a relationship between the Fourier transforms of oblique and transaxial sinograms was developed; it is at the heart of the Fourier rebinning algorithm (FORE) [29]. Ordinarily the sinogram is parametrized as p(r, φ, z, Δ), where (r, φ) are the transaxial parameters shown in Figure 1.1 and z and Δ are the quantities (i + j)/2 (axial position) and i − j (ring difference), respectively, given that the event was registered by rings i and j. FORE is built upon a different parametrization, namely

    p(r, φ, z, δ) = ∫_{−∞}^{∞} λ(r cos φ − t sin φ, r sin φ + t cos φ, z + tδ) dt    (1.9)

where δ = tan θ, with θ being the angle between the sinogram and the transaxial plane, and λ(x, y, z) is the activity distribution.

⁴The striatum is an anatomical structure in the central brain region which contains neurons that use dopamine as a neurotransmitter. These neurons are of interest in neurodegenerative diseases such as Parkinson's.
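The SSRB prescription can be sketched directly; the sinogram sizes below are invented and normalization by the number of contributing ring pairs is ignored:

```python
import numpy as np

# Sketch of single-slice rebinning (SSRB): every oblique sinogram acquired
# by ring pair (i, j) is added to the direct plane at (i + j) / 2.
# `oblique[(i, j)]` holds a 2D (r, phi) sinogram array.
def ssrb(oblique, n_rings, shape):
    # 2*n_rings - 1 planes when the half-integer planes are kept
    rebinned = np.zeros((2 * n_rings - 1,) + shape)
    for (i, j), sino in oblique.items():
        rebinned[i + j] += sino      # index i + j encodes plane (i + j) / 2
    return rebinned

n_rings, shape = 4, (8, 6)
oblique = {(i, j): np.ones(shape) for i in range(n_rings) for j in range(n_rings)}
direct2d = ssrb(oblique, n_rings, shape)
```

Total counts are preserved, but every oblique event lands in the averaged plane regardless of where along the oblique LOR it originated, which is exactly the axial mispositioning described above.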
The development of FORE starts by considering the 2D Fourier transform of the sinogram,

    P(ω, k, z, δ) = ∫_{−R₀}^{R₀} ∫₀^{2π} p(r, φ, z, δ) e^{−ikφ} e^{−iωr} dφ dr    (1.10)

where R₀ is the effective radius of the FOV and k and ω are the Fourier index and radial frequency, respectively. Using the symmetry p(r, φ + π, z, δ) = p(−r, φ, z, −δ) one can write eq. (1.10), with the use of eq. (1.9), in cylindrical coordinates as

    P(ω, k, z, δ) = ∫₀^{2π} ∫₀^{2π} ∫₀^{R₀} λ(ρ cos β, ρ sin β, z − ρδ sin φ) e^{−ikφ} e^{−iωρ cos φ} e^{−ikβ} ρ dρ dβ dφ    (1.11)

The next step is to take the Fourier transform of eq. (1.11) in the z direction,

    P(ω, k, ω_z, δ) = ∫_{−∞}^{∞} P(ω, k, z, δ) e^{−iω_z z} dz    (1.12)

The key is that the integral over φ in eq. (1.12) can now be separated out, which yields the expression

    P(ω, k, ω_z, δ) = 2π(−i)^k ∫₀^{2π} ∫₀^{R₀} F(ρ, β, ω_z) e^{−ikβ} e^{−ik arctan(δω_z/ω)} J_k(ρω √(1 + δ²ω_z²/ω²)) ρ dρ dβ    (1.13)

where J_k is a Bessel function and

    F(ρ, β, ω_z) = ∫_{−L/2}^{L/2} λ(ρ cos β, ρ sin β, z) e^{−iω_z z} dz    (1.14)

with L being the axial length of the FOV. FORE is based on the exact rebinning formula obtained from eq. (1.13). We can set δ = 0 to find the relationship between the Fourier transformed oblique sinograms and their direct counterparts. This is given by

    P(ω, k, ω_z, 0) = e^{−ik arctan(δω_z/ω)} P(ω √(1 + δ²ω_z²/ω²), k, ω_z, δ)    (1.15)

The expression in eq. (1.15) is an analytical relationship, expressed in Fourier space, between 2D and 3D data sets. Its implementation is complicated and time consuming, but it can be approximated by expanding in a Taylor series in the variable α = δω_z/ω. If we take the zeroth order approximation and set α = 0 we find

    P(ω, k, ω_z, δ) = P(ω, k, ω_z, 0) + O(α)    (1.16)

which is exactly the SSRB algorithm. In the first order approximation one takes

    k arctan α = kα + O(α³)    (1.17)

which leads to

    P(ω, k, ω_z, δ) = e^{−ikα} P(ω, k, ω_z, 0) + O(α²)    (1.18)

The inverse transform along z is easy to take since the exponential factor e^{−ikα} represents a shift in the z direction. In this way inverting eq.
(1.18) yields the main FORE result

    P(ω, k, z, δ) = P(ω, k, z − kδ/ω, 0)    (1.19)

Implementing eq. (1.19) is particularly straightforward and fast since no interpolation is required in the ω and k directions and only a single interpolation along z is needed. In addition, unlike the exact rebinning result in eq. (1.15), the Fourier transform along z is not taken in eq. (1.19) and hence we do not need to know the data for all values of z. This obviates the need for the time consuming task of finding the missing projections.

Chapter 2

Iterative Algorithms

In this chapter the algorithms that will be tested are described in more detail. The Expectation-Maximization (EM) algorithm is described first, since it provides a foundation for all of the iterative methods under study. Subsequently, OSEM and SAGE are presented.

2.1 EM Algorithm

The theory of the EM algorithm was presented in the seminal work of Dempster et al. in 1977 [20]. The essence of the method is as follows [18]: denote the measured projections by p_i (i = 1…P). Let the number of decays detected in projection tube i emitted from pixel j (j = 1…F) be given by n_ij. Strictly, n_ij is not an observed quantity but is related to the observed projections by

    p_i = Σ_{j=1}^{F} n_ij    (2.1)

However, this quantity itself is not vital, as it can be related to the actual image through the expectation operator. Let the actual activity in pixel j be given by f_j. From eq. (1.6) it can be seen that

    E[n_ij] = μ_ij = L_ij f_j    (2.2)

where E[·] is the expectation operator and L_ij represents the system matrix in eq. (1.6). Notice that n is the realization of a random vector whose mean is given by the image itself, and it is not this vector, n, but rather the expectation of this vector which is sought.
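The core FORE operation of eq. (1.19) can be sketched as follows; the array sizes and the nearest-neighbour handling of the axial shift are invented for illustration (a real implementation interpolates along z, normalizes by the number of contributing tilts, and treats low frequencies separately):

```python
import numpy as np

# Sketch of eq. (1.19): the Fourier component (omega, k) of an oblique
# sinogram (tilt delta) is assigned to the direct (delta = 0) data at the
# axially shifted position z - k*delta/omega.
def fore_rebin(P_oblique, delta, omegas, n_z):
    n_omega, n_k, _ = P_oblique.shape
    P_direct = np.zeros((n_omega, n_k, n_z), dtype=complex)
    for iw, omega in enumerate(omegas):
        if omega == 0.0:
            continue                      # omega = 0 needs special treatment
        for k in range(n_k):
            shift = k * delta / omega
            for iz in range(n_z):
                z_dst = int(round(iz - shift))
                if 0 <= z_dst < n_z:
                    P_direct[iw, k, z_dst] += P_oblique[iw, k, iz]
    return P_direct

# With delta = 0 no shift occurs and the oblique data pass through unchanged.
P_obl = np.ones((2, 3, 4), dtype=complex)
P_dir = fore_rebin(P_obl, delta=0.0, omegas=np.array([1.0, 2.0]), n_z=4)
```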
Because the n_ij represent Poisson variables, which are independent over both indices, the probability that a particular set of values corresponds to the measured projection data is

    L(f) = Σ_n Π_{j=1}^{F} Π_{i=1}^{P} e^{−μ_ij} μ_ij^{n_ij} / n_ij!    (2.3)

where the external sum is over all possible combinations of n_ij such that eq. (2.1) holds. This sum is required because there is potentially more than one way to redistribute counts among the bins in such a way that the measured projections remain the same. Although eq. (2.3) looks elaborate, particularly the vague definition of the outer sum, its explicit evaluation is not required. The EM method maximizes the function

    l(f|n) = log L(f|n)    (2.4)

and the reader is referred to the original papers [18],[19] for the full development of the algorithm, which gives the image at the kth iteration in terms of the image from the previous iteration by

    f^(k) = f^(k−1) ⊗ [Lᵀ (p ⊘ L f^(k−1))],  k = 1, 2, …    (2.5)

where ⊗ and ⊘ signify component-wise multiplication and division. Eq. (2.5) can also be presented as

    p̂_n = Σ_{j=1}^{F} L_nj f_j^(0),  n = 1…P
    for k = 1, 2, …
        p̂_n = Σ_{j=1}^{F} L_nj f_j^(k−1),  n = 1…P    (2.6)
        f_i^(k) = f_i^(k−1) Σ_{n=1}^{P} L_ni (p_n / p̂_n),  i = 1…F

The EM method depicted in eq. (2.6) has certain desirable constraints already built in, mainly non-negativity of the image and preservation of counts from one image guess to the next. The former is particularly attractive considering that FBP methods, because of the finite frequency bandwidth of the filters, produce negative pixel values in the image. On the other hand, the EM method is disadvantaged because, although it converges to a unique solution in the case of noiseless projections, it fails to do so for real, noisy data. After a certain number of iterations, the EM algorithm begins to fit the image exclusively to the noise in the data and the quality of the image deteriorates, as shown in Figure 2.1.
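The update of eqs. (2.5)-(2.6) can be sketched with a dense toy system matrix; the normalization by the column sums of L used below is the customary form (presentations that assume unit column sums, as eq. (2.5) does, omit it):

```python
import numpy as np

# Sketch of the EM (MLEM) update of eqs. (2.5)-(2.6) on an invented
# noiseless toy problem. Forward-project, compare, backproject the
# ratios, rescale by the sensitivity.
def em_iterate(L, p, f, n_iters):
    sens = L.sum(axis=0)                 # sensitivity: L^T 1
    for _ in range(n_iters):
        p_hat = L @ f                    # forward-project current guess
        ratio = p / p_hat                # compare with measured projections
        f = f * (L.T @ ratio) / sens     # backproject ratios and rescale
    return f

L = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
f_true = np.array([2.0, 1.0, 3.0])
p = L @ f_true                           # noiseless projections
f = em_iterate(L, p, np.ones(3), 200)
```

Two built-in properties mentioned in the text are visible here: the multiplicative update keeps the image non-negative, and after every iteration the sensitivity-weighted image counts exactly match the measured counts.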
For this reason, the number of iterations for this method must be carefully controlled and the process must be stopped before this deterioration occurs.

EM suffers from two other practical disadvantages. First, the convergence rate is relatively slow. Many attempts at increasing EM convergence have been made [30],[31],[32]. EM converges slowly because during the entire execution of eq. (2.5) projections calculated from the old image are used. Even when the last pixel of the new image is calculated, it is with the projections of the old image. This can be rephrased by saying that the maximization data space is the full set {n_ij}, which is exceedingly large [33]. Second, regularization is not possible to include analytically into the EM framework [34]. Regularization is a method to penalize certain types of images in an attempt to prevent the algorithm guesses from deteriorating after a certain number of iterations. Regularization will be discussed in conjunction with the SAGE algorithm.

Figure 2.1: The effect of iteration number (here 3, 25 and 100 iterations are shown) on reconstructed image quality with the EM algorithm. After a certain number of iterations, the algorithm's performance is degraded by the presence of noise in the projections.

2.2 OSEM Algorithm

The slow convergence properties of EM prompted modifications to the method. One such alteration is provided by the ordered-subsets methodology, embodied in the Ordered-Subset Expectation Maximization (OSEM) algorithm. Instead of updating the entire image with the old projections, as is done in EM, the projections are divided into subsets and the entire image is updated once for each subset, with the projections used for the subsequent subset having been updated to include the changes from the previous image update [35]. If the projection subsets are given by S_m (m = 1…M) then a single OSEM step is given by
    f_i^(k,m) = f_i^(k,m−1) [Σ_{n∈S_m} L_ni p_n / (L f^(k,m−1))_n] / [Σ_{n∈S_m} L_ni],  k = 1, 2, …; m = 1…M    (2.7)

The notation f^(k,m) is meant to represent the image at iteration k after m updates during that iteration, with the provision that f^(k,0) is equivalent to f^(k−1,M). In terms of pseudo-code,

    p̂_n = Σ_{j=1}^{F} L_nj f_j^(0,0),  n = 1…P
    for k = 0, 1, …
        for m = 1…M
            f_i^(k,m) = f_i^(k,m−1) [Σ_{n∈S_m} L_ni (p_n / p̂_n)] / [Σ_{n∈S_m} L_ni],  i = 1…F    (2.8)
            p̂_n = Σ_{j=1}^{F} L_nj f_j^(k,m),  n = 1…P

One OSEM iteration, which is defined as the traversal of all projection subsets, is equivalent in speed to a single EM iteration. However, OSEM updates the image once for each subset, so that in a single iteration the image may be updated as often as 16 times or more, although in practice the algorithm provides optimum results for some subset number, usually no larger than 16. OSEM provides an increase in the rate of convergence of about N/p, where N is the number of projections and p is the size of each subset. How the subsets are chosen affects the convergence rate. Commonly, orthogonal subsets are used so that each subsequent subset contains as much new information as possible. The free parameters in OSEM are the number of subsets, S, and the number of iterations, I. Their effect on the reconstructed image is shown in Figure 2.2.

Figure 2.2: The effect of the number of OSEM iterations and subsets on image reconstruction.

Because OSEM, and EM in general, are not linear, filtering or regularization is required. In the case of OSEM, regularization cannot be analytically included, so a posteriori (post-reconstruction) filtering is done with an isotropic Gaussian filter. The width of the filter is another OSEM parameter.
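The subset update of eqs. (2.7)-(2.8) can be sketched as follows; the interleaved row subsets below are a stand-in for the orthogonal angular subsets described in the text, and the toy system is invented:

```python
import numpy as np

# Sketch of OSEM: the EM update is applied once per projection subset,
# so the image is refreshed M times per full pass through the data.
def osem_iterate(L, p, f, n_subsets, n_iters):
    subsets = [np.arange(s, L.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for idx in subsets:
            Ls, ps = L[idx], p[idx]
            ratio = ps / (Ls @ f)              # subset forward projection
            f = f * (Ls.T @ ratio) / Ls.sum(axis=0)
    return f

L = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
f_true = np.array([2.0, 1.0, 3.0])
p = L @ f_true                                 # noiseless projections
f = osem_iterate(L, p, np.ones(3), n_subsets=2, n_iters=100)
```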
2.3 SAGE Algorithm

Space-Alternating Generalized Expectation-Maximization (SAGE) has several variants, such as SAGE1, SAGE2 and SAGE3 [33], named in this manner by the original authors. SAGE1 is described here because it is a relatively straightforward evolution of the EM algorithm, although not necessarily the one that offers the most benefit. SAGE2 offers intermediate benefit, but since SAGE3, the most optimized SAGE method, is the one tested in this thesis, SAGE2 is not described.

2.3.1 SAGE1

Statistical models can be used to calculate not only the true decays but also scatter, randoms and attenuation, since statistical models can be developed for each of these. Prior studies have demonstrated, however, that the more variables EM is asked to calculate, the slower the convergence rate. The general statement is that small and less informative data spaces result in faster convergence [36] [37] [38]. In this context, the data space is the set of parameters over which the maximization step is performed. OSEM has a smaller data space than EM, because the maximization uses only a subset of projections rather than the full set. SAGE attempts to decrease the size of this data space further by updating the image one pixel at a time. If we index the pixels by j = 1…F, the simplest SAGE method [33] is therefore given by

    f_j^(k,j) = f_j^(k,j−1) [Σ_{n=1}^{P} L_nj (p_n / p̂_n^(k,j−1))] / [Σ_{n=1}^{P} L_nj],  k = 1, 2, …; j = 1…F    (2.9)

One iteration consists of updating each pixel individually and using the changed pixel in the calculation of p ⊘ Lf for the next pixel. SAGE1 can also be written as

    p̂_n = Σ_{j=1}^{F} L_nj f_j^(0),  n = 1…P
    for k = 1, 2, …
        for j = 1…F
            c_j = [Σ_{n=1}^{P} L_nj (p_n / p̂_n)] / [Σ_{n=1}^{P} L_nj]
            f_j^(k) = c_j f_j^(k−1)    (2.10)
            p̂_n = p̂_n + (f_j^(k) − f_j^(k−1)) L_nj  ∀n : L_nj ≠ 0

2.3.2 SAGE3

Increased rate of convergence is obtained by choosing small data spaces.
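The pixel-at-a-time character of eq. (2.10) can be sketched as follows; the toy system is invented. Note how the running projection estimate p̂ is corrected immediately after each pixel changes, so that later pixels already see the newest image:

```python
import numpy as np

# Sketch of the SAGE1-style update: pixels are visited one at a time and
# the running projection estimate p_hat is updated incrementally.
def sage1_iterate(L, p, f, n_iters):
    f = f.copy()
    p_hat = L @ f
    for _ in range(n_iters):
        for j in range(f.size):
            col = L[:, j]
            c = (col @ (p / p_hat)) / col.sum()   # multiplicative factor c_j
            delta = (c - 1.0) * f[j]
            f[j] *= c
            p_hat += delta * col                  # incremental p_hat update
    return f

L = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
f_true = np.array([2.0, 1.0, 3.0])
p = L @ f_true                                    # noiseless projections
f = sage1_iterate(L, p, np.ones(3), 100)
```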
For SAGE1 the data space over which the maximization is performed is {n_ij}_{i=1}^{P}, where j corresponds to the current pixel being updated. Other data spaces can be created. SAGE3 [33], which is the variant studied here, uses

    z_j^(k) = min_i [ Σ_{n≠j} L_in f_n^(k) / L_ij ]
    Z_i^(k) ~ Poisson(L_ij (f_j + z_j^(k)))    (2.11)
    B_i^(k) ~ Poisson(Σ_n L_in f_n − L_ij z_j^(k))

The data space is {Z_i^(k), B_i^(k)}_{i=1}^{P} and depends on the iteration. The quantity

    p_i = Z_i^(k) + B_i^(k)    (2.12)

has the correct distribution and the maximization can be done over the new data space. This new space attempts to merge the information from the image and its possible background (due to randoms, for example) to minimize the information content. Furthermore, eq. (2.11) shows that the background equivalent for pixel j, z_j, is taken to contain the values of all the other pixels. This is not intuitive, but it results in a data space responsible for faster convergence than just including, for example, randoms as the background for pixel j. SAGE3 is nearly identical to SAGE1 in eq. (2.10) except that the step f_j^(k) = c_j f_j^(k−1) is replaced by

    f_j^(k) = [c_j (f_j^(k−1) + z_j^(k−1)) − z_j^(k−1)]₊    (2.13)

where [x]₊ denotes the operator [x]₊ = x for x > 0 and 0 for x ≤ 0.

2.3.3 Regularized SAGE3

As mentioned, in an effort to further increase convergence, other data spaces have been found. In addition, in order to prevent the classic deterioration of the image after many iterations, as observed in EM, regularization is included. Because SAGE data spaces are small, adding regularization is possible analytically, without resorting to expensive numerical procedures during reconstruction [34]. The principle of regularization is to maximize the function l′(f) = l(f) − βR(f) rather than l(f), where R(f) is some measure of image roughness and β is a regularization parameter which weighs how much regularization is included.
The range of β varies with the units used and may range from 2⁻²⁰ to 2²⁰. For our units we find that 2⁻⁹ ≤ β ≤ 2⁻³ is a useful range, which we sample in differing powers of 2. Image roughness can be measured in many ways [39] [40]; a standard way is to use differences of neighbouring pixels,

    R(f) = ½ Σ_i Σ_{j∈N_i} w_ij (f_i − f_j)²    (2.14)

where w_ij are weight factors, usually inversely proportional to the distance between pixels i and j, and N_i denotes the indices of pixels in the neighbourhood of i. Any image which is locally rough has a higher value of R(f) and thus a lower value of l′(f). Since the method maximizes l′(f), such noisy images are penalized and the algorithm tends to converge to images with a small value of R(f). This is illustrated in Figure 2.3, where a uniform cylinder is reconstructed with various amounts of regularization. In this figure, images with log₂β < 0 were reconstructed with 10 iterations and all other images with 25 iterations.

Figure 2.3: The effect of various values of β on the reconstruction of a uniform cylinder with SAGE.

From Figure 2.4 it is evident that for extensive regularization, for example when log₂β > 0, convergence towards a final image is slow. The inclusion of regularization obviates the need for filtering. The SAGE method tested here is PML-SAGE3 (Penalized Maximum Likelihood SAGE3) [34]. The algorithm is given by

    p̂_n = Σ_{j=1}^{F} L_nj f_j^(0),  n = 1…P
    W_j = Σ_i w_ji
    for k = 1, 2, …
        for j = 1…F
            compute c_j, z_j^(k) and the penalty terms B_j and C_j
            f_j^(k) = [(−B_j + √(B_j² + C_j (f_j^(k−1) + z_j^(k)))) / (βW_j) − z_j^(k)]₊    (2.15)
            p̂_n = p̂_n + (f_j^(k) − f_j^(k−1)) L_nj  ∀n : L_nj ≠ 0

Notice that the parameters B_j and C_j are dependent on the iteration. The main attraction of SAGE is that regularization can be included analytically. This is due to the fact that the data space is small and equations that used to be coupled in regularized EM are now independent.
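The roughness measure of eq. (2.14) can be sketched for a 2D image using a 4-connected neighbourhood and unit weights (both choices are illustrative):

```python
import numpy as np

# Sketch of the roughness penalty R(f) of eq. (2.14): half the sum of
# squared differences between each pixel and its 4-connected neighbours.
def roughness(img):
    r = 0.0
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    r += (img[y, x] - img[ny, nx]) ** 2
    return 0.5 * r   # each unordered pair is counted twice above

flat = np.full((4, 4), 3.0)     # perfectly uniform image: R = 0
noisy = flat.copy()
noisy[1, 2] += 1.0              # a single rough pixel raises the penalty
```

A locally rough image scores a higher R(f), and therefore a lower penalized likelihood l′(f) = l(f) − βR(f), which is exactly why the regularized algorithm favours smooth solutions.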
The apparent increase in complexity in going from eq. (2.10) to eq. (2.15) is not accompanied by an increased computational demand. In fact, efficient implementations of these two variants of SAGE have execution times which are indistinguishable [33]. With any SAGE algorithm, the adjustable parameters are the number of iterations and, if regularization is included (vide supra), the value of the regularization parameter, β. The impact of these two parameters on the final image is shown in Figure 2.4.

Figure 2.4: The effect of the number of SAGE iterations (10, 25, 50 and 100) and regularization (log₂β from −9 to −3) on final image quality.

2.3.4 Notation

To shorten references to the algorithms we will refer to FORE + 2D FBP as F2D. Unless otherwise stated, the filter will be a Hann filter with 0.4 cutoff. The Hann filter is defined by means of the cosine function: it is equal to unity at zero frequency and smoothly goes to zero towards the cutoff, which is expressed as a fraction of the Nyquist frequency. FORE + OSEM with S subsets, I iterations and an f (mm) Gaussian filter will be referred to as FO(S, I, f). FORE + SAGE with r = log₂β and I iterations will be referred to as FS(r, I).

Chapter 3

Figures of Merit for Algorithm Evaluation

In this chapter we address the approach and methods used to critically evaluate the iterative algorithms. The key to successfully judging the reconstruction methods, both individually and against one another, rests in applying them to images which serve to focus on specific and well-defined performance criteria [41].

3.1 Evaluation Philosophy

Our task is to evaluate the performance of iterative algorithms in a clinical setting.
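The Hann apodizing window described in Section 2.3.4 (unity at zero frequency, cosine roll-off to zero at the cutoff, cutoff quoted as a fraction of the Nyquist frequency) can be sketched as follows; the function name and sampling are ours:

```python
import numpy as np

def hann_window(freqs, cutoff):
    # 0.5*(1 + cos(pi*nu/cutoff)): unity at nu = 0, zero at the cutoff,
    # and identically zero beyond it.
    w = 0.5 * (1.0 + np.cos(np.pi * freqs / cutoff))
    return np.where(np.abs(freqs) < cutoff, w, 0.0)

nu = np.linspace(0.0, 0.5, 6)     # sample frequencies up to Nyquist (0.5)
w = hann_window(nu, cutoff=0.4)   # the 0.4 cutoff quoted for F2D
```

In FBP this window multiplies the ramp filter, trading resolution (a lower cutoff blurs more) against noise suppression.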
We will approach our study methodically, at each step increasing the complexity of the figure of merit (FOM) until we are finally applying the algorithms to human scans. In principle, we could attempt to reconstruct human data immediately with the candidate methods and compare the images directly. In doing so, however, we run the risk of not noticing potential systematic artifacts that are present in the image: human subject data, such as a brain scan, is a very complicated image with rich structure and may hide the fact that a given algorithm, for example, has poor radial uniformity. The ideal reconstruction algorithm will produce a reconstructed image which correlates with the activity distribution in the human subject to within the resolution of the scanner. Such an algorithm, whatever assumptions or approximations it makes, is an idealized map between the true activity in the scanned object and the data presented to the experimenter. This correlation, however, is extremely difficult to measure. First and foremost, we do not have access to the actual activity distribution in the human subject without performing invasive tissue studies. Secondly, PET is a complicated imaging modality which requires much data processing and correction even before the actual reconstruction takes place. With so much data manipulation, differences in algorithm performance may arise that are due to small fluctuations in one of the many physical parameters, such as the time of the scan, the amount of scatter, etc. Therefore, it would be difficult to justify, on the basis of even a few tissue sections, that all remaining scans will reach the same level of accuracy.
Given that we cannot realistically know whether the algorithm has correctly reconstructed the actual activity distribution each and every time we perform a scan, it is imperative that the figures of merit address as wide an array of reconstructed image characteristics as possible. If we expose the algorithms to such an arrangement of different inputs, we can then make predictions of how an algorithm will behave with slightly altered or combined inputs. Because our goal is to see whether 3D FBP can be replaced with an iterative solution, it is sufficient to treat the 3D FBP reconstruction as the gold standard and make all comparisons against it. We will therefore treat the actual activity and the 3D FBP reconstructed activity as one and the same. Of course, it may be that 3D FBP itself does not correlate well with the real activity distribution, in which case success in our study would merely mean that the iterative methods have the same problem. However, much study and evaluation of 3D FBP has been done, and its wide implementation in nearly all current 3D PET groups has given plenty of opportunity to expose potential weaknesses.

3.1.1 Phantom Studies

Three different evaluation methods can be applied. The first method utilizes simple phantoms to examine how an algorithm reconstructs basic shapes such as cylinders or point sources. Such studies provide us with global image characteristics such as reconstructed resolution, noise and uniformity [52]. Resolution defines the smallest reconstructed element size. Image noise and uniformity give information about how an algorithm handles the reconstruction of a uniform image. These are the simplest tasks which must be successfully carried out before we apply more complicated tests, such as contrast studies, which inform us how well an algorithm detects contrast variation in the image.
This is extremely important since often we are dealing with small regions in the brain whose activity is increased in comparison to background. If an algorithm provides good resolution and good uniformity but detects activity boundaries poorly, then it clearly cannot be used. It should be noted that each of these methods is required, as each affords information about the algorithm that cannot be obtained by another method. For example, it is possible to have a reconstruction method with good resolution and uniformity but extremely poor contrast recovery; this can typically happen if one over-regularizes the image. In addition, because iterative methods tend to be non-linear, we cannot assume that the reconstructed image is a sum of reconstructed point sources; therefore, we cannot assume that the algorithm will handle a small cylinder (e.g., point source) and a large cylinder (e.g., head sized) in the same way. For this reason, it is possible for radial uniformity to be poor even when point source profiles are correctly Gaussian. We see this typically in incomplete OSEM reconstructions where there is an insufficient number of passes through the data.

3.1.2 Digital Phantoms

Once basic properties are evaluated, one can form an initial estimate of which algorithms will most likely serve better in clinical trials. However, it is possible that competing methods perform equally or nearly equally well, calling for more studies to identify the superior one. An intermediate way of analyzing performance is by way of digital brain phantoms. MRI images are digitized and sectioned into compartments in which simulated activity is varied both across the brain and over time. One can then use a Monte Carlo simulation to produce synthetic data from these images, after allowing for attenuation and scatter. In this case we know the actual activity distribution and we can evaluate the algorithms in absolute terms.
The drawback to these methods, however, is that we cannot be sure that all of the typical PET scanner characteristics have been included in the simulation; for example, both the attenuation and the scatter simulations are approximations (often parametrized) to real processes. On the other hand, one can finely tune the digital phantom and perform specific studies which are ordinarily intractable with simple physical phantoms or human subject studies. Digital phantom studies are beyond the scope of this study. They are an addition to the overall evaluation process and are not explicitly required to bridge the gap between phantom and human subject scans, which themselves necessarily provide results that must be consistent with digital phantoms.

3.1.3 Human Subject Brain Scans

The final image with which an algorithm is evaluated is the actual human subject data. Good performance here, without phantom studies, cannot be extrapolated to future scans. However, if both phantom studies and human subject studies are consistent and show acceptable reconstruction, we can be confident that we have fully evaluated the algorithm, having exposed it to much more image variability than is present in a clinical setting.

3.1.4 Scanning Hardware

All data were acquired with the Siemens ECAT 953B PET scanner. This is a 16 ring tomograph with a 10.4 cm axial extent of the FOV. In 3D acquisition mode, which is the method of data collection for all phantom and human subject scans described in this thesis, the acquired data are histogrammed into 31 transverse and 225 oblique planes.

3.2 Phantom Evaluation Criteria

These sections describe the FOMs used for the evaluation process. First, the FOMs for phantom data are described, followed by the human subject data FOMs.
3.2.1 Resolution

Resolution measures the width of the point spread function of the reconstruction algorithm, and generally varies with location in the FOV. Typically, an array of point sources¹ is used and scanned for a long time to obtain adequate statistics. To measure the resolution, 12 point sources were used at locations x = 0, 5, 10, 15 cm and z = 0, 2, 4 cm within the FOV. The point sources were placed on styrofoam, a medium in which scatter is negligible. The activity of each point source was roughly 0.2 mCi in about 5-10 μl of solution. A total of 12 scans were taken by scanning the sources at four separate axial positions (shifted by 1 mm in each case) and three radial positions (shifted by 1 mm in each case). This method allows for fine sampling of the point source profile. Any values which were measured more than once were averaged (e.g., for each axial shift a separate transaxial resolution can be measured). Once the data from all the scans were amalgamated, each profile was fit with a standard Gaussian in the x, y and z directions and the radial, tangential and axial resolutions were taken as the FWHM of the fitted profile curve.

3.2.2 Noise

Noise is defined as the granularity of a uniform image. If the region of interest (ROI), R, is uniform then the noise is given as σ_R/μ_R, where σ_R and μ_R are the standard deviation and mean, respectively, of the pixel values in R. For this FOM, a 20 cm uniform cylinder was scanned. This cylinder will be referred to as the standard phantom. Three scans were performed to simulate low, medium and high count rates.

¹ Practically, a point source needs to be only a small fraction of the anticipated resolution. Small sources of 1-2 mm in size in a system with 5-6 mm resolution will be reconstructed in a manner nearly indistinguishable from true point sources.
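The noise FOM just defined, σ_R/μ_R over randomly sampled pixels of a uniform region, can be sketched as follows. This is a minimal illustration assuming NumPy, with a synthetic uniform cylinder standing in for a real reconstruction; the function name `noise_fom` is illustrative, not part of the thesis software.

```python
import numpy as np

def noise_fom(image, mask, n_samples=1000, rng=None):
    """Noise FOM: sigma_R / mu_R of pixel values randomly sampled
    inside a uniform region defined by a boolean mask."""
    rng = np.random.default_rng(rng)
    values = image[mask]                       # pixels inside the phantom
    picks = rng.choice(values, size=n_samples, replace=True)
    return picks.std() / picks.mean()

# Toy uniform cylinder: mean 100, 5% Gaussian noise -> FOM near 0.05.
rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, size=(128, 128))
yy, xx = np.mgrid[:128, :128]
mask = (xx - 64) ** 2 + (yy - 64) ** 2 < 50 ** 2
print(round(noise_fom(img, mask, rng=1), 3))
```

For a truly uniform phantom the sampled ratio converges to the pixel-level coefficient of variation, which is what the low/medium/high count scans probe.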
The total counts in each scan were 1M, 15M and 130M counts. Noise was calculated by randomly sampling 1000 pixels within the reconstructed cylinder and computing the value σ_R/μ_R.

3.2.3 Contrast

The contrast recovery of a region R surrounded by background B is defined as

    CR = (μ_R − μ_B)/μ_B    (3.1)

and can be measured with a phantom which includes hot spots. For this, a 20 cm elliptical phantom was used with three hot cylinders of 2 cm diameter placed at (x, y) = (0,0), (7,0) and (5,0) cm. The cylinders were filled with activity at a ratio of 5:1 to background, with a total of 0.5 mCi activity in the phantom at the time of scanning. Three separate scans were taken having 2M, 20M and 100M counts. CR was calculated for each image plane by sampling pixels within the hot cylinder and from the background using identically shaped ROIs and evaluating eq. (3.1) with the summed pixel activities. The final CR value was taken as the average over all but the two edge planes.

3.2.4 Signal-to-Noise Ratio (SNR)

This quantity combines contrast and noise in one FOM and is given by

    SNR = CR/(σ_B/μ_B)    (3.2)

SNR was calculated using the 20 cm ellipse contrast phantom and the CR values from eq. (3.1). The ratio σ_B/μ_B was obtained from the background pixels in the same manner as described in Section 3.2.2. SNR was calculated for each plane.

3.2.5 Axial Uniformity

Axial uniformity measures reconstruction consistency across different planes. For this, the standard phantom was used. For each reconstructed plane in the phantom, the average pixel value, μ_p, was calculated using all the pixels within the phantom for plane p = 1...P. Axial uniformity was taken to be

    U_axial = (max_p μ_p − min_p μ_p)/⟨μ_p⟩    (3.3)

where ⟨·⟩ is an average taken over the index p.

3.2.6 Radial Uniformity

Radial uniformity measures how consistent the reconstructed activity is along a radial profile.
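Eqs. (3.1) and (3.2) above can be sketched directly from sampled ROI pixel values. A minimal sketch, assuming NumPy and synthetic ROI samples in place of real phantom data; `contrast_recovery` and `snr` are illustrative names.

```python
import numpy as np

def contrast_recovery(hot, bkg):
    """Eq. (3.1): CR = (mu_R - mu_B) / mu_B from ROI pixel samples."""
    mu_r, mu_b = np.mean(hot), np.mean(bkg)
    return (mu_r - mu_b) / mu_b

def snr(hot, bkg):
    """Eq. (3.2): SNR = CR / (sigma_B / mu_B), noise taken from background."""
    return contrast_recovery(hot, bkg) / (np.std(bkg) / np.mean(bkg))

# Toy 5:1 hot spot over background -> CR near the theoretical value 4.
rng = np.random.default_rng(0)
bkg = rng.normal(20.0, 2.0, 500)   # background ROI pixels
hot = rng.normal(100.0, 2.0, 500)  # hot cylinder ROI pixels
print(round(contrast_recovery(hot, bkg), 2), round(snr(hot, bkg), 1))
```

With a 10% background coefficient of variation, the SNR for this toy case lands near CR/0.1; partial volume effects in the real phantom pull the measured CR below 4.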
Using the standard phantom, 20 concentric annuli were drawn on each reconstructed plane so as to entirely fill the phantom region. The average activity for each annulus, μ_j, was calculated. Radial uniformity for plane p was taken as

    U_p = (max_j μ_j − min_j μ_j)/⟨μ_j⟩    (3.4)

Eq. (3.4) was averaged over all but the two edge planes.

3.2.7 In-plane Uniformity

Uniformity within a given plane was calculated according to the NEMA uniformity standards [42]. Each plane was divided by a 1 cm² grid pattern and the total activity in each grid element k of plane p, C_k^p, was found. Non-uniformity was evaluated by

    NU⁺_p = (max_k C_k^p − ⟨C_k^p⟩)/⟨C_k^p⟩    (3.5)

    NU⁻_p = (⟨C_k^p⟩ − min_k C_k^p)/⟨C_k^p⟩    (3.6)

and the coefficient of variation was determined according to

    CV_p = σ(C_k^p)/⟨C_k^p⟩    (3.7)

3.2.8 Time Activity Curve Comparison

To test whether the algorithms produce consistent results at different count rates, a dynamic scan of a ¹¹C standard phantom was performed. The scan lasted for 1 hour and was composed of 12 five-minute frames. The total activity in the phantom was about 1 mCi at the beginning of the scan. Each frame was decay-corrected after the scan using the half-life of ¹¹C. For each plane, p, a region of interest, R_p, was drawn to include the phantom without its border. The time activity curve, TAC_p, for the plane was found using R_p. The point on TAC_p associated with frame f = 1...12 was

    TAC_{p,f} = Σ_{(i,j)∈R_p} x_{ij}^{p,f}    (3.8)

where x_{ij}^{p,f} denotes pixel (i,j) in plane p and frame f of the reconstructed image. Because each frame was decay corrected and the phantom was uniform, we expect the set of points given by {TAC_{p,f}} for f = 1...12 to form a flat line. To quantify how much fluctuation this time activity curve has, we found

    ΔTAC_p = (st.dev. of TAC_{p,f}) / (mean TAC_{p,f} value)    (3.9)

and used ⟨ΔTAC_p⟩_p as a measure of TAC fluctuation.
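The TAC flatness measure of eqs. (3.8)-(3.9) can be sketched as follows. This is a hedged illustration assuming NumPy, with synthetic decay-corrected frames of a uniform phantom; `tac_variability` is an illustrative name, not the thesis code.

```python
import numpy as np

def tac_variability(frames, roi_mask):
    """Eqs. (3.8)-(3.9): sum the ROI activity in each frame to build the
    TAC, then return st.dev./mean of the curve (0 for a perfectly flat TAC)."""
    tac = np.array([f[roi_mask].sum() for f in frames])
    return tac.std() / tac.mean()

# 12 decay-corrected frames of a uniform phantom: the TAC should be flat,
# so the variability FOM should be close to zero.
rng = np.random.default_rng(0)
mask = np.ones((64, 64), bool)
frames = [rng.normal(50.0, 1.0, (64, 64)) for _ in range(12)]
print(tac_variability(frames, mask))
```

Averaging this per-plane value over p gives the ⟨ΔTAC_p⟩_p figure reported for each algorithm.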
3.2.9 Calibration Factors

In order to reconstruct absolute activity, usually expressed in units of μCi/ml, it is necessary to calibrate the reconstructed image against a known amount of activity. This is done by scanning a uniform phantom, such as a cylinder, with a known amount of activity inside, measured by means of a well counter. Once the image is reconstructed, it is possible to calculate the conversion factor from counts/pixel to μCi/ml. Typically, each reconstruction algorithm will have its own calibration factors. Their actual values are not important as long as, after their application, the reconstructed activity correlates well with what is actually being measured.

3.3 Human Subject Data Evaluation Criteria

3.3.1 Radial Profile Difference

So far, none of the FOMs that have been described test quantitation, the ability of the algorithm to reconstruct absolute activity values correctly. This aspect of reconstruction is important since often one needs not merely the relative change of activity in an ROI over time but also the absolute concentration of the tracer in that region. A radial profile is defined as a one-dimensional subset of the image sampled along some line. Let x_{ij} be the image and S_R be the set of index pairs which describes the line along which the profile is sampled. Then the summed radial profile is given by

    R = Σ_{(i,j)∈S_R} x_{ij}    (3.10)

We are interested in the difference between the radial profile of an image found with the standard reconstruction method and one obtained with the new algorithms. We define

    ΔR = (1/N) Σ_l (R_l^{std} − R_l^{new})/R_l^{std}    (3.11)

where l indexes the N points along the profile. This is the averaged signed difference between the profiles, normalized by the standard radial profile. This normalization allows one to compare the size of the profile difference to the actual image values. The image frame that is used for this comparison is the one in which the striatal activity is the highest.
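The profile comparison of eqs. (3.10)-(3.11) can be sketched as a vertical line sample through two images. A minimal sketch assuming NumPy; the images are synthetic and the function names (`radial_profile`, `profile_difference`) are illustrative.

```python
import numpy as np

def radial_profile(image, col, rows):
    """Eq. (3.10): image values sampled along a vertical line (fixed column)."""
    return image[rows, col].astype(float)

def profile_difference(std_img, new_img, col=64, rows=slice(25, 100)):
    """Eq. (3.11): mean signed difference (std - new)/std along the line.
    The row range mimics restricting S_R so near-zero values are excluded."""
    r_std = radial_profile(std_img, col, rows)
    r_new = radial_profile(new_img, col, rows)
    return np.mean((r_std - r_new) / r_std)

rng = np.random.default_rng(0)
std = rng.uniform(50, 100, (128, 128))   # stand-in for a 3D FBP image
new = 0.98 * std                         # algorithm reading 2% low everywhere
print(round(profile_difference(std, new), 3))  # → 0.02
```

A positive ΔR means the test algorithm reconstructs lower values than the standard along the profile; values within about 1% indicate no systematic quantitation offset.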
3.3.2 Region-of-Interest Difference

The ROI difference FOM is similar to the radial profile difference described above, but rather than sampling the image over a particular line, one samples strictly from a predefined ROI. For this FOM we use a set of 23 circular ROIs of 5 pixels in diameter. The scan frame used for this comparison is, again, the one where the striatal activity is the highest. Planes 3, 5, 10, 15, 20, 25 and 28 (out of 31 planes) are analyzed and all results are averaged over these planes. There are G = 3 groups of ROIs, g = 1, 2, 3. There were K_1 = 6 striatal ROIs, K_2 = 9 ROIs sampling near the edge of the brain and K_3 = 8 ROIs scattered centrally. We use k as the ROI index with k = 1...K_g. The activity in each ROI for each algorithm was calculated and compared to values found with 3D FBP. We define

    ROI_{k,g} = Σ_{(i,j)∈S_{k,g}} x_{ij}    (3.12)

where S_{k,g} is the subset of image index pairs which describes the kth ROI. Because each ROI has the same size, it is not necessary to normalize by the number of pixels in the ROI, since this quantity will cancel out later in the normalized ratio. The difference is defined for each group, g (striatal, central and outer), as

    ΔROI_g = (1/K_g) Σ_k (ROI_{k,g}^{std} − ROI_{k,g}^{new})/⟨ROI_g^{std}⟩    (3.13)

where

    ⟨ROI_g⟩ = (1/K_g) Σ_k ROI_{k,g}    (3.14)

is the average ROI value for a given ROI group. Eq. (3.13) gives ΔROI_g as the difference between the standard reconstruction ROI values and the test algorithm ROI values, normalized by the average ROI value of the standard method within the group. Figure (3.1) shows how the different ROI groups were placed.

Figure 3.1: The ROIs used for Logan analysis (striatal, cortical input, cerebellar input) and ROI difference analysis (striatal, central, outer).

3.3.3 Image Root-Mean-Square (RMS)

So far, all the FOMs described in the above sections have sampled local characteristics of the phantom image.
Image RMS can be used to find global differences between images produced by two different algorithms. We define the RMS difference between two images x and x′ as

    RMS = sqrt( Σ_{(i,j)∈R} (x′_{ij} − x_{ij})² ) / Σ_{(i,j)∈R} x_{ij}    (3.15)

where R is a subset of image index pairs which describes the region over which the RMS is calculated. This region is taken to be the whole of the head, with a small margin excluded around the boundary. This FOM estimates the difference between two images, normalized to the sum of the standard image, and is sensitive to systematic offsets in quantitation. The signed difference is also calculated, given by

    Δ = Σ_{(i,j)∈R} (x′_{ij} − x_{ij}) / Σ_{(i,j)∈R} x_{ij}    (3.16)

3.3.4 Kinetic Parameter Comparison

Compartmental Models

PET analysis often involves calculating kinetic parameters from the TAC data. These are rate constants in compartmental theories which are used to model the chemical transformations of the tracer in the body. In a compartmental model the tracer is considered to occupy distinct areas of the brain called compartments. Compartments can be either spatial, signifying that the tracer can be at two different physical locations, or chemical, meaning that the tracer can undergo metabolism and take on a different chemical structure. The simplest such model is shown in eq. (3.17).

    C_p →(k_1) C_1    (3.17)

This is usually referred to as a two-compartment model, although some sources do not include the plasma as a compartment. In this model the blood plasma concentration is denoted by C_p. The tracer is assumed to enter and be trapped in a compartment, which could represent a family of cells with the appropriate membrane transporters for this tracer. The rate constant for the entry is k_1 and the concentration of the tracer in the compartment is C_1.
Realistically, both C_p and C_1 are functions of time, with C_p(t) being the input of the system, since it is this concentration that drives the transport of the tracer into the compartment. A model such as in eq. (3.17) is easy to solve, being a first order differential equation with input C_p(t). Commonly, two-compartment models are used to study non-specific binding and three-compartment models are used for specific binding studies. A common model used for receptor studies which employ a reversibly binding tracer, such as raclopride [43][44], is shown as the three-compartment model in eq. (3.18).

    C_p ⇄(k_1, k_2) C_1 ⇄(k_3, k_4) C_2    (3.18)

Now, the tracer binds to the receptor with a rate constant k_3 and dissociates with constant k_4. C_2(t) represents the concentration of tracer bound to the receptor at any one time. In such studies, the distribution volume ratio (DVR) is commonly reported as a characteristic figure of the system and can be used to estimate the number of receptors in the region [45]. Our center is concerned mainly with the study of dopaminergic neurons by means of tracers which have specific uptake in these cells. Compounds such as raclopride, FDOPA and dihydrotetrabenazine all target the dopamine system [46].

Logan Analysis

Attempting to solve compartmental models by solving the underlying set of differential equations requires knowledge of C_p(t), usually obtained by sampling and analyzing arterial blood. This is an invasive procedure that can be stressful on some human subjects. It is possible to analyze the data and obtain the DVR without knowledge of C_p(t). The DVR is then used to assess the amount of available receptors in a given region (in reference to another region).
Typically, if blood data are available and hence C_p(t) is known, the following graphical analysis equation is employed [47]

    ∫₀ᵀ A(t)dt / A(T) = DV · [∫₀ᵀ C_p(t)dt / A(T)] + c    (3.19)

where A(t) is the activity measured by the scanner at time t in a specified ROI. If the time dependence of the intercept, c, is small after a time t*, then for T > t* the slope yields the distribution volume, DV. In a similar manner, one can write eq. (3.20) for the non-receptor region

    ∫₀ᵀ C_B(t)dt / C_B(T) = λ · [∫₀ᵀ C_p(t)dt / C_B(T)] − 1/k_2    (3.20)

where C_B represents the activity measured by the scanner in a receptor-free region. For raclopride, for example, the receptor-free region can be either the cortex or the cerebellum (vide infra). Here λ = k_1/k_2 for the receptor-free region. If one isolates ∫₀ᵀ C_p(t)dt in eq. (3.20) and substitutes this into eq. (3.19) the result is

    ∫₀ᵀ A(t)dt / A(T) = DVR · [ (∫₀ᵀ C_B(t)dt + C_B(T)/k_2) / A(T) ] + c′    (3.21)

where DVR = DV/λ = B′_max/K′_d + 1, where B′_max is the free receptor concentration (receptors not occupied by endogenous neurotransmitter) and K′_d is the effective equilibrium dissociation constant (K′_d = K_d/f_NS, where f_NS is the free fraction of tracer) [48].

Logan Analysis Implementation

The Logan-type bloodless analysis described above was performed on human subject scans using cerebellar and tissue input functions. 8 regions of interest were placed in the striatum, with 6 background tissue regions and 3 cerebellar regions. The DVR values were calculated for each method and directly compared to the results from 3D FBP. 10 human subjects were used for this comparison. Each of these scans used one of four tracers. In total there were 7 raclopride scans and one scan each with Sch 23390, tetrabenazine and methylphenidate. The Logan slope was calculated using both cerebellar and cortical input functions for each of the 8 ROIs in the human subject scans.
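Eq. (3.21) is a straight-line fit, so the DVR can be read off as the slope of one running-integral ratio against another. The following is a hedged sketch, not the analysis code used in the thesis: it assumes NumPy, trapezoidal integration, and synthetic mono-exponential TACs with a common late clearance (which makes the plot exactly linear with slope equal to the ratio of the curve amplitudes).

```python
import numpy as np

def cumtrapz0(y, t):
    """Cumulative trapezoidal integral, equal to 0 at t[0]."""
    return np.concatenate(([0.0], np.cumsum(np.diff(t) * (y[1:] + y[:-1]) / 2)))

def logan_slope(t, roi, ref, k2, t_star):
    """Graphical analysis of eq. (3.21): fit int(A)/A(T) against
    (int(C_B) + C_B(T)/k2)/A(T) for T > t_star; the slope estimates DVR."""
    y = cumtrapz0(roi, t) / roi
    x = (cumtrapz0(ref, t) + ref / k2) / roi
    keep = t > t_star
    slope, intercept = np.polyfit(x[keep], y[keep], 1)
    return slope

# Toy striatal and reference TACs decaying at the same late rate:
# the Logan plot is then linear with slope 200/50 = 4.
t = np.linspace(0.0, 90.0, 91)          # minutes
roi = 200.0 * np.exp(-0.01 * t)         # "striatal" activity A(t)
ref = 50.0 * np.exp(-0.01 * t)          # receptor-free region C_B(t)
print(round(logan_slope(t, roi, ref, k2=0.1, t_star=30.0), 2))
```

In practice the slope is computed per ROI and per input region (cortical or cerebellar), then compared across reconstruction algorithms as described below.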
The slope values were then averaged over the ROIs for a given human subject to yield the average value Δs_P = ⟨s_{3DFBP} − s⟩_ROI. The dispersion in this value was taken to be the standard deviation. Once this value was found, it was averaged over all human subjects, Δs = ⟨Δs_P⟩_P, and the error was taken to be the standard deviation. The cortical input is taken from the same planes as the striatal activity, so when Δs_cor is used as a FOM we are testing whether the algorithm reconstructs various regions of the same plane properly. When Δs_cer is used, where the cerebellar input is taken from different planes than the striatal activity, we are checking whether the algorithm performs correctly across many planes. Figure (3.1) shows how the different ROIs were placed. We seek Δs to be as close to zero as possible for both cortical and cerebellar inputs, with a small standard deviation. We report Δs and its error, which measures patient-to-patient difference, as well as the average dispersion in Δs_P, which measures ROI-to-ROI differences for each human subject.

3.3.5 Computational Demands

The speed of the algorithm was measured by the time taken to reconstruct a scan. Image reconstruction is generally composed of two separate steps: correction and reconstruction. In the correction step, the data are corrected for attenuation, scatter and normalization. In the reconstruction step, the corrected sinograms are processed into an image. All of the processing is done at our center on an Ultrasparc I (128 MB RAM) and a CSPI Supercard 1. The Ultrasparc performs all calculations except for (i) attenuation correction, (ii) 2D FBP and (iii) 3D FBP, which are carried out on the Supercard.

Chapter 4

Experimental Results

4.1 Choice of Algorithm Parameters

4.1.1 FORE

FORE requires several parameters to be specified.
These are mainly cutoff values which divide the Fourier frequency space into three distinct regions, in each of which a different rebinning step is performed. The presence of these parameters is closely related to the fact that the implementation of FORE is based on a first-order approximation, rather than on exact rebinning. It was found that varying these parameters did not influence the image in any appreciable way and therefore the default parameters, as optimized by the original authors of the FORE code, were used.

4.1.2 2D FBP

The current reconstruction method is 3D FBP with a Hann filter with 0.4 cutoff. In order to mimic this as closely as possible with FORE and 2D FBP, the same filter was used in the 2D FBP step.

4.1.3 OSEM

The number of iterations and subsets were varied in order to find the optimum choice. A single iteration was seen to be insufficient: in this case not enough passes were made through the projections to fully reconstruct an image. In particular, the radial uniformity for a single iteration showed a marked decrease towards the edge of the cylinder. This effect was not seen for two or more iterations. In some respects, Figure 2.2 can be used to visually estimate the parameter space which is of use.

4.1.4 SAGE

Because regularized SAGE is used, the number of iterations has a natural minimum that should ensure convergence, and the maximum number of iterations is simply defined by a practical limit on computing time. Examining the image RMS for various iterations and values of β, shown in Figure 4.1, shows that about 5-6 iterations are sufficient to reach a stable SAGE image.

4.2 Phantom Studies

4.2.1 Resolution

The point source phantom gives an indication of the reconstructed resolution, as mentioned in Section 3.2.1.
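The behaviour noted in Section 4.1.3, that reconstruction completeness tracks the number of passes through the data (subsets × iterations), can be illustrated with a toy ordered-subsets EM loop. This is a hedged sketch of the generic OSEM update on a random system matrix, assuming NumPy; it is not the scanner software, and `osem` is an illustrative name.

```python
import numpy as np

def osem(A, y, n_subsets, n_iter):
    """Toy ordered-subsets EM. A is the system matrix (projections x pixels),
    y the measured counts. Each sub-iteration uses one block of projection
    rows, so a single iteration makes n_subsets passes over the image."""
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for rows in subsets:
            As = A[rows]
            ratio = y[rows] / np.maximum(As @ x, 1e-12)
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, (24, 6))          # 24 "LORs", 6 pixels
truth = np.array([1.0, 4.0, 2.0, 0.5, 3.0, 1.5])
y = A @ truth                               # noise-free projection data
x = osem(A, y, n_subsets=4, n_iter=100)
print(np.round(x, 2))
```

With noise-free data the estimate approaches the true image as the number of passes grows; stopping after a single pass leaves the reconstruction visibly incomplete, mirroring the radial uniformity drop seen for one iteration with few subsets.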
We are interested in two main trends here: (a) resolution at the center of the FOV, which gives a general impression of how well the system¹ will resolve small objects, and (b) variation of the resolution across the FOV.

Figure 4.1: RMS of a single brain image plane between SAGE and 3D FBP.

Table 4.1 shows the resolution of the algorithms at the center of the FOV as measured using the (0,0) cm point source. It was noticed that FO resolution did not depend on the number of iterations and subsets if there were more than 8 subsets. The table shows FO resolution for S = 12 and I = 2. The f parameter is the filter and, unless otherwise given, it is isotropic and Gaussian. FS parameters are log₂β, and 10 iterations were used for each reconstruction. Two of the SAGE reconstructions (log₂β = −6 and −5) have been additionally filtered, to examine the effect of a posteriori filtering added to regularization. Center-of-FOV volume resolution for 3D FBP and F2D was nearly identical. Since FORE is a rebinning algorithm, some compromise in axial resolution is expected, but only minimally at the center of the FOV. Rebinning algorithms generally do very well at the center of the FOV because the axial mispositioning of events is smallest in this region. The volume resolution is reported in Table 4.1 to facilitate comparisons.

  FWHM Parameters           Radial (mm)  Tangential (mm)  Axial (mm)  Volume (mm³)
  FORE/OSEM S = 12, I = 2
    f = 0 mm                    5.4           4.3             5.2          120
    f = 2 mm                    5.8           4.8             5.4          150
    f = 3 mm                    6.5           5.6             5.9          220
    f = 4 mm                    7.2           6.3             6.8          310
    f = 5 mm                    7.9           7.0             7.7          430
    f = 6 mm                    8.5           7.7             8.8          580
    f = 7,9,2 mm                9.5          10.1             5.4          520
  3D FBP
    Ramp 0.5                    5.7           6.7             5.4          210
    Hann 0.4                    9.5          10.4             5.6          550
    Hann 0.5                    8.4           9.2             5.5          420
  FORE/2D FBP
    Ramp 0.5                    6.5           6.0             5.3          210
    Hann 0.4                   10.1           9.9             5.5          550
    Hann 0.5                    8.8           8.7             5.4          420
  FORE/SAGE I = 10
    log₂β = −9                  5.5           4.9             5.2          140
    log₂β = −6                  6.2           5.4             5.4          180
    log₂β = −5                  6.4           5.7             5.4          195
    log₂β = −4                  6.5           5.7             5.4          205
    log₂β = −3                  6.6           5.8             5.5          210
    log₂β = 0                   6.74          6.0             5.4          220
    log₂β = −5, f = 3 mm        7.6           7.3             6.0          330
    log₂β = −6, f = 4 mm        8.0           7.6             6.8          410

Table 4.1: Resolution for various reconstructions at the center of the FOV. The parameter f represents the spatial filter and β is the regularization parameter for SAGE.

Resolution is an important figure of merit, but cannot be used by itself to evaluate an algorithm. Usually extremely high resolution reconstructions are noisy and are of little use clinically. Filtering or regularization is usually applied to decrease the noise. The resolution results will serve together with noise and human subject data to choose the best algorithm. For FO, the filter f_x = 7 mm, f_y = 9 mm and f_z = 2 mm was used to obtain a resolution-matched reconstruction compared to 3D FBP with Hann 0.4. This filter can be used to compare the algorithms on an equal footing at the resolution currently used with 3D FBP. How the resolution varies across the FOV is an important consideration, particularly because a rebinning algorithm is being applied. Figures (4.2) and (4.3) show how the radial resolution varies with radial and axial distance in the FOV, respectively. Both the radial- and axial-dependent trends are the same for 3D FBP as for the algorithms which use FORE. Figures (4.4) and (4.5) show how axial resolution varies within the FOV.

¹ The system here refers to both hardware and software, since the camera itself has an intrinsic resolution.
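The FWHM values in Table 4.1 come from Gaussian fits to the point-source profiles of Section 3.2.1. A minimal sketch of extracting an FWHM from a sampled profile, assuming NumPy and using a moment-based Gaussian estimate (FWHM = 2·sqrt(2·ln 2)·σ) rather than the actual fitting code used in the thesis:

```python
import numpy as np

def gaussian_fwhm(z, profile):
    """FWHM of a point-source profile from its second moment,
    assuming an approximately Gaussian shape."""
    w = np.clip(profile, 0, None)             # guard against negative pixels
    mu = np.sum(w * z) / np.sum(w)
    var = np.sum(w * (z - mu) ** 2) / np.sum(w)
    return 2.0 * np.sqrt(2.0 * np.log(2.0) * var)

# Synthetic finely sampled profile with a 6 mm FWHM.
z = np.linspace(-20, 20, 401)                 # mm, 0.1 mm sampling
sigma = 6.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
profile = np.exp(-z ** 2 / (2 * sigma ** 2))
print(round(gaussian_fwhm(z, profile), 2))  # → 6.0
```

The 1 mm axial and radial shifts between scans serve exactly this purpose: they provide the fine sampling of the profile that makes such a fit well conditioned.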
It can be seen that axial resolution worsens by about 1 mm as one moves radially out from the center of the FOV, in going from x = 0 cm to x = 10 cm. This trend is the same for 3D FBP as it is for FORE, showing that the rebinning done by FORE does not alter the variability in axial resolution. Resolution in water, a scattering medium, was also measured in the same way but was found to be identical to that measured in air. Furthermore, the profiles obtained from the point sources, in both air and water, were smooth and well suited for Gaussian fits. The resolution results show that the new algorithms reconstruct with the same consistency as 3D FBP within the FOV, and show which parameters can be used to obtain a desired resolution.

Figure 4.2: Variation in radial resolution with radial distance in the FOV.

4.2.2 Noise

Figure 4.6 shows the amount of noise in each reconstruction for both low (L) and high (H) counts using the experimental conditions outlined in Section 3.2.2. The current 3D FBP Hann 0.4 method has a noise value of 0.12 (H) and 0.58 (L), where (H) and (L) indicate high and low counts, respectively. Notice that the ramp filter, although offering a better resolution, produces images with 210% (H) and 320% (L) more noise. F2D images have the same amount of noise as those obtained with 3D FBP. This fact, together with the identical resolution shown in Table 4.1, suggests that these two reconstruction methods will perform similarly, if not identically, in further phantom tests. Of course, F2D must still be tested with biologically meaningful human subject data.

Figure 4.3: Variation in radial resolution with axial distance in the FOV.

OSEM noise is seen to vary with both subsets and iterations. Commonly with EM algorithms, the fewer passes one makes through the data, the less sharp the reconstructed image. In the extreme case of S = I = 1, the noise properties are excellent but, of course, the reconstruction is far from complete. Based on radial uniformity studies, we have found that S = 8 with one iteration provides an incomplete reconstruction, whereas S = 12 with one iteration gives better results. For this reason, we do not test S = 8 but only S = 12 and 16 with 2 and 4 iterations each. Figure 4.6 illustrates that noise is proportional to the product S·I, so that (S,I) = (12,4) has less noise than (16,4) but more than (16,2). Because FO should be filtered, the noise of filtered images is also included in the figure. As expected, a posteriori filtering reduces the image noise substantially, although at the cost of resolution.

Figure 4.4: Variation of axial resolution with radial distance in the FOV.

To compare 3D FBP and FO, one can compare 3D FBP with Hann 0.4 to resolution-matched FO. Figure 4.6 shows that high count FO image noise is 40% lower for (S,I) = (12,2) and 14% lower for (12,4) in comparison to 3D FBP with Hann 0.4. For low count rates, the noise is 38% lower for (12,2) and 26% lower for (12,4). Alternatively, one can see that even unfiltered OSEM reconstructions have lower noise than 3D FBP with ramp 0.5, illustrating the improved noise handling of this algorithm.

Figure 4.5: Variation of axial resolution with axial distance in the FOV.

SAGE noise largely depends on regularization. Extremely regularized images, such as those with log₂β ≥ 0, have very little noise but also very little detail as a result of the large regularization parameter. It was found that for log₂β > −3 the images are too smoothed to be used, even though the point source reconstruction yielded an acceptable resolution². In the high count regime, FS(−4,10) (volume resolution V = 200 mm³) has image noise on par with FO(12,2,3) (V = 216 mm³) and FO(12,4,4) (V = 312 mm³). FS(−5,10) (V = 195 mm³) has image noise of 0.13, which compares to unfiltered FO(12,2,0) (V = 122 mm³).

² Because regularized iterative algorithms are not linear, one cannot directly predict the smoothness of the image from the resolution found using a point source alone.
Once FS(−5,10) is filtered with a 3 mm filter (V = 330 mm³) the noise drops to the same value as FS(−4,10). The less regularized FS(−6,10) has relatively high image noise and needs to be filtered with a 4 mm filter to bring it down to the same level as FS(−4,10) or FO(12,2,3). Results in the low count regime follow the same relative pattern, except that the absolute noise is increased about 10-fold. The noise results show that both SAGE and OSEM can be tuned to have similar noise properties at the same volume resolution. Furthermore, both iterative schemes offer lower noise images at equal or better resolution than 3D FBP.

Figure 4.6: Image noise for various reconstructions at two count rates. Note the scale difference in the two plots.

4.2.3 Contrast

Contrast results, measured as outlined in Section 3.2.3, are shown in Figure 4.7. The CR is quite constant at high counts, except for the excessively regularized case of log₂β = 0. At low counts, SAGE overestimates the CR significantly. All the other methods have CR values very similar to 3D FBP. The theoretical value of CR for a 5:1 hot spot to background ratio is 4 and, because of partial volume effects, the measured values are about 3.2. Also notice in Figure 4.7 that filtering reduces the CR. This is expected for small hot spots, since the activity is smeared outside of the hot spot during filtering.

Figure 4.7: Hot spot contrast for various reconstruction algorithms.

4.2.4 Signal-to-Noise Ratio

Using the noise and contrast data, the SNR values for each reconstruction can be calculated. However, because the CR is fairly constant for all the methods, SNR is inversely proportional to noise and the SNR FOM does not present new information.
Highly regularized SAGE images which have very small noise, such as those with log₂β = 0, will have very high SNR, even though the CR is lower. However, because the images are very blurred they cannot be used.

4.2.5 Axial Uniformity

Axial uniformity is an important measure of consistency across image planes. From Figure 4.8 it can be seen that all algorithms outperform 3D FBP with respect to axial uniformity by about 20%. Unfiltered FO and OSEM have nearly the same uniformity, regardless of parameters, and filtering images produced with either method yields the same result. Axial uniformity at low count rates is significantly lower, but the relative trends between algorithms are unchanged.

Figure 4.8: Axial uniformity for various reconstructions at high count rates.

4.2.6 Radial Uniformity

Radial uniformity at high counts is shown in Figure 4.9. When OSEM is resolution-matched to 3D FBP it is found to have better radial uniformity. SAGE with log₂β = −5 has the same radial uniformity as 3D FBP with Hann 0.4, and when regularization is increased to log₂β = −4 radial uniformity exceeds that of 3D FBP and is on par with FO(12,2,4).

Figure 4.9: Radial uniformity for various algorithms.

4.2.7 NEMA Uniformity

In-plane uniformity is quantified using the NEMA standard outlined in Section 3.2.7. Results are presented in Figure 4.10. All NEMA FOMs compare the iterative methods favourably to 3D FBP and do not show any difference between 3D FBP and F2D.
4.2.8 Time Activity Curve Comparison

The TAC variability FOM is described in Section 3.2.8. Results are presented in Figure 4.11. F2D shows a flatter TAC for the cylindrical phantom than 3D FBP, by about 30%. Equally, FO has excellent TAC consistency, particularly when filtered. SAGE shows significantly more fluctuation in the TAC, indicating a greater dependence of the reconstruction on count rates.

Figure 4.10: NEMA uniformity for various reconstructions in two count regimes.

Figure 4.11: Variability in the TAC for various reconstructions. The figure of merit is ⟨σ_f(TAC_{p,f})/⟨TAC_{p,f}⟩_f⟩_p, where TAC_{p,f} is the ROI activity for frame f = 1...12 and plane p = 3...29.

4.2.9 Summary of Phantom Studies

The phantom studies make it clear that F2D and 3D FBP perform on a nearly equal footing. In addition, iterative methods yield lower noise at equivalent resolution compared to transform methods, as well as better uniformity. Contrast measurements were not useful in distinguishing the algorithms.

4.3 Human Subject Studies

At this point, based on the phantom studies, we discard the FO(16, x), FS(−9, x) and FS(log₂β > −4, x) possibilities because of inferior phantom study results.

4.3.1 Radial Profile Difference

One of the raclopride human subject scans was used to visually examine the radial profiles.
The vertical profile running down the center of the image, parametrized by S_R = {(i, j) | i = 64} (recall that the image is 128 × 128), was extracted for several reconstruction methods. Figure 4.12 shows these profiles.

Figure 4.12: Radial profiles taken vertically through a raclopride brain image. Several reconstructions are shown (F2D Hann 0.4, FO(12,2,0), FO(12,2,3), FO(12,4,4), FS(-6,10) and FS(-4,10)). The solid line represents 3D FBP and the scatter points are the algorithm indicated above the plot.

To estimate the average difference in the radial profiles, all 10 human subject scans were used; the same vertical profile as in Figure 4.12 was calculated and averaged over the central planes 10-20 for all tested reconstruction methods. The subset S_R was further restricted to S_R = {(i, j) | i = 64, 25 ≤ j ≤ 100} so that values near zero would not be compared, since their relative difference would artificially increase Δ_R. Figure 4.13 shows the average Δ_R value for each reconstruction method. The correspondence between 3D FBP and F2D is excellent, and most methods fall within 1% of 3D FBP, showing that there is no systematic quantitation offset.

Figure 4.13: Average radial profile difference for various reconstruction methods.

4.3.2 Region-of-Interest Difference

Figure 4.14 shows average, normalized ROI differences for 3 ROI groups. The differences are slightly larger than those found using radial profiles. Although both methods test quantitation, the radial profiles were averaged over the central planes (10-20) whereas the ROI differences were averaged over planes 3, 5, 10, 15, 20, 25 and 28. Outer planes typically contain fewer counts
and low-statistics images are expected to yield larger differences between algorithms.

4.3.3 Image RMS

Figure 4.15 shows image RMS and signed difference averaged over 10 human subject scans. The RMS is used to gauge the overall difference regardless of sign, and the signed difference is used to test for any systematic offset. F2D continues to prove itself an excellent replacement for 3D FBP, with the RMS and signed difference both being very small. SAGE shows a significantly larger signed difference than all the other algorithms, showing that it reconstructs absolute activities smaller than 3D FBP does. This large difference was not seen in the radial profiles but was seen in the ROI differences for the outer ROI group (though not the central one). This suggests that the difference between 3D FBP and FS grows as one moves radially outward.

4.3.4 Kinetic Parameter Comparison

The kinetic parameters were calculated and compared as outlined in Section 3.3.4. Figures 4.16 and 4.17 show the average difference in Logan slopes and the average dispersion in these slopes, respectively, for a group of 10 human subjects. The standard deviation of the DVR shown in Figure 4.16 represents variability between human subjects. The dispersion shown in Figure 4.17 represents the average variability over different ROIs for a single human subject. F2D yields the best results. For both cortical and cerebellar inputs, Δs is consistent with zero and has a very small dispersion of < 0.5%. Most of the other algorithms give slope values greater than those found with 3D FBP, as would be expected for methods with higher resolution. Both FO and FS perform consistently, but do show a systematic offset in the Logan slopes. FO(12,4,3), for example, shows a difference Δs_cort = -2.5 ± 1.8% and Δs_cereb = -3.9 ± 1.1%. Cortical input results for FO overlap, except for the (12,4,4) case, which yields a lower slope than expected.
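The two whole-image FOMs of Section 4.3.3 can be sketched as follows; the normalization by the mean reference activity is an assumption for illustration, not necessarily the convention used in the thesis:

```python
import numpy as np

def image_rms(test, ref):
    """Root-mean-square difference between two reconstructions,
    normalized by the mean reference activity (an assumed convention)."""
    return np.sqrt(np.mean((test - ref) ** 2)) / ref.mean()

def signed_difference(test, ref):
    """Mean signed difference; a nonzero value flags a systematic
    offset in absolute activity that is invisible to the RMS."""
    return np.mean(test - ref) / ref.mean()

ref = np.full((16, 16), 10.0)
test = ref + np.tile([0.5, -0.5], (16, 8))  # alternating ±0.5, no net offset
print(image_rms(test, ref))          # 0.05
print(signed_difference(test, ref))  # 0.0
```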
Cerebellar input results are more scattered but show the same relative trends as the cortical inputs.

4.3.5 Computational Demands

An important FOM is the amount of time required for reconstruction. Currently, each frame of a scan requires about 8 minutes for transmission processing and 20 minutes for 3D FBP. Rebinning with FORE requires about 0.6 minutes per frame. 2D FBP requires approximately the same amount of time per frame, so the entire F2D reconstruction requires 1.2 minutes per frame. Both FO and FS are slightly slower because of the nature of the iterative reconstruction. For 4 iterations OSEM takes about 2.4 minutes per frame (per iteration it takes the same amount of time as 2D FBP). SAGE requires about 0.8 minutes per frame per iteration, so 5 iterations require 4 minutes. Figure 4.18 compares frame reconstruction times graphically, assuming 8 minutes for transmission processing, 0.6 minutes for FORE, 4 iterations for FO and 5 iterations for FS. The total reconstruction time for a 16 frame study, without counting transmission processing, is 320 minutes (3D FBP), 16 minutes (F2D), 48 minutes (FO(12,4,f)) and 64 minutes (FS(log₂β,5)). If complete processing is included (normalization, scatter correction and attenuation) then 160 minutes should be added to each of these times. Another way to present the computational demands is to give the maximum number of frames that can be reconstructed in a 24 hour period using these algorithms, assuming 100% computer uptime. With 3D FBP one can reconstruct 50 frames per day. FORE radically increases the potential number of frames that can be reconstructed: 155 with 2D FBP, 130 with OSEM and 115 with SAGE.
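The frames-per-day figures follow from simple arithmetic on the per-frame times; a quick check (assuming the 8 minutes of transmission processing are counted for every frame, which reproduces the quoted numbers to within rounding):

```python
# Reproduce the frames-per-day figures quoted in Section 4.3.5.
# Assumption: each frame needs 8 minutes of transmission processing on
# top of the reconstruction (and FORE rebinning) times given in the text.
MINUTES_PER_DAY = 24 * 60
per_frame = {
    "3D FBP": 8 + 20.0,
    "F2D":    8 + 0.6 + 0.6,      # FORE + 2D FBP
    "OSEM":   8 + 0.6 + 4 * 0.6,  # FORE + 4 iterations
    "SAGE":   8 + 0.6 + 5 * 0.8,  # FORE + 5 iterations
}
for method, minutes in per_frame.items():
    print(f"{method}: {MINUTES_PER_DAY / minutes:.0f} frames/day")
# 3D FBP ≈ 51, F2D ≈ 157, OSEM ≈ 131, SAGE ≈ 114 — matching the quoted
# 50, 155, 130 and 115 once rounded to the nearest 5.
```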
Figure 4.14: Average normalized ROI differences for three ROI groups (striatal, central and outer ROIs).

Figure 4.15: Image RMS and signed difference for various reconstructions averaged over 10 human subject scans.

Figure 4.16: Average difference in Logan slopes (DVR) between 3D FBP Hann 0.4 and various reconstruction methods, for cortical and cerebellar input functions. The error bars represent the standard deviation of the DVR among the 10 human subjects.

Figure 4.17: Average Logan slope dispersion of the difference between 3D FBP Hann 0.4 and various reconstruction methods, for cortical and cerebellar input functions.

Figure 4.18: Reconstruction times, split into reconstruction and processing (scatter, normalization, attenuation), for standard 3D FBP and the FORE reconstructions (3D FBP, F2D, FO, FS). The time is significantly reduced because 2D reconstruction methods can be utilized after rebinning with FORE.
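Since every fast variant above relies on FORE, a schematic of its core step — assigning each 2D Fourier component of an oblique sinogram to a direct plane via the frequency-distance relation — may be useful. This is only an illustrative sketch under assumed conventions: real FORE (Defrise et al.) limits the frequency range, treats low frequencies by single-slice rebinning and normalizes by the number of contributions per plane, none of which is shown here.

```python
import numpy as np

def fore_contribution(sino, z, delta, n_planes):
    """Schematic Fourier-rebinning (FORE) step for one oblique sinogram.

    sino[phi, s] is an oblique sinogram centred at axial plane index z
    with obliqueness delta (axial offset per radial unit, an assumed
    parametrization).  Each 2D Fourier component (k, w) is assigned, via
    the frequency-distance relation z' ~ z - delta * k / w, to a stack
    of direct-plane sinograms.
    """
    n_phi, n_s = sino.shape
    F = np.fft.fft2(sino)
    k = np.fft.fftfreq(n_phi) * n_phi      # integer angular harmonics
    w = 2.0 * np.pi * np.fft.fftfreq(n_s)  # radial frequency (rad/bin)
    stack = np.zeros((n_planes, n_phi, n_s), dtype=complex)
    for i in range(n_phi):
        for j in range(n_s):
            zp = z if w[j] == 0.0 else z - delta * k[i] / w[j]
            zp = int(round(zp))
            if 0 <= zp < n_planes:
                stack[zp, i, j] = F[i, j]
    return stack  # caller sums contributions, normalizes, then inverse-FFTs

# Sanity check: a direct (delta = 0) sinogram rebins onto its own plane.
rng = np.random.default_rng(0)
s = rng.random((8, 16))
stack = fore_contribution(s, z=3, delta=0.0, n_planes=7)
assert np.allclose(np.fft.ifft2(stack[3]).real, s)
```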
Chapter 5

Discussion and Conclusion

5.1 Algorithm Performance

When evaluating the performance of any algorithm one should identify the FOMs that are to be considered first, followed by secondary ones. For example, if speed is of the essence, a faster algorithm with slightly lower resolution would be preferred to one which has perhaps higher resolution but is slower. On the other hand, if computing resources are not an issue, one must still weigh the noise/resolution factors and choose what noise level or resolution is acceptable. If the decisive factor is noise then the parameters should be chosen such that the noise is the highest allowable, to provide the best resolution.

All phantom FOM and FOM-difference results are summarized in the following tables: FORE + 2D FBP in Table 5.1, FORE + OSEM in Tables 5.2 and 5.3 and FORE + SAGE in Tables 5.4 and 5.5. Human subject scan FOM differences are given for all algorithms in Table 5.6 (quantitation) and Table 5.7 (DVR).

Method   VR (mm³)  Noise  CR    NU_max  CV    U_ax  U_rad  TAC_var (%)  Speed
3D FBP   548       0.12   3.23  0.25    0.09  2.30  5.80   0.94         20
F2D      548       0.13   3.24  0.27    0.10  1.80  5.60   0.62         1
Δ (%)    0.0       -8     -0.3  -8      -11   22    3      34           95

Table 5.1: Comparison of phantom FOMs for 3D FBP and FORE + 2D FBP. Speed is given in minutes per frame.

5.1.1 FORE and 2D FBP

All of the phantom FOMs show that F2D either equals 3D FBP in performance or supersedes it, as in the case of axial uniformity and TAC variability. Practically, it is the 95% speed increase (i.e. 20 times faster) over 3D FBP, with all other FOMs relatively unchanged, that makes the FORE + 2D FBP choice so appealing. Tables 5.6 and 5.7 show that F2D quantitation is excellent, within 1% of 3D FBP, and DVR differences are also very small.
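The Δ(%) rows of Tables 5.1-5.5 are all consistent with one convention: percent change relative to 3D FBP, 100(FBP - new)/FBP, so that positive values mean improvement for lower-is-better FOMs (including speed, quoted in minutes per frame). A sketch that reproduces the F2D row of Table 5.1 under that assumed convention:

```python
def fom_delta(fbp_value, new_value):
    """Percent change relative to 3D FBP: 100 * (FBP - new) / FBP.

    Positive values indicate improvement for lower-is-better FOMs
    (noise, uniformity, reconstruction time); the tables appear to apply
    the same formula to every column.
    """
    return 100.0 * (fbp_value - new_value) / fbp_value

fbp = {"VR": 548, "Noise": 0.12, "CR": 3.23, "NU_max": 0.25, "CV": 0.09,
       "U_ax": 2.30, "U_rad": 5.80, "TAC_var": 0.94, "Speed": 20}
f2d = {"VR": 548, "Noise": 0.13, "CR": 3.24, "NU_max": 0.27, "CV": 0.10,
       "U_ax": 1.80, "U_rad": 5.60, "TAC_var": 0.62, "Speed": 1}
for key in fbp:
    print(f"{key}: {fom_delta(fbp[key], f2d[key]):+.1f}%")
# Agrees with the published Δ row (0.0, -8, -0.3, -8, -11, 22, 3, 34, 95)
# up to the varying rounding used in the table.
```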
Importantly, the dispersion of the DVR is < 0.5%, signifying that on average, for a single human subject, all of the ROI values used to calculate the average DVR fall very close to the 3D FBP values.

5.1.2 FORE and OSEM

Table 5.3 illustrates the need for a posteriori filtering with OSEM. Certainly the FO(12,2,0) and (12,4,0) options have very poor noise characteristics compared to 3D FBP. The NEMA uniformity values suggest that these images are rougher than 3D FBP. Looking at the noise and radial uniformity FOMs, the only cases which are as good as or better than 3D FBP are (12,2,3), (12,2,4) and (12,4,4). When using the wider 4 mm filter, the resolution is not as good but noise is reduced.

Method      VR (mm³)  Noise  CR    NU_max  CV     U_ax  U_rad  TAC_var (%)  Speed
3D FBP      548       0.12   3.23  0.25    0.09   2.30  5.80   0.94         20
FO(12,2,0)  120       0.14   3.40  0.27    0.093  1.80  5.2    0.80         2
FO(12,2,3)  220       0.10   3.34  0.20    0.077  1.71  4.9    0.76         2
FO(12,2,4)  310       0.087  3.29  0.18    0.068  1.67  4.7    0.68         2
FO(12,4,0)  120       0.22   3.36  0.32    0.12   1.81  6.9    0.81         3
FO(12,4,3)  220       0.14   3.31  0.25    0.094  1.72  6.3    0.68         3
FO(12,4,4)  310       0.11   3.27  0.21    0.083  1.67  6.0    0.61         3

Table 5.2: Comparison of phantom FOMs for 3D FBP and FORE + OSEM. Speed is given in minutes per frame.

Δ (%)       VR  Noise  CR  NU_max  CV   U_ax  U_rad  TAC_var  Speed
FO(12,2,0)  78  -20    5   -6      -3   22    10     15       90
FO(12,2,3)  61  14     3   18      14   26    16     19       90
FO(12,2,4)  43  27     2   29      24   27    19     28       90
FO(12,4,0)  78  -83    4   -24     -29  21    -19    14       85
FO(12,4,3)  61  -17    2   0       -5   25    -10    27       85
FO(12,4,4)  43  0      1   28      8    27    -5     36       85

Table 5.3: Comparison of the difference in phantom FOMs between 3D FBP and FORE + OSEM. Δ denotes the percent improvement in going from 3D FBP to FORE + OSEM; negative (positive) values of Δ signify a decrease (increase) in quality of the particular FOM.
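For reference, the ordered-subsets EM update that FO applies to the rebinned 2D data can be sketched on a toy dense system. This is the generic OSEM of Hudson and Larkin, not the thesis implementation; the subset scheme and Poisson model are the textbook ones.

```python
import numpy as np

def osem(A, y, n_subsets, n_iters, eps=1e-12):
    """Ordered-subsets EM for y ≈ Poisson(A @ x); A has shape (n_bins, n_pix).

    Each sub-iteration updates x using only the projection bins of one
    subset; sweeping all subsets once counts as one iteration, which is
    what gives OSEM its roughly n_subsets-fold speedup over plain ML-EM.
    Assumes every pixel is seen by every subset (dense A).
    """
    n_bins, n_pix = A.shape
    subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
    x = np.ones(n_pix)
    for _ in range(n_iters):
        for rows in subsets:
            As = A[rows]
            ratio = y[rows] / (As @ x + eps)                 # measured / forward-projected
            x = x * (As.T @ ratio) / (As.sum(axis=0) + eps)  # multiplicative EM update
    return x

# One EM iteration on a trivial identity system recovers the data exactly.
print(osem(np.eye(3), np.array([2.0, 5.0, 3.0]), n_subsets=1, n_iters=1))  # → [2. 5. 3.]
```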
For 4 iterations, OSEM image noise is the same as for 3D FBP but volume resolution is improved by 43%. Radial uniformity, however, is reduced by 5% for the (12,4,4) case. Based on the phantom FOMs it appears that 2 iterations is sufficient, with 4 iterations resulting in increased image noise and decreased uniformity. Performing only 2 iterations requires 2 minutes per frame whereas 4 iterations require 3 minutes per frame. This is still 10 and 7 times faster than 3D FBP, respectively.

Method       VR (mm³)  Noise  CR    NU_max  CV    U_ax  U_rad  TAC_var (%)  Speed
3D FBP       548       0.12   3.23  0.25    0.09  2.30  5.80   0.94         20
FS(-6,10,0)  180       0.16   3.33  0.30    0.11  1.77  6.70   1.68         7
FS(-6,10,4)  410       0.11   3.23  0.21    0.08  1.60  6.10   1.67         7
FS(-5,10,0)  195       0.13   3.24  0.27    0.10  1.78  5.90   1.52         7
FS(-5,10,3)  330       0.10   3.17  0.22    0.08  1.69  5.60   1.47         7
FS(-4,10,0)  205       0.11   3.07  0.24    0.09  1.77  4.90   1.50         6.50

Table 5.4: Comparison of phantom FOMs for 3D FBP and FORE + SAGE. Speed is given in minutes per frame.

Quantitation tests using human subject scans do not reveal any benefit to performing 4 iterations instead of 2. In fact, quantitation is a little better with only 2 iterations, as seen in Table 5.6. The DVR difference between 3D FBP and FORE + OSEM is smaller using 4 iterations, as seen in Table 5.7, but only for the cerebellar input function: with FO(12,4,4) the cerebellar input DVR difference is -0.6% whereas FO(12,2,4) yields a -3.7% difference. Generally, OSEM appears to overestimate the cerebellar input DVR, yielding a negative difference.

5.1.3 FORE and SAGE

The results for SAGE are slightly less favourable than for OSEM. Based on noise characteristics, Table 5.5 shows that the log₂β = -4 case is the only one with noise lower than 3D FBP if no filtering is used. In general, the filter can be seen to (a) decrease resolution, (b) decrease noise and (c) increase uniformity.
If the inclusion of a filter is acceptable, then log₂β = -5 with a 3 mm filter is nearly equivalent to log₂β = -4 from the point of view of the phantom FOMs. Filtering the log₂β = -6 case with a 4 mm filter degrades the resolution significantly, leaving it only 25% better than that of 3D FBP.

Δ (%)        VR  Noise  CR  NU_max  CV   U_ax  U_rad  TAC_var  Speed
FS(-6,10,0)  67  -33    3   -20     -26  23    -16    -79      65
FS(-6,10,4)  25  13     0   17      9    30    -5     -78      65
FS(-5,10,0)  64  -9     0   -8      -13  23    -2     -62      65
FS(-5,10,3)  40  13     -2  12      6    27    3      -56      65
FS(-4,10,0)  63  11     -5  5       1    23    16     -60      65

Table 5.5: Comparison of the difference in phantom FOMs between 3D FBP and FORE + SAGE. Negative (positive) values of Δ signify a decrease (increase) in quality of the particular FOM.

With none of the phantom FOMs showing dramatic improvement over OSEM, the fact that SAGE is significantly slower (requiring more iterations) makes this algorithm a less favourable replacement.

Human subject scan quantitation is good but, as already mentioned, the outer ROI differences are large. Additionally, the signed image difference is large, caused most likely by differences in the images at larger radial distances. The DVR values calculated with FS are best for the log₂β = -5 case with a 3 mm filter. As with OSEM, the DVR is systematically overestimated, except for log₂β = -4, where it is 2.4% and 1.4% too small for the cortical and cerebellar inputs respectively.

Table 5.4 shows that FORE and 10 iterations of SAGE require about 7 minutes. As mentioned, 5 iterations is sufficient to reach a stable image, so this time can be expected to be closer to 4 minutes in any practical reconstruction.
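The "x mm" a posteriori filters referred to throughout are spatial smoothing kernels specified by their width. Assuming a Gaussian kernel with the quoted width taken as FWHM (and an illustrative voxel size — both assumptions, not the thesis configuration), applying one is a FWHM-to-sigma conversion followed by a separable convolution:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # ≈ 0.4247

def postfilter(volume, fwhm_mm, voxel_mm):
    """Apply a Gaussian post-filter of the given per-axis FWHM (mm).

    fwhm_mm and voxel_mm are (z, y, x) triples, e.g. fwhm_mm = (4, 4, 4)
    for the "4 mm xyz" filter; the Gaussian shape and the voxel size are
    illustrative assumptions.
    """
    sigma_vox = [f * FWHM_TO_SIGMA / v for f, v in zip(fwhm_mm, voxel_mm)]
    return gaussian_filter(volume, sigma=sigma_vox)

vol = np.zeros((31, 128, 128))
vol[15, 64, 64] = 1.0  # point source, well away from the volume edges
sm = postfilter(vol, fwhm_mm=(4, 4, 4), voxel_mm=(3.4, 2.0, 2.0))
print(sm.sum())  # smoothing preserves total activity: ~1.0
```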
Method       Δ_R   ΔROI_s  ΔROI_c  ΔROI_o  RMS   Signed
F2D          0.04  1.1     1.3     1.2     0.05  -0.13
FO(12,2,0)   0.4   1.5     1.5     1.0     0.23  0.02
FO(12,2,3)   -0.7  1.3     1.7     0.7     0.13  0.18
FO(12,2,4)   -0.9  1.1     1.8     0.5     0.12  0.28
FO(12,4,0)   1.3   1.3     0.8     1.1     0.39  0.10
FO(12,4,3)   1.0   1.0     1.3     0.5     0.19  0.26
FO(12,4,4)   0.7   0.5     1.4     0.3     0.14  0.36
FS(-6,10,0)  1.3   1.7     1.9     4.3     0.19  3.6
FS(-6,10,4)  0.5   1.2     1.8     3.7     0.12  3.7
FS(-5,10,0)  0.7   2.8     2.4     4.8     0.13  4.1
FS(-5,10,3)  0.3   2.5     2.5     4.6     0.11  4.1
FS(-4,10,0)  -0.9  1.0     0.8     3.4     0.12  3.7

Table 5.6: Comparison of differences (%) in human subject study quantitation FOMs between 3D FBP and FORE + 2D FBP/OSEM/SAGE. Δ_R is the radial profile difference; ΔROI_s, ΔROI_c and ΔROI_o are the striatal, central and outer ROI differences.

Method       Δs_cort  Dispersion  Δs_cereb  Dispersion
F2D          0.2      0.4         -1.5      0.4
FO(12,2,0)   -5.4     5.0         -8.9      5.6
FO(12,2,3)   -2.5     4.2         -5.7      4.4
FO(12,2,4)   -0.7     3.7         -3.7      3.9
FO(12,4,0)   -2.5     4.4         -5.7      4.8
FO(12,4,3)   0.2      3.6         -2.3      3.9
FO(12,4,4)   1.8      3.2         -0.6      3.4
FS(-6,10,0)  -6.9     2.8         -9.4      3.4
FS(-6,10,4)  -2.3     2.3         -3.9      2.4
FS(-5,10,0)  -2.5     2.2         -4.5      1.6
FS(-5,10,3)  -0.1     2.1         -1.5      2.1
FS(-4,10,0)  2.4      2.0         1.4       1.9

Table 5.7: Comparison of differences (%) in distribution volume ratios between 3D FBP and FORE + 2D FBP/OSEM/SAGE.

5.2 Algorithm Selection

Many FOMs have been calculated for each algorithm to understand how it behaves under various artificial as well as practical conditions. All of the algorithms tested are faster than 3D FBP, so whichever one is chosen to replace 3D FBP, reconstruction time will decrease significantly. All of the tests suggest that replacing 3D FBP with FORE + 2D FBP changes neither the image characteristics nor the Logan analysis. The sole major difference between these two reconstruction methods is speed: FORE + 2D FBP is 80% faster than 3D FBP, allowing our center to process 120 frames per day instead of only 48. Accuracy and reproducibility are important in clinical studies.
Increased resolution may improve accuracy, but reproducibility may be compromised. On the other hand, if the image is greatly smoothed, accuracy can suffer while reproducibility is increased.¹ Both OSEM and SAGE can be tuned to yield phantom FOMs equivalent to those of 3D FBP, with associated improvements in resolution. In fact, volume resolution can be increased by as much as 61% with OSEM (FO(12,2,3)) and 40% with SAGE (FS(-5,10,3), where DVR agreement with 3D FBP is best). OSEM quantitation is excellent, whereas SAGE quantitation is poorer. In addition, both algorithms appear to overestimate DVR values by 2-3% compared to 3D FBP. For studies where high resolution and minimum noise are desired, FORE + OSEM is a strong candidate, although the biological constants will be slightly different from the F2D results.

Certainly, FORE + 2D FBP can immediately replace 3D FBP, with an increase in overall reconstruction speed as well as in the quality of some figures of merit, such as TAC variability and image uniformity. On the other hand, it is important to realize that in practice a 2% difference in DVR values will never be statistically significant in any PET study of a biological system, owing to (a) the large inherent variability in anatomy and physiology between human subjects, (b) detector drift and normalization and (c) fluctuations in the data due to the statistical nature of radioactive decay as well as the presence of scatters and randoms. Indeed, differences of <10% in DVR are typically not interpretable without very large populations of >50 subjects.

¹If one smooths every human subject scan with an extremely wide Gaussian, all the scans will look the same every time (excellent reproducibility) but none of the analysis will be correct (very poor accuracy).
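For completeness, the DVR values compared in this chapter come from Logan graphical analysis: past some time t*, the plot of normalized integrated ROI activity against normalized integrated reference-region activity becomes linear, and its slope estimates the DVR. The sketch below is the generic reference-tissue form with the k₂ term neglected — an illustration, not the thesis code.

```python
import numpy as np

def logan_dvr(t, c_roi, c_ref, t_star):
    """Estimate DVR as the late-time slope of the reference-tissue Logan plot.

    x = integral(c_ref)/c_roi, y = integral(c_roi)/c_roi; after t_star the
    points fall on a line whose slope approximates the DVR (the k2 term of
    the full formulation is neglected here for simplicity).
    """
    int_roi = np.concatenate(([0.0], np.cumsum((c_roi[1:] + c_roi[:-1]) / 2 * np.diff(t))))
    int_ref = np.concatenate(([0.0], np.cumsum((c_ref[1:] + c_ref[:-1]) / 2 * np.diff(t))))
    late = t >= t_star
    x = int_ref[late] / c_roi[late]
    y = int_roi[late] / c_roi[late]
    slope, _ = np.polyfit(x, y, 1)
    return slope

# Synthetic check: if c_roi is exactly DVR * c_ref, the slope is the DVR.
t = np.linspace(0.0, 60.0, 13)  # 12 frames, as in the study
c_ref = np.exp(-t / 40.0) * t   # arbitrary smooth reference TAC
c_roi = 2.5 * c_ref
print(round(logan_dvr(t, c_roi, c_ref, t_star=20.0), 6))  # 2.5
```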
