Open Collections

UBC Theses and Dissertations

Model-based randoms correction for 3D positron emission tomography. Pointon, Barry William. 2007



Full Text

MODEL-BASED RANDOMS CORRECTION FOR 3D POSITRON EMISSION TOMOGRAPHY

by

BARRY WILLIAM POINTON

B.Sc., The University of British Columbia, 1985
M.Sc., Simon Fraser University, 1990
Dip.C.S., Regent College, 1996

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (Physics)

THE UNIVERSITY OF BRITISH COLUMBIA

April 2007

© Barry William Pointon, 2007

ABSTRACT

Random coincidences (randoms) are frequently a major source of image degradation and quantitative inaccuracy in positron emission tomography. Randoms occurring in the true coincidence window are commonly corrected for by subtracting the randoms measured in a delayed coincidence window. This places additional processing demands on the tomograph and increases the noise in the corrected data. A less noisy randoms estimate may be obtained by measuring individual detector singles rates, but few tomographs have this capability. This work describes a new randoms correction method that obtains the singles rates from an analytic calculation rather than from measurements. It is a logical and novel extension of the model-based methods presently used for scatter correction. The singles calculation uses a set of sample points randomly generated within the preliminary reconstructed radioactivity image. The contribution of the activity at each point to the singles rate in every detector is calculated using a single photon detection model, producing an estimate of the singles distribution. This estimate is scaled to the measured global singles rate and used to calculate the randoms distribution, which is subtracted from the measured image data. The method was tested on a MicroPET R4 tomograph.
Measured and calculated randoms distributions were compared using count profiles and quantitative figures of merit for a set of phantom and animal studies. Reconstructed images corrected with measured and with calculated randoms were also analysed. The calculation reproduced the measured randoms rates to within 1.4% for all realistic studies. The calculated randoms distributions showed excellent agreement with the measured ones, except that the calculated sinograms were smooth. Images corrected with the two methods showed no significant differences due to biases. However, in the situations tested, no significant difference in the noise level of the reconstructed images was detected, owing to the low randoms fractions of the acquired data.

The model-based method of randoms correction uses only the measured image data and the global singles rate to produce smooth and accurate randoms distributions, and therefore places much lower demands on the tomograph than other techniques. It is also expected to contribute to noise reduction in situations involving high randoms fractions.

TABLE OF CONTENTS

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgments
Dedication
Chapter 1
1.1 Overview and Motivation
1.2 The Realm of Medical Imaging
1.3 Nuclear Medicine
1.4 Positron Emission Tomography
Chapter 2
2.1 PET Tomograph Design
2.2 Tomograph Performance Characteristics
2.3 PET Data Acquisition and Reconstruction
2.4 Quantitative Corrections
Chapter 3
3.1 Single and Coincidence Counting
3.2 Random Coincidences
3.3 Delayed Coincidence Measurements of Randoms
3.4 Variance Reduction of Measured Delayed Coincidence Estimates
3.5 Singles-based Corrections
3.6 Discussion of Low Variance Randoms Correction Methods
Chapter 4
4.1 Introduction
4.2 Singles Distribution Calculation
4.3 Geometric Calculations
4.4 Photon Survival Calculation
4.5 Detector Efficiency Calculations
4.6 The Effects of Photon Scatter in the Object
4.7 Calculating, Scaling and Subtracting the Randoms Distributions
Chapter 5
5.1 Development of the Randoms Correction Algorithm
5.2 Structure of the Randoms Correction Code
5.3 The MicroPET R4 Scanner
5.4 The MicroPET R4 Detector Model
Chapter 6
6.1 Measurements
6.2 Data Analysis
6.3 Calculations of the Randoms Distributions
6.4 Results
Chapter 7
7.1 Discussion of Results
7.2 Possible Improvements to the Single Photon Detection Calculation
7.3 Future Directions for this Work
7.4 Summary and Conclusion
Bibliography

LIST OF TABLES

Table 2-1 Properties of scintillators used in PET
Table 6-1 Experimental parameters of the studies with a high activity point source at various positions in air-filled and water-filled cylinders. There are no scan positions labelled 3 and 4.
Table 6-2 Studies with high activity point sources in complex attenuation objects
Table 6-3 Details of the parameters in the uniform cylindrical phantom studies
Table 6-4 Details of the parameters in the contrast phantom studies
Table 6-5 The values of the FoMs between measured and calculated randoms distributions in detecting block structure in the central slice sinogram
Table 6-6 Measured and calculated count rates and FoMs comparing the randoms central slice sinograms from frame 3 of the line source in air for different values of μ
Table 6-7 Measured and calculated count rates and FoMs comparing the randoms distributions from an off-centre point source in a water-filled cylinder for different values of μ
Table 6-8 Measured (calculated) count rates from the high radioactivity line source at one hour time intervals over a 12 hour scan
Table 6-9 FoMs comparing the measured and calculated randoms distributions from the high radioactivity line source at one hour time intervals over the 12 hour scan. The values are for the central slice sinograms. In brackets are the FoMs over all the pixels in the data set
Table 6-10 Measured and calculated count rates from a high radioactivity point source in an air-filled and water-filled cylinder at various positions
Table 6-11 FoMs comparing the measured and calculated randoms distributions from the different point source configurations. The values are for the central slice sinograms.
In brackets are the FoMs over all the pixels in the data set
Table 6-12 Measured and calculated count rates for cylindrical phantom studies
Table 6-13 The measured noise in slice 24 of the hot cylinder when the images are randoms corrected by different methods
Table 6-14 Measured and calculated average count rates from the two mouse study
Table 6-15 The values of the FoMs between measured and calculated randoms distributions in the mouse study for the central slice sinogram and the summed slice sinogram

LIST OF FIGURES

Figure 1-1 Classical anatomical study of a dissected body by Leonardo da Vinci. These dissection studies and sketches showed the beginnings of anatomy as a serious discipline
Figure 1-2 Four coronal views from a PET-FDG whole body scan. The tracer is seen to accumulate in a few organs, especially the brain and bladder. FDG also accumulates in tumours, presenting in the image as hot spots. This image shows normal physiological response and no evidence of malignant tumours
Figure 1-3 The tracer method illustrated. Radioactively labelled tracer is injected into a patient. Over time it accumulates in an organ or lesion where it may be imaged. The dynamic uptake process may also be studied by imaging the tracer distribution as a function of time
Figure 1-4 The energy spectrum of the positrons from 18F. From ref [8]
Figure 1-5 The basic process of positron decay and annihilation. A positron is emitted from a nucleus by beta decay. The positron slows by a series of elastic collisions and thermalizes. A collision of the thermal positron with an atomic electron results in annihilation. Alternately, the positron-electron pair may form Ps before annihilation
Figure 1-6 A pair of detectors in coincidence is used to detect positron annihilation events.
The positron annihilation is assumed to have occurred on the line of response between the two detectors
Figure 1-7 The upper and lower energy thresholds and the energy spectrum. The photopeak from the 511 keV photons is shown along with the Compton plateau. The thresholds bracket the photopeak and reject events with energies outside of the defined window
Figure 1-8 The effects of positron range. While the decays occur within a volume defined by the source, the annihilations occur within a larger volume defined by the positron range from the source
Figure 1-9 The deviation in the measured LOR due to non-collinearity
Figure 1-10 A true coincidence in the detector ring of a PET tomograph
Figure 1-11 A scatter coincidence in the detector ring of a PET tomograph
Figure 1-12 A random coincidence in the detector ring of a PET tomograph
Figure 2-1 The basic layout of the detectors in a PET tomograph. Detector rings of a given diameter surround the detection volume (dashed line). The subject is positioned on a bed in the detection volume
Figure 2-2 The aperture and axial field of view of a dedicated whole body PET scanner
Figure 2-3 A cut-block detector from a PET tomograph. The scintillator is cut into channels which guide the light to the PMTs. The light from the photon interactions in each crystal produces a unique pattern in the PMTs. From this pattern, the hit detector element is identified
Figure 2-4 The detector ring of a whole body dedicated PET tomograph. Detector blocks form a ring around a central aperture and multiple rings increase the axial extent. (Courtesy G.E. Medical Systems)
Figure 2-5 2D and 3D acquisition for a PET head tomograph. In 2D mode (top) the septa absorb the axially oblique photons.
In 3D mode (bottom) oblique LORs are accepted, significantly increasing the number of LORs sampled and, therefore, the sensitivity
Figure 2-6 The relative coincidence count rates from trues, randoms, scatters and the resulting NECR are shown for different activities. This is for a typical scanner operating in 3D mode
Figure 2-7 The geometric parameters describing a LOR within one slice plane. xr is the radial displacement of the LOR from a parallel chord through the centre and θ is the azimuthal angle
Figure 2-8 An example of a sinogram. PET acquisition is performed on an off-axis point source. All the LORs of valid coincidences form a sine wave in the histogram of counts vs. radial position and azimuthal angle. The various parallel projections at different angles are shown outside the tomograph ring
Figure 2-9 Attenuation in PET scanning. The intensity of annihilation photons is reduced in the object along the LOR from the emission point to the edge of the attenuating medium
Figure 2-10 The arrangement for a transmission scanning system. The external sources are used to sample the count rates for LORs through the object
Figure 3-1 Annihilations near the centre of the tomograph have identical geometric efficiency for singles and coincidences
Figure 3-2 Annihilations near the axial ends of the tomograph have similar geometric efficiency for counting singles photons as annihilations near the tomograph centre. However, the coincidence counting efficiency is much lower
Figure 3-3 Singles events originating from radioactivity outside the axial field of view.
These result in single events, but not true coincidences
Figure 3-4 The theoretical singles (solid red line) and coincidence (dotted blue line) geometric efficiencies for a point source as a function of axial position
Figure 3-5 The theoretical singles (solid red line) and coincidence (dotted blue line) total sensitivities for a point source as a function of axial position. The total sensitivity depends on both the geometric efficiency and the detector efficiencies
Figure 3-6 Singles events originating from physical interactions. Annihilations occur within the detection volume, which should cause both photons to be incident on the tomograph. However, Compton scatter causes the loss of one photon; a single event may still be detected
Figure 4-1 A set of randomly positioned radioactivity points (blue circles) from a uniform cylindrical bottle with a neck. The green circles and lines show the FOV of the tomograph
Figure 4-2 The coordinate system for a cylindrical dedicated tomograph. The z-axis goes from back to front
Figure 4-3 The tomograph physical parameters relevant to the solid angle calculation. Note that the crystal dimensions are not to scale
Figure 4-4 The elements of the solid angle calculation from activity sample point i to detector j
Figure 4-5 The axes of the sinograms are related to the tomograph parameters for transverse slices
Figure 4-6 Attenuation sample points along the path from an activity sample point to a detector. Both the primary (blue) and partner (pink) sample points are shown. The green lines show the FOV of the tomograph
Figure 4-7 The diagonal pattern seen in a measured randoms sinogram. This pattern is caused by the "block structure," the systematic efficiency variations within a detector block
Figure 4-8 The differential cross section vs. scattered energy for object-scattered 511 keV photons, from the backscatter energy (170 keV) to full energy (511 keV).
The vertical red lines show two common lower level thresholds used in PET
Figure 4-9 Scattered photons are scattered into a cone around the ray between the emission point and the detector. Some of the scattered photons hit other detectors
Figure 4-10 The normalized differential (solid) and integral (dashed) cross sections for the Compton scattering of 511 keV photons at angles from 0 to 180°
Figure 4-11 Some of the photons scattered along r_A hit detector B and some scattered along r_B hit detector A. This calculation assumes that the losses and gains of counts due to scatter in neighbouring photon beams are roughly balanced
Figure 5-1 Graphical user interface (GUI) for rand_corr. This interface allows control of all the calculation parameters and the selection of the data sets used for the randoms correction
Figure 5-2 The graphical 2D data analysis tool used in CODEX. A 2D sinogram is displayed
Figure 5-3 The 1-D profile analyze/display tool used in CODEX. This is used to analyze and compare the 1-D profiles from different data sets
Figure 5-4 The structure of the randoms correction algorithm
Figure 5-5 The structure of the routine in 'rand_corr' that calculates the contribution of an activity sample point to the singles and trues distributions. In each element is a reference to the corresponding equation. The "blue" photon refers to the primary photon under consideration, while the "pink" photon is the annihilation partner
Figure 5-6 The calculated singles distribution in the 32 x 192 array of the detectors in the tomograph. The singles distributions before (above) and after (below) correction for relative individual detector efficiencies are shown
Figure 5-7 The calculation of the randoms for the even numbered LORs (left) and the odd numbered interleaved LORs (right). The dashed line shows the electronic FOV
Figure 5-8 The MicroPET R4 small animal tomograph.
The aperture is 120 mm and the axial FOV is 78 mm, large enough to accommodate two mice or one large rat
Figure 5-9 The detector block unit from the MicroPET R4 tomograph. An LSO cut-block scintillator is optically coupled to a position sensitive PMT through a multi-clad fibre-optic bundle. Four boards at the backside of the PMT provide front-end readout. From reference [99]. Used with permission
Figure 5-10 The distribution of the energy deposited by 511 keV photons Compton scattered in the detector (blue). This is convolved with an energy blurring function (FWHM = 23% at 511 keV) (green). The black dashed lines show the 350-650 and 250-650 keV energy thresholds
Figure 5-11 The energy distribution of photons Compton scattered in the object (blue). This is corrected by the intrinsic efficiency (green) and the fraction of photoelectric absorption in the detector (red), and convolved with an energy dependent energy blurring function (teal). These assume a mean detection depth of 10 mm and energy resolution = 23% at 511 keV. The black dashed lines show the 250, 350, and 650 keV energy thresholds
Figure 5-12 Relative detector efficiencies in the detector block model
Figure 6-1 Reconstructed coronal (left), sagittal (centre) and transverse (right) slice images of the off-axis line source. The lines and cross hair mark the approximate position of the tomograph axis
Figure 6-2 The large water bottle used in the point source measurements. The tube containing the point source is seen at the left, sticking out of the neck of the bottle
Figure 6-3 Positions of point sources in an air-filled cylinder (red asterisks) and the corresponding positions in a water-filled cylinder (blue circles). The green lines show the electronic aperture of the tomograph. The black lines show the location of the phantom cylinder walls
Figure 6-4
A coronal slice of the reconstructed attenuation (left) and radioactivity (right) images for the point source in air beside a water-filled ampoule ("complex point 1"). The cursor in the attenuation image marks the position of the point source
Figure 6-5 A sagittal slice of the reconstructed attenuation (left) and radioactivity (right) images for the point source in a water-filled cylinder beside an air-filled sphere ("complex point 2"). The cursor in the attenuation image marks the position of the point source
Figure 6-6 Reconstructed images of a small uniform cylinder scanned with increasing axial displacements (0.0, 1.0, 2.0 and 3.0 cm) (left). Larger displacements resulted in increased radioactivity outside the axial FOV. The reconstructed image of the larger uniform cylinder is shown on the right
Figure 6-7 Reconstructed images of the three contrast phantom configurations: "contrast phantom 1" (left), "contrast phantom 2" (centre), and "contrast phantom 2a" (right). Phantoms 2 and 2a are identical except that the entire phantom was moved between the two scans
Figure 6-8 Reconstructed transverse, sagittal and coronal slices of the reconstructed scan of two mice injected with an 18F-labelled tracer
Figure 6-9 Histograms of attenuation coefficient values from the transmission scan of a water-filled cylinder using a 57Co singles source. The uncorrected histogram (left) shows a water peak at an erroneous value of 0.065 cm⁻¹. The histogram of the segmented attenuation object (right) shows only discrete attenuation values, and the water peak is now at the correct value (0.0958 cm⁻¹). The y-axis shows the frequency of attenuation values
Figure 6-10 The comparison of a measured sinogram (left) to a calculated sinogram without block structure (centre) and a calculated sinogram with block structure (right).
Figure 6-11 The measured (left), calculated (centre) and percent difference (right) central slice randoms sinograms for the 12 hour frame of the off-centre line source study
Figure 6-12 The measured (left), calculated (centre) and percent difference (right) sinograms of a slice midway between the centre and the axial end (slice 48) for the 12 hour frame of the off-centre line source study
Figure 6-13 Radial profiles for azimuthal bin ("az") 48 and different axial slices ("ax") of the measured (red) and calculated (blue) randoms sinograms of the 12 hour frame of the off-centre line source study
Figure 6-14 Azimuthal profiles for radial bin ("rad") 42 and different axial bins ("ax") of the measured (red) and calculated (blue) randoms sinograms of the 12 hour frame of the off-centre line source study
Figure 6-15 Axial profiles for azimuthal bin ("az") 48 and different radial bins ("rad") of the measured (red) and calculated (blue) randoms sinograms of the 12 hour frame of the off-centre line source study
Figure 6-16 The radial (left), azimuthal (centre) and axial (right) summed count profiles of the calculated (blue) and measured (red) randoms distributions from the 12 hour frame of the off-centre line source
Figure 6-17 The radial (left), azimuthal (centre) and axial (right) summed percent difference (data1 = calculated, data2 = measured) profiles for the 12 hour frame of the randoms distributions of the off-centre line source
Figure 6-18 Radial profiles of the direct planes (segment 0) of the measured (red) and calculated (blue) radial profiles for azimuthal bin ("az") 48 and different axial slices ("ax"). Some profiles are missing because those oblique sinograms are not sampled in this segment
Figure 6-19 Radial profiles of the oblique planes (segment -7) of the measured (red) and calculated (blue) radial profiles for azimuthal bin ("az") 48 and different axial slices ("ax").
Some profiles are missing because those oblique sinograms are not sampled in this segment
Figure 6-20 Radial profiles of the oblique planes (segment +7) of the measured (red) and calculated (blue) radial profiles for azimuthal bin ("az") 48 and different axial slices ("ax"). Some profiles are missing because those oblique sinograms are not sampled in this segment
Figure 6-21 The radial summed count profiles of the measured (red) and calculated (blue) randoms distributions for segments -7 (left), 0 (centre) and +7 (right)
Figure 6-22 The measured (left), calculated (centre) and percent difference (right) central slice randoms sinograms for frame 6. The reduced counts in the measured sinogram greatly increase the noise
Figure 6-23 The NMSE (left) and sd(%diff) (right) per slice for the 12 hour frame. The values for NMSE and sd(%diff) are different from those in Table 6-9 because, in the profiles, the calculated data are used in the denominator of (6.3) and (6.5) to avoid "divide-by-zero" errors in the profiles
Figure 6-24 The χ²/N (left) and 2D correlation coefficients (right) per slice for the 12 hour frame. The values of χ²/N are different from those in Table 6-9 because, in the profiles, the calculated data are used in the denominator of (6.4) to avoid "divide-by-zero" errors in the profiles
Figure 6-25 The radial (left) and axial (right) randoms summed percent difference profiles where data1 = measured frame 0, and data2 = measured frame 11. The counts in frame 11 were scaled to those in frame 0
Figure 6-26 The radial (left) and axial (right) randoms summed percent difference profiles where data1 = calculated and data2 = measured randoms.
The top row shows the profiles for frame 0 (high rates) and the bottom row shows the profiles for frame 11 (low rates)
Figure 6-27 Measured (left), calculated (centre) and percent difference (right) randoms sinograms for the central slice of the scan of the point source in water at position 1
Figure 6-28 Measured (left), calculated (centre) and percent difference (right) randoms sinograms for the central slice of the scan of the point source in air at position 1
Figure 6-29 The radial (left), azimuthal (centre) and axial (right) summed percent difference (data1 = calculated, data2 = measured) profiles for the randoms distributions of the point source at position 1 in water
Figure 6-30 The radial (left), azimuthal (centre) and axial (right) summed percent difference (data1 = calculated, data2 = measured) profiles for the randoms distributions of the point source at position 1 in air
Figure 6-31 The radial (left), azimuthal (centre) and axial (right) summed count profiles of the measured (blue) and calculated (red) randoms distributions from the large uniform cylinder study
Figure 6-32 The radial (left), azimuthal (centre) and axial (right) summed percent difference (data1 = calculated, data2 = measured) profiles for the large cylindrical phantom
Figure 6-33 The axial summed count profiles of the measured (blue) and calculated (red) randoms distributions for the small uniform cylinder at 0.0 cm (left) and 3.0 cm (right) displacements
Figure 6-34 The axial summed percent difference profiles of the small uniform cylinder at 0.0 cm (left) and 3.0 cm (right) displacements
Figure 6-35 The radial (left), azimuthal (centre) and axial (right) summed percent difference (data1 = calculated, data2 = measured) profiles for contrast phantom 2, study 1. The profiles are summed over the other binning directions
Figure 6-36 Profiles through a transverse slice of the reconstructed image of contrast phantom 2 with measured (pink squares) and calculated (blue diamonds) randoms correction
Figure 6-37 Profiles through a coronal slice (long axis) of the reconstructed image of contrast phantom 2 with measured (pink squares) and calculated (blue diamonds) randoms correction
Figure 6-38 The placement of the ROIs on the hot uniform cylindrical phantom image. Note the image artifacts in the axial direction
Figure 6-39 The measured noise in the slices of the hot phantom for different randoms corrections
Figure 6-40 Measured (left), calculated (centre) and percent difference (right) randoms sinograms for the central slice of the two mouse study. Note that the measured and calculated sinograms do not have the same scale
Figure 6-41 Measured (left), calculated (centre) and percent difference (right) randoms sinograms for the summed slice of the two mouse study. Note that the measured and calculated sinograms do not have the same scale
Figure 6-42 The radial (left), azimuthal (centre) and axial (right) summed count profiles of the measured (blue) and calculated (red) randoms distributions from the two mouse study
Figure 6-43 The radial (left), azimuthal (centre) and axial (right) summed percent difference (data1 = calculated, data2 = measured) profiles for the mice study
Figure 6-44 Profiles through a transverse slice of the reconstructed mice image using measured (pink squares) and calculated (blue diamonds) randoms correction
Figure 6-45 Profiles through a coronal slice (long axis) of the reconstructed mice image using measured (pink squares) and calculated (blue diamonds) randoms correction
Figure 6-46 Reconstructed randoms central transverse slice images from measured (left) and calculated (centre) randoms sinograms.
Also shown is the subtraction image (right)
Figure 6-47 Reconstructed randoms coronal images from measured (left) and calculated (centre) randoms sinograms. Also shown is the subtraction image (right)

ACKNOWLEDGMENTS

I would like to especially thank Dr. Vesna Sossi for her guidance, help, and support throughout this long, winding and arduous road. Her keen insights, wise direction and coolness in crisis were invaluable in moving the research, and this thesis, forward. I would also like to thank Dr. Anna Celler, Dr. Alex MacKay and Dr. Tom Ruth for their sage advice and patience in serving on my committee. As well, I would like to thank those at the UBC/TRIUMF PET Physics group for all their assistance and discussions, especially Eric Vandervoort, Marie-Laure Camborde, Siobhan McCormick, Dr. Stephan Blinder, Katie Dinelle, and Dr. Kelvin Raywood. Most helpful for the experimental work were Carolyn English and Caroline Williams and the TRIUMF PET cyclotron group, especially Ken Buckley. My colleagues at BCIT in the Physics and Nuclear Medicine Technology departments were also supportive and helpful. I am also grateful for the financial support from BCIT through the Applied Research and Development Fund, as well as the BCIT Professional Development Fund.

DEDICATION

This work is dedicated, with love, to my wife Fiona, for her love and longsuffering. It is also dedicated to my two treasures, Isabelle and Lucy, who missed their daddy while he worked so hard. I also offer special thanks to my parents and in-laws for all their help through this academic trial. εἰς τὴν δόξαν Θεοῦ.

Chapter 1 The Principles of Positron Emission Tomography

1.1 Overview and Motivation

1.1.1 Overview and Motivation of this Work

This thesis describes the investigation of a new approach to randoms correction in positron emission tomography (PET).
PET is an advanced medical imaging technology that measures the amount and distribution of a radioactive tracer in a subject using the radiation from positron decays. The accuracy and precision of PET measurements depend on the quantitative corrections that are applied to the data. These corrections are necessary to account for the physical interactions of the emitted radiation in the subject and in the tomograph detectors. The "randoms correction" is used to remove the counts caused by erroneous coincidence events that degrade the accuracy of PET data. The standard method of randoms correction requires the acquisition and processing of additional data during the PET scan. As well, the correction itself increases the noise in the image data set, reducing the measurement precision. Alternative randoms correction methods have been investigated to minimize the effects of image noise. These generally also require additional data acquisition, storage and processing, and may introduce significant biases into the corrected image data. The new method of randoms correction is based on modeling the physical processes involved in the counting of the erroneous randoms events. A model-based approach is already used successfully for other PET data corrections, such as scatter and normalization, so model-based randoms correction is a novel and logical extension. This scheme is straightforward, in that it does not require the additional measurements or data storage necessary for the other methods, reducing the hardware requirements for a PET scanner. The correction is applied post-acquisition and therefore does not interfere with the acquisition. As well, the correction produces smooth randoms distributions and should contribute to noise reduction in the image.
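The scheme described above can be sketched numerically. This is a hedged illustration, not the thesis code: the detector count, coincidence window width, global singles rate, and the stand-in singles model are all assumed values, and the actual calculation derives per-detector singles rates from a single photon detection model (solid angle, attenuation and detector efficiency factors) rather than from random numbers. Only the scaling step and the standard randoms-rate relation are taken from the method as described.

```python
import numpy as np

rng = np.random.default_rng(0)

n_det = 192   # detectors in one ring (illustrative value)
tau = 6e-9    # coincidence window half-width in seconds (assumed)

# Stand-in for the model-based singles calculation: relative
# singles rates per detector, in counts/s (arbitrary shape).
s_calc = rng.uniform(0.8, 1.2, n_det) * 1.0e4

# Scale the calculated singles so their sum matches the measured
# *global* singles rate -- the only singles quantity this method
# needs from the tomograph.
s_global_measured = 2.0e6  # counts/s (assumed)
s = s_calc * (s_global_measured / s_calc.sum())

# Standard randoms-rate estimate for each detector pair (i, j):
#   R_ij = 2 * tau * S_i * S_j
randoms = 2.0 * tau * np.outer(s, s)

# Total randoms rate summed over all distinct detector pairs.
total_randoms = np.triu(randoms, k=1).sum()
```

In the full correction, the resulting pairwise randoms rates are binned into sinogram form and subtracted from the measured coincidence data; because the calculated singles distribution is smooth, the subtraction adds essentially no noise.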
1.1.2 Thesis Overview

This thesis first presents the principles and technology of PET imaging and then describes the justification, theory, development and testing of the model-based randoms correction. Chapter One begins with a description of the field of medical imaging and the function and place of PET imaging within it. The basic physical principles of positron decay, annihilation and detection are then explained in the context of positron imaging. Chapter Two describes in detail the dedicated detector systems used for PET imaging. The physical layout, components, features and performance of tomographs are given. These details are incorporated in the detector modeling in the randoms correction calculation and therefore require explanation. The form of the acquired PET data, as well as the quantitative corrections and image reconstruction methods, are also discussed, as these may influence the accuracy of the randoms correction. Chapter Three explains the basic causes and effects of random coincidences. The commonly used delayed coincidence method is described. Then, current randoms corrections are compared, showing the advantages and disadvantages that need to be addressed in any new randoms correction method. Chapter Four presents the theory behind the new model-based randoms correction method. The single photon detection model is described. As well, the scaling method and other corrections are discussed. Chapter Five details the present implementation of the model-based randoms correction, beginning with an explanation of the algorithm and computer code. Then, the specific tomograph used for testing, the MicroPET R4™, is described, along with the ways this specific design is modeled in the calculation. Chapter Six describes the experimental phantom and animal measurements used to test the randoms correction calculation. The results of the tests and the figures of merit used in the validation are then presented.
Chapter Seven gives a final discussion of the results of the validation tests. The thesis then concludes with a summary of the work and potential future directions.

1.2 The Realm of Medical Imaging

1.2.1 Introduction

The human body is one of the most complex physical systems in the universe. The body consists of many varied dynamic systems which interact with each other and with the environment. The intricate chemical and biochemical reactions give rise to many apparently unique emergent properties, including self-consciousness. The exploration of this beautiful and intricate world is one of the greatest and most rewarding challenges to modern science. The detailed, systematic study of the human body began with mapping of the large and small morphological and structural characteristics and relationships. These studies, such as those undertaken by Leonardo Da Vinci (Figure 1-1), belong to the discipline of Anatomy.

Figure 1-1 Classical anatomical study of a dissected body by Leonardo Da Vinci. These dissection studies and sketches showed the beginnings of anatomy as a serious discipline.

Physiology is the study of the macroscopic properties of the dynamic physical and biochemical systems and has been pursued since the late nineteenth century. The microscopic interactions responsible for physiological processes have begun to be investigated in vivo with the use of functional medical imaging devices. Since the beginning of the twentieth century, imaging techniques have greatly enhanced the study of the systems of the human body for both research and clinical purposes. These include radiography (planar and computed tomography (CT)), magnetic resonance imaging (MRI), ultrasound, standard nuclear medicine (planar and single photon emission computed tomography (SPECT)) and positron emission tomography (PET).
These employ complex detector systems, similar to those used elsewhere in physics and astronomy, each optimized to detect the different types of signals used to probe the physical and chemical characteristics of the human body. The main task of medical imaging is to provide the tools to assist in answering research and clinical questions in anatomy, physiology and pathophysiology. The imaging disciplines also provide interesting and fruitful research questions for radiation and detector physics, as well as image data and information analysis. Thus, medical imaging physics provides a fertile cross-disciplinary field to which physicists have made, and will continue to make, vital contributions.

1.2.2 X-ray, MRI and Ultrasound

The details of medical imaging physics are described in references [1] and [2], with only a short overview provided here. In radiography, X-rays are produced on one side of the subject by a high efficiency generator tube and detected on the opposite side. The detectors vary in sophistication, from simple photographic film plates to arrays of high pressure xenon proportional chambers. X-rays interact in the body mainly through photoelectric absorption and Compton scatter, and thus probe the local mass densities and atomic numbers of tissue. The differential attenuation of the X-rays in the non-uniform media of the body may be used to produce simple, qualitative planar images or sophisticated tomographic slice images when CT is employed. Radiographic studies yield mainly anatomic information.

In magnetic resonance imaging, the densities and local environments of the nuclear dipole moments of the constituent atoms in tissue are probed using nuclear magnetic resonance techniques. The subject is placed in a large external magnetic field to align the magnetic moments and create a magnetization vector.
A spin flip or sequence of spin flips is induced in one nuclear species, usually 1H, by the application of external radiofrequency (RF) pulses at the appropriate resonance frequency. Relaxation spin-flips then induce a signal in the RF coils that depends on the local 1H densities, due, for example, to H2O or lipids. Gradient magnetic fields are applied to measure the signal from particular volumes within the subject. Slice images are formed using tomographic image reconstruction. Contrast enhancement may be performed using measurements that weight the signal by the longitudinal and/or transverse relaxation times (T1 and T2). MRI images show tissue properties, but advanced techniques, such as functional MRI (fMRI) and magnetic resonance spectroscopy (MRS), are able to derive functional information, such as regional blood flow.

In ultrasound imaging, the acoustic properties of tissue are probed using high frequency sound waves produced by a transducer coupled to the subject. Ultrasound measures the signal from the reflection and refraction in the tissue and produces an image based on tissue density. Advanced techniques, such as Doppler ultrasound, yield some functional information. Tomographic imaging is also being done using ultrasound.

1.2.3 Computed Tomography

Tomographic imaging is the calculation of the distribution of the signal generating material in a slice of an object from the set of projection data taken at many angles. Tomographic reconstruction techniques have been used in mathematics and physics since the practical implementation of computers. Computed tomography in medical imaging was first practically achieved by Godfrey Hounsfield in 1971 with the introduction of the X-ray CT imager [3]. This was immediately followed by the introduction of the first modern (that is, post-CT) PET scanner [4]. The main benefits of tomographic imaging are twofold.
First, tomography yields the proper three dimensional (3D) distribution of material in the subject, not a projection of it. This allows for quantification of the properties of the material. The other benefit is that tomography provides better signal contrast for visual interpretation. X-ray CT, for example, images soft tissue organs that would not be visualized in planar images due to the low contrast.

In planar imaging, the signal at each detector element is the line integral projection through the subject at a given angle. The projected data at a radial position, xr, and azimuthal angle, θ, p(xr, θ), are related to the two dimensional distribution in the subject, f(x,y), by the Radon transform:

p(xr, θ) = ∫∫ f(x,y) δ(x cos θ + y sin θ − xr) dx dy   (1.1)

The 2D distribution, f(x,y), may be reconstructed from the acquired projection data by inverting the Radon transform. Each 2D distribution is a cross-sectional slice image of the object and the set of slices forms a 3D image volume. In practice, different mathematical techniques are used for tomographic image reconstruction, broadly categorized as analytic and iterative. Analytic reconstruction involves directly inverting the Radon transform, and is linear, fast and robust. However, analytic reconstruction alone cannot model the physical and statistical properties of the detection process. Iterative reconstruction makes successive approximations to the image, updating it based on comparisons between the projections of the estimated image and the measured projection data. Some iterative methods model the detection process and may produce images with better noise and accuracy characteristics. However, the use of multiple iterations and added layers of modeling complexity increases the image reconstruction times. As well, these techniques are not linear and may not have good convergence criteria. Tomographic reconstruction in PET is described in more detail in Section 2.3.4.
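A discrete version of the Radon transform of equation (1.1) can be sketched by summing image values along lines at each angle. This is purely illustrative (not reconstruction code from the thesis); the nearest-neighbour sampling, grid size and radial binning are arbitrary choices.

```python
import math

def radon_projection(image, theta, n_r):
    """Approximate p(x_r, theta) for a square 2D image (list of rows):
    line-integral projections at azimuthal angle theta (radians),
    returned as n_r radial samples spanning the image diagonal."""
    n = len(image)
    c = (n - 1) / 2.0                       # image centre, in pixels
    diag = n * math.sqrt(2.0)
    projections = []
    for k in range(n_r):
        x_r = (k - (n_r - 1) / 2.0) * diag / n_r   # radial offset
        total = 0.0
        # Step along the line perpendicular to the (cos θ, sin θ) direction.
        for s in range(-n, n + 1):
            x = c + x_r * math.cos(theta) - s * math.sin(theta)
            y = c + x_r * math.sin(theta) + s * math.cos(theta)
            i, j = int(round(y)), int(round(x))
            if 0 <= i < n and 0 <= j < n:
                total += image[i][j]        # nearest-neighbour line sum
        projections.append(total)
    return projections

# A single hot pixel at the centre projects into the central radial bin.
img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 1.0
print(radon_projection(img, 0.0, 5))  # [0.0, 0.0, 1.0, 0.0, 0.0]
```

Projecting at every angle of a full set, then inverting, is exactly the analytic reconstruction problem described above.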
1.3 Nuclear Medicine

1.3.1 Overview

Nuclear medicine uses nuclear tracer techniques to image the functional and biochemical systems of the body. Nuclear medicine includes planar single photon imaging, as well as tomographic single photon and positron imaging. Routine clinical studies produce qualitative images for visual interpretation. However, PET is able to quantitatively measure a wide variety of functional and biochemical information in vivo. These include such important parameters as regional blood flow, glucose and oxygen metabolism and receptor binding. Many of these measurements are only possible in human subjects through tracer imaging, and only PET has the physical calibrations and corrections necessary for accuracy. Thus, PET has become an important tool for enhancing medicine with the capabilities of quantitative physical science. This is already leading to important advances in both basic medical research (such as brain activation and neuroreceptor studies or tumour kinetics) and the clinical practice of human medicine (such as whole body tumour detection or assessment of response to therapy). Figure 1-2, below, shows a typical set of clinical PET images.

Figure 1-2 Four coronal views from a PET-FDG whole body scan. The tracer is seen to accumulate in a few organs, especially the brain and bladder. FDG also accumulates in tumours, presenting in the image as hot spots. This image shows normal physiological response and no evidence of malignant tumours.

1.3.2 Tracer Imaging

Tracer techniques in chemistry, biology and biochemistry begin with the labelling of a compound with a different atom or molecule (called a "tag" or "label") that is not native to the original. Labels are chosen so that they are detectable externally or following the process under investigation in some by-product.
It was recognized in the early days of nuclear physics that radioactive isotopes would provide very suitable labels for the molecules or compounds ("tracers") used to study a specific chemical or biological system. Georg Hevesy first investigated this approach while working with Niels Bohr in Copenhagen in the 1920s; therefore, Hevesy is considered the "Father" of nuclear medicine while Bohr has been given the title of the "Godfather" [5]. In nuclear medicine, the radiolabelled tracer is introduced into the subject and the physical distribution is measured by an external radiation detector. By measuring the time course of the tracer, temporal information about the tracer distribution may be determined, from which physiological parameters are extracted using various modeling techniques. Figure 1-3 shows the tracer accumulation and detection process.

Figure 1-3 The tracer method illustrated. Radioactively labelled tracer is injected into a patient. Over time it accumulates in an organ or lesion where it may be imaged. The dynamic uptake process may also be studied by imaging the tracer distribution as a function of time.

The radionuclides used in nuclear medicine determine the appropriate imaging system. Single photon emitters, usually producing gamma rays from isomeric transitions (e.g. 99mTc) or following electron capture decays (e.g. 123I, 111In and 67Ga), are imaged with a large area, position sensitive NaI radiation detector called a "scintillation gamma camera." Images are formed on the detector using a lead parallel-hole collimator that selectively absorbs the gamma rays that are not perpendicular to the detector face. Acquisition times of several minutes are necessary to obtain adequate counts to form an image with a sufficient signal-to-noise ratio. In PET, positron emitting radionuclides are imaged with dedicated tomographs. The imaging principles of PET are described in Section 1.4.
Both PET and SPECT use tomographic reconstruction to produce 2D slice images of the spatial distribution of radioactive tracer in the subject. They are "emission tomography" because the radiation used for imaging is emitted from the tracer distribution, in contrast to X-ray CT, where the radiation is transmitted through the subject ("transmission tomography"). In SPECT, reconstruction is done using planar image data which are acquired using a rotating gamma camera at a set of angles around a subject [6]. In PET, the projection data are measured from all angles simultaneously.

One of the major benefits of emission tomography is that, by determining the spatial distribution of the radioactivity in three dimensions, the resulting images may be quantified if all the appropriate corrections are applied to the acquired data. Thus, the values in each reconstructed voxel may be corrected and calibrated to obtain units of radioactivity concentration (Bq/mL). More significantly, through the analysis of the spatial distribution or time course of the radioactive tracer, other physiological information, such as blood flow or neuroreceptor binding potential, may be measured. This involves mathematical modeling of the detailed biochemical processes and measuring the time course of a specific appropriate tracer through imaging the relevant biological structures. Measurements of the time dependent activity in the blood plasma or metabolites may also be necessary.

To obtain quantitatively accurate values, the instrumentation calibration must be performed accurately and the effects of photon interactions in the subject (absorption and scatter) must be corrected, which requires additional measurements and processing. As well, PET requires a correction for accidental or "random" coincidences that occur during data acquisition. In SPECT, corrections for detector blurring are also important.
The development and testing of accurate data corrections is the subject of ongoing research in medical physics.

1.4 Positron Emission Tomography

1.4.1 Introduction

The suitability of easily produced positron emitting radionuclides for biological tracer studies was recognized soon after their initial synthesis in the first cyclotrons during the 1930s [7]. The usefulness of the basic quartet of positron emitters, 11C, 13N, 15O and 18F, in biochemistry research was almost immediately recognized. The unique properties of positron annihilation radiation present the possibility of significant technical advantages over single photon imaging with collimated gamma cameras. The collimators on gamma cameras reduce the geometric efficiency to about 0.01%. They are also a significant source of image blurring. PET tomographs, which do not use collimators, have 50-100 times the counting efficiency of gamma cameras. As well, PET has superior spatial resolution.

1.4.2 The Physics of Positron Annihilation

Positron decay is a nuclear beta decay mode in which the weak interaction in the nucleus transforms a proton into a neutron, accompanied by the emission of a positron and an electron neutrino, ν. Since positron decay occurs in proton-rich nuclei, positron emitters may be easily produced in cyclotrons using (p,n) or (d,n) reactions. The β+s are emitted in a continuous spectrum up to a maximum energy, Eβ,max. Figure 1-4 shows the energy spectrum of β+s from 18F [8].

Figure 1-4 The energy spectrum of the positrons from 18F. From ref [8].

The physics of positron annihilation is well known [9][10]. Positrons slow in matter through a series of elastic collisions with atomic electrons. Nearly all annihilations occur only after the positron has thermalized.
The annihilation probability depends on the relative spins of the electron and positron upon interaction. If the spins are anti-parallel (singlet state), annihilation is ~1115 times more likely than for parallel spins (triplet state). Annihilation from the singlet state produces two photons because each photon has an angular momentum of 1ℏ and two polarization states. Annihilation from the triplet state produces three photons. Since triplet state collisions occur with three times the frequency of singlet state collisions, the final ratio of two- to three-photon annihilations is 372. More exotic processes have been discovered that produce more or fewer photons, but at very low branching ratios.

Alternately, the electron and positron may form a short-lived bound state, positronium (Ps), before annihilating. The probability of positronium formation depends on the medium; in water-equivalent matter it occurs in only 36% of decays. Ps exists in either a singlet state, with spins anti-parallel, 1S0 (para-Ps), or a triplet state, with spins parallel, 3S1 (ortho-Ps). The singlet state is shorter-lived (mean lifetime = 125 ps) than the triplet state (mean lifetime = 140 ns) and again mainly decays by producing a pair of photons.

The two photons emitted from the singlet annihilations, with the electron and positron at rest, each have an energy equal to the electron rest energy (E0 = 511 keV) with exactly opposite momenta (nearly collinear in opposite directions), so that the null angular momentum of the singlet state is conserved. In reality, the distribution of positron energies causes small variations in the energies and the angles between the photons (Section 1.4.5). The triplet state decays to three photons (total angular momentum = 1ℏ) with total energy equal to the sum of the rest masses of the electron-positron pair (1022 keV). Figure 1-5 shows a simplified view of the most frequent two-photon positron annihilation process.
Figure 1-5 The basic process of positron decay and annihilation. A positron is emitted from a nucleus (e.g. 18F) by beta decay. The positron slows by a series of elastic collisions and thermalizes. A collision of the thermal positron with an atomic electron results in annihilation into two 511 keV photons. Alternately, the positron-electron pair may form Ps before annihilation.

1.4.3 Coincidence Detection of Annihilation Radiation

Positron emission tomography uses the coincidence detection of the nearly collinear annihilation photons as projection data from which tomographic images are produced. A PET tomograph is an array of radiation detectors surrounding an imaging volume. Each positron decay within this volume produces two photons. When one detector is hit by a photon, the other detectors are polled for the detection of a second photon. If a second photon is detected within a coincidence time window (~6-15 ns), one coincidence event is counted. If no second photon is detected, no coincidence event is registered and the event is considered a single photon detection (a "single"). Figure 1-6 shows a pair of detectors in coincidence.

Figure 1-6 A pair of detectors in coincidence is used to detect positron annihilation events. The positron annihilation is assumed to have occurred on the line of response between the two detectors.

When two photons are detected in coincidence, it is assumed that they have both originated from the same annihilation and that the positron decay occurred somewhere along the line between the two hit detectors, called the "line of response" (LOR). The LOR does not provide the exact location of the annihilation and decay. Nevertheless, the total number of events per LOR is proportional to the line integral of the radioactivity along the LOR. When multiple detectors are used, the coincidence counts from many lines of response are measured simultaneously to produce a spatial projection profile.
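The polling logic described above amounts to pairing single-photon detections whose timestamps differ by less than the coincidence window. A deliberately simplified sketch (the event format (time_ns, detector_id) and the greedy first-match pairing are illustrative assumptions, not the electronics of any actual tomograph):

```python
def find_coincidences(events, window_ns=10.0):
    """Pair singles (time_ns, detector_id) into coincidence events when two
    hits in different detectors fall within the coincidence time window.
    Returns the list of (id1, id2) lines of response; unpaired hits are
    left as singles."""
    lors = []
    events = sorted(events)          # time-order the singles stream
    i = 0
    while i < len(events) - 1:
        t1, d1 = events[i]
        t2, d2 = events[i + 1]
        if d1 != d2 and (t2 - t1) <= window_ns:
            lors.append((d1, d2))    # coincidence: record the LOR
            i += 2                   # both photons consumed
        else:
            i += 1                   # unpaired hit: counted as a single
    return lors

# Two hits 4 ns apart form a coincidence; the isolated later hit does not.
print(find_coincidences([(0.0, 3), (4.0, 17), (500.0, 8)]))  # [(3, 17)]
```

Note that this pairing cannot distinguish whether the two photons really came from the same annihilation; that distinction is the subject of the next sections.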
A ring of detectors around the radioactivity distribution measures the counts in all the LORs needed to tomographically reconstruct one image slice of the radionuclide distribution. Multiple detector rings are used to measure the counts in the LORs through different slices of the radioactivity distribution. Chapter 2 describes PET tomographs in detail.

1.4.4 Energy Thresholding

The detected events are not only tested for coincidence, they are also analyzed by an energy window. The detected events with energies below the lower threshold or above the upper threshold are rejected. Figure 1-7 shows the thresholds in the energy spectrum.

Figure 1-7 The upper and lower energy thresholds and the energy spectrum. The photopeak from the 511 keV photons is shown along with the Compton plateau. The thresholds bracket the photopeak and reject events with energies outside of the defined window.

The energy window is used to reject events due to the photons that have Compton scattered to lower energies in the object. In Compton scatter, photons are elastically scattered from the loosely bound atomic electrons and lose energy. The energy and scattering angle of the scattered photon are related by the Compton equation:

Esc = E / (1 + α(1 − cos θsc))   (1.2)

where
Esc = energy of the scattered photon.
E = energy of the unscattered photon.
θsc = the photon scattering angle.
α = E/mec² = initial photon energy / electron rest energy (511 keV).
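Equation (1.2) can be evaluated directly. For example, a 511 keV annihilation photon scattered through 90° emerges with about 256 keV, comfortably below the typical lower thresholds discussed next. A small sketch (units and function name are my own):

```python
import math

def compton_energy(e_kev, angle_deg):
    """Scattered photon energy from Eq. (1.2):
    E_sc = E / (1 + a*(1 - cos(theta_sc))), with a = E / 511 keV."""
    a = e_kev / 511.0
    return e_kev / (1.0 + a * (1.0 - math.cos(math.radians(angle_deg))))

print(round(compton_energy(511.0, 90.0), 1))   # 255.5 keV
print(round(compton_energy(511.0, 180.0), 1))  # 170.3 keV (backscatter)
```

Small-angle scatters, by contrast, lose little energy, which is why the energy window alone cannot reject them, as the next paragraph explains.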
The finite energy resolution of the tomograph detectors causes significant overlap between the photopeak and the Compton plateau. Thus, the lower energy threshold may accept and reject both scattered and unscattered events. The choice of threshold value is, therefore, a compromise between counting sensitivity and scatter rejection. Typical values of the lower threshold are 250-350 keV. The optimal threshold depends on the tomograph design and the scanned object.

1.4.5 Positron Range and Photon Non-collinearity

Two details of the positron decay and annihilation process have an impact on the spatial resolution in PET imaging: the positron range and non-collinearity. The emitted positrons travel a short distance (up to a few mm) in the subject before annihilation. This displacement results in the annihilations occurring in a volume larger than the volume of the radionuclide distribution from which they were emitted. The distributions of annihilations from each point of the source overlap, blurring the spatial distribution and reducing the image spatial resolution. The positron range, and therefore the severity of the blurring, depends on Eβ,max, which is radionuclide dependent [11]. The mean positron range in water-equivalent matter for PET radionuclides varies from 0.6 mm for 18F (Eβ,max = 0.630 MeV) to 5.9 mm for 82Rb (Eβ,max = 3.36 MeV). Figure 1-8 illustrates the effects of positron range.

Figure 1-8 The effects of positron range. While the decays occur within a volume defined by the source, the annihilations occur within a larger volume defined by the positron range from the source.

The second physical factor that affects spatial resolution in PET imaging is the small non-collinearity of the annihilation radiation. Annihilation photons are perfectly collinear if the total momentum of the electron-positron pair before annihilation is zero.
However, annihilation may occur when the positron-electron pair has a small but finite momentum; thus, for momentum to be conserved, the angle between the annihilation photons is < 180°. The resulting distribution of angles has a FWHM of 0.5°, so that the angle between the photons is 180° ± 0.25°. Non-collinearity results in errors in the measured LOR and incorrect positioning of the decay, as shown in Figure 1-9. The magnitude of the error depends on the design of the tomograph (Section 2.2.1).

Figure 1-9 The deviation in the measured LOR due to non-collinearity.

1.4.6 Types of Detected Events

As described above, positron annihilation radiation consists of two nearly collinear 511 keV photons. The detection of an individual photon is a single event and the detection of two photons within the coincidence time window is a coincidence event. However, the physical meaning of a coincidence event depends on the histories of the two photons involved. If neither photon interacts with the object containing the radioactivity before hitting the detectors, the coincidence is a "true coincidence" or "true." Such coincidences measure counts on the true line of response along which the annihilation occurred, as shown in Figure 1-10.

Figure 1-10 A true coincidence in the detector ring of a PET tomograph.

However, one (or both) of the annihilation photons may Compton scatter in the object. If both photons are still detected after scattering, they may result in a coincidence event. This accurately reflects the detection of a single annihilation; however, the calculated line of response is incorrect because one (or both) of the photons has been displaced from the original LOR, as shown in Figure 1-11. Such events are called "scatter coincidences" or "scatters."
Given that they result in an inaccurate LOR, scattered events degrade the image. A fraction of the scatter is eliminated by the energy threshold, but scatter rejection is incomplete.

Figure 1-11 A scatter coincidence in the detector ring of a PET tomograph.

A third type of coincidence event occurs when single photons from unrelated annihilations are detected within the coincidence timing window and are counted. This is an "accidental" or "random" coincidence. Randoms incorrectly count the event as originating from a single annihilation and produce an incorrect LOR for the event, as shown in Figure 1-12. Randoms degrade image quality and measurement accuracy. The theory of random coincidences and their correction is described in detail in Chapter 3.

Figure 1-12 A random coincidence in the detector ring of a PET tomograph.

Chapter 2 Tomograph Design and Quantitative Corrections

2.1 PET Tomograph Design

2.1.1 Chapter Overview

This chapter describes the details of the PET detector system that are modeled in the randoms correction calculation. First, the physical design parameters and characteristics of the tomograph and the scintillation detectors are discussed. Then, the various performance characteristics of tomographs are given. Next, the structure of the acquired PET data and the methods of tomographic reconstruction are explained. The chapter ends with a description of the various quantitative corrections that are applied to PET data.

2.1.2 Tomograph Physical Parameters

The detector system used in PET is variously called a "tomograph," "scanner," or "camera." This system is designed to optimally detect the positron annihilation photon pairs emitted from the radioactivity inside a detection volume. Radiation detectors are positioned around this volume and coincidence events in detector pairs are counted. Many designs of positron imaging systems have been explored, including rotating opposing planar detectors [12] and partial rings [13].
Dual-head gamma cameras with upgraded detectors and coincidence acquisition electronics, called "dual head coincidence imaging" (DHCI) systems, have also been developed [14]. None of these designs are presently in common use. The present standard design of a dedicated tomograph is a set of circular or polygonal rings of scintillation detectors, as shown in Figure 2-1. Tomographs vary in the size, type and number of detectors, as well as in the size and number of the detector rings [15][16]. The detector rings enclose the central detection volume, called the "aperture" or "port." This aperture is smaller than the diameter of the detector rings. A bed supports the subject to be imaged and moves them within the port.

Figure 2-1 The basic layout of the detectors in a PET tomograph. Detector rings of a given diameter surround the detection volume (dashed line). The subject is positioned on a bed in the detection volume.

The size of the aperture determines what may be imaged. "Whole-body" scanners require an aperture large enough to image a human torso (56.2 - 70.0 cm). Head scanners have an aperture only large enough to image a human head, around 35.0 cm (CTI-Siemens HRRT™) [17]. Dedicated animal scanners have smaller apertures, varying from 12.0 cm for dedicated rodent scanners (Siemens MicroPET Focus 120™) to 22.0 cm for small primate scanners (Siemens MicroPET Focus 220™) [18]. The aperture size affects the tomograph performance because the solid angle and the blurring effects of non-collinearity depend on the diameter of the detector rings. Larger apertures also increase the cost and complexity of a tomograph because many more detectors are required.

The other major aspect of tomograph geometry is the axial length of the detection volume, or the "axial field of view" (AFOV). The size of the AFOV depends on the size of the individual detectors in the axial direction and on the number of detector rings.
The AFOV determines the coverage of the subject that may be imaged at one bed position. Large axial extents are imaged by scanning at multiple overlapping bed positions and knitting the images together. Again, the values of the AFOV vary between tomograph types. Both the MicroPET Focus 120 and Focus 220 animal scanners have an AFOV of 7.8 cm. The HRRT dedicated brain scanner has an AFOV of 25.2 cm, which is large enough to image the entire brain at one bed position. For whole-body dedicated PET scanners the AFOV is 15.2 - 25.6 cm. Figure 2-2 shows the aperture and axial FOV for a typical whole-body PET tomograph.

Figure 2-2 The aperture and axial field of view of a dedicated whole body PET scanner.

2.1.3 Detector Materials

The detectors in PET tomographs typically consist of inorganic scintillator crystals coupled to photomultiplier tubes (PMTs) [19]. Scintillators with a high density and a high effective atomic number, Zeff, are used because of their high attenuation properties for 511 keV photons. The scintillators most commonly used in modern PET cameras are Lutetium Oxyorthosilicate (LSO), Gadolinium Oxyorthosilicate (GSO) or Bismuth Germanate (BGO), although Sodium Iodide (NaI), Cesium Fluoride (CsF) and Barium Fluoride (BaF2) have been used in the past. The properties of PET scintillators are given in Table 2-1 [20].

Table 2-1 Properties of scintillators used in PET.

Scintillator | Density (g/cm3) | Zeff | Attenuation coefficient at 511 keV (cm-1) | Decay time (ns) | Light output (photons/MeV)
BGO          | 7.1             | 75   | 0.95                                      | 300             | 9,000
NaI(Tl)      | 3.67            | 51   | 0.34                                      | 230             | 41,000
GSO(Ce)      | 6.71            | 59   | 0.70                                      | 60              | 8,000
CsF          | 4.61            | 52   | 0.42                                      | 2.5             | 25,000
BaF2         | 4.89            | 54   | 0.44                                      | 0.6/620         | 1,600/8,200
LSO(Ce)      | 7.4             | 66   | 0.88                                      | 40              | 30,000

LSO, GSO and BGO all have very good attenuation characteristics for gamma rays.
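The practical impact of the attenuation coefficients in Table 2-1 can be seen by computing the intrinsic detection efficiency, εint = 1 − exp(−μ·x), given as equation (2.1) below. For a 25 mm deep crystal (a typical whole-body block depth quoted later in this chapter), the coefficients in the table give (illustrative calculation only):

```python
import math

def intrinsic_efficiency(mu_per_cm, thickness_cm):
    """Intrinsic detector efficiency, eps_int = 1 - exp(-mu * x)."""
    return 1.0 - math.exp(-mu_per_cm * thickness_cm)

# 25 mm (2.5 cm) crystals, 511 keV attenuation coefficients from Table 2-1:
for name, mu in [("LSO", 0.88), ("BGO", 0.95), ("NaI", 0.34)]:
    print(name, round(intrinsic_efficiency(mu, 2.5), 3))
# LSO 0.889
# BGO 0.907
# NaI 0.573
```

The dense oxyorthosilicates and BGO stop roughly 90% of incident 511 keV photons at this depth, while NaI stops barely half, which is why the former dominate modern PET designs.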
In recent years, LSO has become the scintillator of choice because it also has a high scintillation light output (which influences the energy and spatial resolution) and a fast light output decay time (which influences the high count rate characteristics). The fast decay times also allow for a shorter coincidence timing window, which reduces the rate of random coincidences (Section 3.2.1).

LSO is slightly radioactive due to the naturally occurring isotope 176Lu. This has an abundance of 2.58% and a half-life of 3.8 × 10^10 years [21]. 176Lu emits one β− (Eβ,avg = 420 keV) and three gamma rays, with energies and branching fractions of Eγ = 306 keV (94%), 201 keV (78%) and 88 keV (15%), in a single cascade per decay. The total effect of the 306 keV and 201 keV gamma rays is a summed peak at 507 keV with an average yield of 88%. The intrinsic radiation creates a uniform background singles rate in the detector crystals that in turn causes random coincidences. The measured background singles rate in LSO is 240 cps/cc. Simultaneous detection of a beta and a gamma from the same decay in different detectors also results in a true coincidence with a spurious LOR. The impact of the 176Lu decays on tomograph count rates depends on the tomograph design and the energy window settings [22][23].

The total efficiency of each detector depends on the intrinsic efficiency, εint, modified by the effects of the energy window (Section 1.4.4). The intrinsic efficiency is given by:

εint = 1 − e^(−μdet·xdet)   (2.1)

where
μdet = detector scintillator attenuation coefficient.
xdet = detector scintillator thickness.

The energy window reduces the number of counted photons by rejecting those with energies outside the window, lowering the total detector efficiency. The effect of the window on efficiency depends on the characteristics of the detector (the energy resolution and photofraction).
In an imaging situation, the effects of the window also depend on the photon scatter in the imaged object.

2.1.4 Detector Design

The major determining factor for spatial resolution is the size and spacing of the individual detectors. In early PET detector designs, a single scintillator crystal was coupled to a single PMT, so the minimum size of a detector was limited by the size of the PMT. Cut-block detectors were later developed in which many crystal elements could be read by a few PMTs [24]. In this design, a block of scintillator crystal material is cut into an array of elements. The saw cuts are made at various depths and filled with reflecting material. The cuts are designed to give a unique light pattern in the PMTs for annihilation photons interacting in different crystals, thus allowing the hit crystal to be identified. This increases the effective number of detectors (and reduces the effective detector size) without a corresponding increase in the number of PMTs. For example, the block detector in a typical whole body tomograph (CTI-Siemens ACCEL™) uses an 8 x 8 matrix of 6.45 x 6.45 x 25 mm crystals (64 detector elements total) read out by only four PMTs, as illustrated in Figure 2-3.

Figure 2-3 A cut-block detector from a PET tomograph. The scintillator is cut into channels which guide the light to the PMTs. The light from the photon interactions in each crystal produces a unique pattern in the PMTs. From this pattern, the hit detector element is identified.

The block addressing electronics calculates the interaction point of the detected photon by a simple analog ratio, similar to the Anger logic used in scintillation gamma cameras.
For four PMTs (A, B, C, and D), the event position within the block (x, y) is found from the ratios of the output signals:

x = [i_A + i_B - (i_C + i_D)] / (i_A + i_B + i_C + i_D),   y = [i_A + i_C - (i_B + i_D)] / (i_A + i_B + i_C + i_D)    (2.2)

where i_A etc. = signal from PMT A, etc. These position signals are then used to identify the hit detector element from a preset crystal identification map. The summed signal, z = i_A + i_B + i_C + i_D, is proportional to the total energy deposited in the detector block. In other designs, the cut-block crystal is optically coupled to a large position-sensitive PMT that determines the hit detector element. Decreasing the crystal sizes (by making more cuts) improves the spatial resolution. However, more cuts also result in less scintillator material being present in the block. This decreases the "packing fraction" of the detector and reduces the total counting efficiency of the tomograph.
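The block-position ratios of (2.2) can be sketched as a small function; the function name and the example signal values below are illustrative assumptions.

```python
def block_position(i_a, i_b, i_c, i_d):
    # Eq. (2.2): Anger-style position ratios from the four PMT signals;
    # z is the summed signal, proportional to the deposited energy.
    z = i_a + i_b + i_c + i_d
    x = (i_a + i_b - (i_c + i_d)) / z
    y = (i_a + i_c - (i_b + i_d)) / z
    return x, y, z

# A light flash shared equally by all four PMTs maps to the block centre.
print(block_position(10.0, 10.0, 10.0, 10.0))  # (0.0, 0.0, 40.0)
```

An event depositing more light in PMT A than the others shifts both x and y toward positive values, which is what the crystal identification map exploits.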
In older systems, each detector signal was tested for coincidence by dedicated coincidence electronics. In modern tomographs, with a large number of detectors, testing each signal for coincidence with all other detectors is impractical. Instead, each detected event is given a time-stamp and sent to the digital coincidence processor. After a fixed time, each event in the buffer is tested for coincidence with every other event by comparing the differences in detection time to a predetermined coincidence timing window (CTW). The coordinates of the line of response (LOR) are calculated for each identified coincidence event from the positions of the detectors involved. The event may be counted by incrementing the value in a memory location that corresponds to its LOR. Alternately, the event information, including the detection time, may be stored in a list of event-words ("list-mode"). Many aspects of the electronics are user controllable or have parameters in updatable look-up tables. For example, PMT voltages and pre-amplifier gains are usually adjustable through software for tuning. As well, crystal identification map parameters may be adjustable. The final energy window and coincidence timing windows are set as part of the acquisition parameters.

2.1.6 2D and 3D Acquisition Modes

Potential coincidences between detectors in different rings may be processed in different ways. In "2D mode" acquisition, used in some clinical imaging protocols and older tomographs, thin annular lead shields called "septa" are positioned axially between the rings of crystals. The septa absorb photons at large oblique angles, physically preventing true coincidences between all but the neighbouring rings. In addition, coincidences are electronically limited to detectors within the same ring or between neighbouring rings.
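The time-stamp comparison performed by the coincidence processor can be sketched as follows. This is a deliberately simplified single-pass pairing of time-sorted singles; a real processor also handles multiple coincidences and delayed windows, and the function name and sample events are assumptions.

```python
def find_coincidences(singles, ctw):
    # Pair time-stamped singles whose detection times differ by less than
    # the coincidence timing window (CTW). `singles` is a list of
    # (time, detector_id) tuples; times and ctw share the same units.
    singles = sorted(singles)
    coincidences = []
    for (t1, d1), (t2, d2) in zip(singles, singles[1:]):
        if t2 - t1 < ctw and d1 != d2:
            coincidences.append((d1, d2))
    return coincidences

events = [(100, 7), (103, 42), (500, 3), (900, 11), (904, 11)]
print(find_coincidences(events, ctw=6))  # [(7, 42)]
```

Note that the pair at t = 900 and t = 904 is rejected because both events are in the same detector, even though it falls inside the window.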
2D mode has the benefit of reducing the contribution from scattered photons and from photons originating in radioactivity outside the axial field of view of the tomograph, which contaminate the image data. As well, data acquired in 2D mode require simpler reconstruction methods. Virtually all modern tomographs acquire data in "3D mode," in which no inter-slice septa are used and coincidences may be acquired between detectors in any rings. The 3D acquisition mode increases the sensitivity of the tomograph (6-7 times) by allowing events along the oblique LORs to be counted, thus increasing the total number of LORs sampled. This increases the total number of counts collected, improving the statistical quality of the acquired data. However, the increase in sensitivity comes at the expense of an increased contribution of scatter and random coincidences. The reconstruction of 3D data is also more complex because of the increased number of LORs and the difficulties caused by the oblique LORs intersecting multiple transverse slice planes. Figure 2-5 compares 2D and 3D mode acquisition.

Figure 2-5 2D and 3D acquisition for a PET head tomograph. In 2D mode (top) the septa absorb the axially oblique photons. In 3D mode (bottom) oblique LORs are accepted, significantly increasing the number of LORs sampled and, therefore, the sensitivity.

2.2 Tomograph Performance Characteristics

The performance characteristics of a PET tomograph are determined by its design and by the acquisition parameters. The performance ultimately affects the qualitative and quantitative value of the data. The most important performance characteristics are:
• spatial resolution,
• sensitivity, and
• count rate performance.
Other important characteristics include energy resolution and scatter fraction. Standard performance measurements have been published in "NEMA NU 2-2001.
Performance Measures of Positron Emission Tomographs." [25] [26].

2.2.1 Spatial Resolution

The tomograph spatial resolution is defined as the amount of blurring in the reconstructed image, as characterized by the full width at half maximum (FWHM) of the point spread function (PSF). The most significant factors in the spatial resolution are the detector size and separation, but parallax, the block addressing accuracy, non-collinearity and positron range are also important. The reconstruction method and parameters also influence the resolution of the final reconstructed images. The contribution of the detector crystal size, d, to the resolution, FWHM_det, varies with the radial position of the source. At the radial centre FWHM_det = d/2, but the resolution worsens as the source position moves off-axis due to parallax errors and other geometric effects. In cut-block detectors, addressing errors within the block, FWHM_block, further degrade the resolution. Photon non-collinearity (Section 1.4.2) causes additional blurring that depends on the detector separation:

FWHM_non-coll = 0.0022 D_ring    (2.3)

where D_ring = ring diameter. The effect of non-collinearity on spatial resolution is on the order of 1.4-1.5 mm for a 50 cm detector ring and 2.0-3.0 mm for a 100 cm ring. The blurring also increases with positron range, which is radionuclide dependent (Section 1.4.2). The contribution to the FWHM varies from 0.2 mm when imaging with ¹⁸F, increasing to 2.6 mm for ⁸²Rb. Finally, the total FWHM of the PSF of the reconstructed image depends on the reconstruction method.
The total system spatial resolution due to all of these effects is given by [27]:

FWHM_total = k_recon √(FWHM_det² + FWHM_non-coll² + FWHM_range² + FWHM_block²)    (2.4)

where k_recon = a constant that depends on the reconstruction method and parameters. In animal tomographs, spatial resolutions of ~1 mm have been achieved (MicroPET Focus™), but, for commercial whole body scanners, the transverse spatial resolution at 1.0 cm off-axis is 4.5 - 6.3 mm, depending on the tomograph and acquisition mode.

2.2.2 Sensitivity

The sensitivity of a tomograph is the ratio of the measured true coincidence count rate to the activity of the source within the sensitive volume. The sensitivity depends on the tomograph geometry (solid angle) and the detector efficiencies (2.1). The geometric efficiency of a tomograph in 3D mode for a point source at the axial centre is given by:

ε_geo,point = sin(tan⁻¹(D_axial / D_ring))    (2.5)

where D_axial = axial extent, D_ring = ring diameter. In 3D mode, the geometric efficiency varies axially with a triangular profile, with the maximum at the axial centre. Therefore, the average geometric efficiency for an extended radioactive source within the tomograph volume is ε_geo ≈ ε_geo,point / 2. The total tomograph sensitivity also depends on the square of the detector efficiency, because two detectors are required for a coincidence event. The packing fraction of the detectors also affects the sensitivity. Overall, the sensitivity for a point source at the centre of the FOV varies from 0.2 - 0.5% for tomographs operating in 2D mode up to 2 - 10% for those in 3D mode [28].

2.2.3 Count Rate Performance and Noise Equivalent Count Rate

Detector and electronics dead-times in the tomograph limit the performance, causing count-rate-dependent dead-time losses and a maximum achievable count rate. These affect the accuracy of the measured count rates, the total counts obtained for a given acquisition time and, therefore, the signal-to-noise ratio of the data.
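Equations (2.3)-(2.5) above lend themselves to a short numerical sketch. The function names, the example geometry and the k_recon value are assumptions chosen only for illustration.

```python
import math

def system_fwhm(d, d_ring, fwhm_range, fwhm_block, k_recon=1.2):
    # Eqs. (2.3)-(2.4): quadrature sum of the resolution contributions.
    # All lengths in mm; k_recon is an assumed reconstruction constant.
    fwhm_det = d / 2.0
    fwhm_noncoll = 0.0022 * d_ring
    return k_recon * math.sqrt(fwhm_det**2 + fwhm_noncoll**2
                               + fwhm_range**2 + fwhm_block**2)

def geometric_efficiency_point(d_axial, d_ring):
    # Eq. (2.5): 3D-mode geometric efficiency for a point source at the
    # axial centre; d_axial and d_ring in the same units.
    return math.sin(math.atan(d_axial / d_ring))

# Hypothetical whole-body geometry: 6.45 mm crystals, 824 mm ring diameter,
# 18F positron range, 2 mm block-addressing blur, 15.2 cm axial extent.
print(round(system_fwhm(6.45, 824.0, 0.2, 2.0), 2))
print(round(geometric_efficiency_point(15.2, 82.4), 3))
```

With these assumed inputs the total FWHM comes out near 5 mm, inside the 4.5 - 6.3 mm range quoted for commercial whole-body scanners.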
The main factor in the count rate performance is the scintillator decay time, but the processing times at other stages of the electronics also contribute. The decay times range from 40 ns for LSO to 300 ns for BGO, but longer integration times are used to maximize the signals and improve the energy resolution. The finite event processing time also results in pulse pile-up in the detector blocks, which may affect block addressing and energy discrimination accuracy. The metrics used for these performance issues are the maximum trues or trues+scatter count rate, or the count rate associated with 50% dead-time. Another count-rate-dependent issue is random coincidences. As the singles count rates increase, so do the rates of true and random coincidences. However, the randoms increase as the square of the singles, while the trues increase in proportion to the singles (Section 3.2.1). Since randoms degrade the image data, the overall quality of the data depends on the count rates. A useful metric for determining the effect of count rate (and thus randoms rate) on overall data quality for a tomograph is the noise equivalent count rate (NECR) [29]:

R_NEC = R_t² / (R_t + R_sc + k_rand R_r)    (2.6)

where R_t = trues count rate, R_sc = scatters count rate, R_r = randoms count rate, and k_rand = a constant, 1 or 2, depending on the randoms correction method. The dependence of NECR on the count rate (and source activity) allows the effects of the various contributions to the total coincidence rates to be assessed. The NECR has been likened to the square of the signal-to-noise ratio, (S/N)², because it is a comparison of the true signal strength against the background and noise. Since scatters and randoms give inaccurate positional and activity information (Section 1.4.6), they penalize the function. Note that the randoms term contains a factor, k_rand, which depends on the method of randoms correction. Using measured delays increases the contribution from the noise due to the randoms, k_rand = 2.
Using randoms corrections with reduced variance decreases the contribution from the randoms, k_rand = 1, and increases the NECR, as discussed in Chapter 3. The NECR curve is used to compare the effects of different acquisition, processing and correction parameters on the data count rate, as well as the general characteristics of different scanners. In 3D PET, the NECR curve has a peak value which is a useful figure of merit. The activity concentration that produces the maximum NECR may be used to determine the optimal injected dose for a given type of study. Figure 2-6 illustrates the relationship between the various coincidence count rate contributions and the NECR for a typical scanner in 3D mode.

Figure 2-6 The relative coincidence count rates from trues, randoms, scatters and the resulting NECR are shown for different activities. This is for a typical scanner operating in 3D mode.

2.2.4 Other Performance Characteristics

The energy resolution of a PET scanner is determined mainly by the scintillator conversion efficiency and the light collection characteristics of the detector design. For 511 keV photons, the % FWHM values for simple (non-block) detectors are 18%, 7%, 9% and 11% for BGO, NaI, GSO and LSO, respectively [16]. These values, however, are higher in block detectors and vary between individual crystals within a block. The energy resolution determines the optimal energy window settings during data acquisition and the resulting sensitivity and scatter fraction. The scatter fraction, SF, is defined as the fraction of the non-random coincidences in which at least one photon underwent Compton scatter:

SF = R_sc / (R_t + R_sc)    (2.7)

where R_t = trues count rate, R_sc = scatters count rate. Although the scatter fraction depends mainly on the size and density of the object, the SF also varies with the tomograph size, detector characteristics, and acquisition mode and parameters.
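The NECR (2.6) and scatter fraction (2.7) are simple to compute. The sketch below (function names and rates are assumptions) shows how a reduced-variance randoms estimate (k_rand = 1) raises the NECR relative to delayed-window subtraction (k_rand = 2):

```python
def necr(r_t, r_sc, r_r, k_rand=2):
    # Eq. (2.6): noise equivalent count rate. k_rand = 2 for delayed-window
    # randoms subtraction, 1 for a reduced-variance randoms estimate.
    return r_t**2 / (r_t + r_sc + k_rand * r_r)

def scatter_fraction(r_t, r_sc):
    # Eq. (2.7): fraction of non-random coincidences that scattered.
    return r_sc / (r_t + r_sc)

# Hypothetical 3D-mode rates (kcps): trues 100, scatters 60, randoms 40.
print(necr(100.0, 60.0, 40.0, k_rand=2))  # 41.66...
print(necr(100.0, 60.0, 40.0, k_rand=1))  # 50.0
print(scatter_fraction(100.0, 60.0))      # 0.375
```

For these assumed rates, halving the randoms noise factor raises the NECR by about 20%, which is the motivation for the low-variance correction developed in this thesis.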
When measured using a standard phantom, SF becomes a performance measure by which the effects of tomograph design and/or acquisition parameters are compared. For the majority of scanners, the measured SF ranges from 15-20% in 2D mode and 30-40% in 3D mode. However, when performing whole body human scans, the scatter fractions for 3D mode may be significantly higher (50-80%).

2.3 PET Data Acquisition and Reconstruction

2.3.1 PET Data Acquisition

A PET study consists of acquiring the data necessary to reconstruct the radioactivity distribution in the subject. This is primarily an emission scan, which counts the detected coincidences from the annihilation photons originating in the radioactivity within the subject. A single "static" frame is acquired to image a region of the subject when the radionuclide distribution does not change over the duration of the scan. "Whole body" static scans are formed by acquiring static frames at multiple bed positions. If the time course of the tracer distribution is to be measured, the data may be acquired at a predetermined number of contiguous time intervals (time frames), forming a "dynamic scan." Two other scans are also performed: the blank and transmission scans. These are necessary for attenuation correction, as described in Section 2.4.3. A single blank scan is generally performed on a daily basis, and is also used for quality control. If detectors are poorly tuned or failing, they cause visually detectable artifacts in the blank scan. A transmission scan is required for each subject at each bed position. This may be done before or after the administration of the radioactive tracer. During the transmission scan, the subject must be maintained in exactly the same position as in the emission scan; otherwise the attenuation correction would be inaccurate.
2.3.2 PET Data Structure

Since a valid event in PET involves the coincidence detection of two photons by a pair of detectors, events might be labelled simply by the identification of the two detectors involved. A more physical event address is provided by calculating the parameters of the line of response (LOR) between the centres of the two detectors. Within a slice plane, only two parameters are required: x_r (radial displacement), which is the perpendicular distance from the LOR to a parallel chord through the radial centre of the aperture, and θ, the azimuthal angle of the LOR with respect to an arbitrary 0° angle. The LOR parameters are illustrated in Figure 2-7.

Figure 2-7 The geometric parameters describing a LOR within one slice plane. x_r is the radial displacement of the LOR from a parallel chord through the centre and θ is the azimuthal angle.

In 2D PET, only LORs from within transverse slice planes are acquired and these planes are identified by a third parameter: the slice number, or axial position of the slice, "z". In 3D PET, the events from oblique lines of response are also measured. Two parameters are required to label the oblique slice planes: the identities of the two detector rings involved ("z1" and "z2"), the position of one ring and the ring difference to the second ring ("z1" and "δ"), or the position of one ring and the oblique angle of the slice plane ("z1" and "φ"). Thus, four parameters are required to define each 3D LOR uniquely. The total number of possible LORs increases significantly compared with 2D acquisition. PET data have traditionally been stored in two-dimensional histograms of counts per LOR (parameterized by x_r and θ) for each slice plane. Such histograms are called "sinograms," so named because each point of radioactivity in an object traces out a sinusoidal wave with changing angle in the sinogram. Each row of a sinogram contains the data from the parallel LORs at one azimuthal
Each row of a sinogram contains the data from the parallel L O R s at one azimuthal 36 angle from one slice plane. Thus , each row o f the sinogram is a different projection. E a c h sinogram is indexed by the parameters that identify the slice plane. Figure 2-8 shows the L O R s involved in coincidence detections from a point source and their corresponding addresses in a sinogram. Figure 2-8 An example of a sinogram. PET acquisition is performed on an off-axis point source. All the LORs of valid coincidences form a sine wave in the histogram of counts vs. radial positions and azimuthal angle. The various parallel projections at different angles are shown outside the tomograph ring. In 2 D acquisition, each sinogram contains the data from one detector ring or pair o f neighbouring rings that is used to reconstruct one transverse slice; thus, there is one sinogram per image plane. 3 D mode data sets also include many oblique sinograms. E a c h oblique sinogram contributes to multiple transverse image planes during the reconstruction. T h e inclusion o f the oblique sinograms greatly increases the total sinograms per study. F o r example, a 2 D study the number o f acquired sinograms is 2NrjJlgs - 1 , where Nritip is the number o f detector rings. In 3 D acquisition, the maximum number o f sinograms is . Therefore, a tomograph with 32 detector rings yields 63 sinograms in 2 D mode but up to 1024 sinograms in 3 D mode. In practice, data reduction schemes are often used in 3 D mode to reduce the number o f acquired sinograms. 37 Data may also be stored in list mode, as described above. In list mode the data file size grows with the amount of acquired data since each event is stored separately and not immediately histogrammed. List mode is the most flexible way of storing the data since it may be later histogrammed into static or dynamic frames or may be reconstructed directly from list mode [30]. 
2.3.3 Data Reduction Schemes

To reduce the size of the data set, a number of data binning schemes have been introduced [31] [32]. These reduce the size of each sinogram and/or the number of sinograms. The technique of "angular mashing" sums together the counts in the LORs at adjacent angular bins (rows in the sinogram), thus reducing the size of the sinogram and increasing the counts per LOR. For example, if a mashing factor of 2 is used, every pair of rows (angular bins) in the sinogram is summed, reducing the number of rows (and the size of the sinogram) by half. This reduces the angular resolution of the data by a factor of two. Often limited angular mashing may be performed without affecting the overall spatial resolution, because most scanners tend to oversample in the angular dimension. Oblique sinograms in 3D acquisition may also be summed to reduce the number of sinograms. The number of oblique angles summed is determined by a parameter called the "span." The span determines the averaging of the alternating oblique slices. A span of 1 means no averaging is done. If span = 3, the alternate slices are averaged by factors of 1 and 2, respectively (1 + 2 = 3). Increasing the span of a data set reduces its size, but also causes a reduction in the spatial resolution that is most prominent in the axially centred and radially off-centre regions of the FOV. Another data reduction scheme is to reduce the number of oblique bins used by limiting the maximum "ring difference" of the LORs being sampled. Larger ring differences sample larger oblique angles. In these schemes, 2D acquisition with direct and cross slices could be considered span = 3, maximum ring difference = 1. If 3D acquisition is performed on a tomograph with 32 physical detector rings using span = 3 and maximum ring difference = 31, a total of 703 sinograms are acquired: 63 direct slice planes and 640 oblique planes.
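Angular mashing is just a block sum over adjacent sinogram rows. A sketch with NumPy; the function name and the requirement that the angle count divide evenly by the mash factor are assumptions.

```python
import numpy as np

def mash_sinogram(sino, mash=2):
    # Sum each group of `mash` adjacent angular rows (axis 0) of a sinogram,
    # halving the angular sampling for mash=2 while raising counts per LOR.
    n_ang, n_rad = sino.shape
    assert n_ang % mash == 0, "angle bins must divide evenly by the mash factor"
    return sino.reshape(n_ang // mash, mash, n_rad).sum(axis=1)

sino = np.arange(8.0).reshape(4, 2)   # 4 angles x 2 radial bins
print(mash_sinogram(sino, mash=2))    # 2 angles x 2 radial bins
```

Total counts are preserved; only the angular sampling (and hence the sinogram size) is reduced.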
2.3.4 PET Image Reconstruction

PET images are reconstructed from 2D or 3D data sets using a number of different methods [32] [33] [34]. These are broadly classed as analytic or iterative. The most commonly used analytic reconstruction method is 2D filtered backprojection (2D-FBP). FBP is an exact, analytic solution to the inverse Radon transform. As its name suggests, FBP requires two steps: filtering and backprojection. First, the projection data are filtered with a ramp filter in the spatial frequency domain. This is an integral part of the reconstruction because the ramp filter is actually the Jacobian of the transformation from rectangular to polar coordinates. This is required because the projections are acquired at varying azimuthal angles, but are used to reconstruct an image in a Cartesian coordinate system. Next, the counts from the filtered projections are backprojected onto an image space matrix and appropriately summed in image elements (voxels). The projections are the line integrals through the object slice at a given azimuthal angle:

p(x_r, θ) = ∫∫ f(x, y) δ(x sin θ - y cos θ - x_r) dx dy    (2.8)

where f(x, y) = 2D slice of the radioactivity distribution and p(x_r, θ) = projection (Radon transform) of the 2D slice. In PET, the projections are the sets of parallel LORs at a given angle, or the rows of the sinogram. The filtering of the projections is performed in the spatial frequency domain; thus, the projections are first Fourier transformed:

P(ω, θ) = ∫ p(x_r, θ) e^(-iωx_r) dx_r    (2.9)

where P(ω, θ) = Fourier transform of the projection. The transformed projections are then multiplied by the ramp filter, inverse transformed and backprojected to recover the 2D slice of the radionuclide distribution:

f(x, y) = ∫₀^π ∫ P(ω, θ) |ω| e^(+iω(x sin θ - y cos θ)) dω dθ = ∫₀^π p_filt(x_r, θ) dθ    (2.10)

where |ω| is the ramp filter and p_filt is the filtered projection. FBP is linear, very fast and robust.
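The filtering step of (2.9)-(2.10) amounts to multiplying each projection's discrete Fourier transform by |ω|. A sketch with NumPy; this is a bare ramp with no apodization window, and the function name is an assumption.

```python
import numpy as np

def ramp_filter(sinogram):
    # Filter each row of parallel projections with the ramp |omega| in the
    # spatial-frequency domain (the filtering step of Eq. 2.10). Each row
    # holds the radial samples at one azimuthal angle.
    n = sinogram.shape[-1]
    ramp = np.abs(np.fft.fftfreq(n))                      # discrete |omega|
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=-1) * ramp, axis=-1))

# The ramp zeroes the DC component: a flat projection filters to ~zero.
flat = np.ones((1, 16))
print(np.allclose(ramp_filter(flat), 0.0))  # True
```

Backprojecting the filtered rows and summing over the azimuthal angles then completes Eq. (2.10).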
However, it assumes that the data consist of perfect projections, free of statistical noise or biases due to attenuation, scatter or randoms. As a result, FBP images are generally noisy and may contain quantitative inaccuracies and/or artifacts from the physical biasing factors. However, if the sinograms are corrected for attenuation, randoms and scatter before or during reconstruction, the biases are significantly reduced. The image noise may be reduced by applying a smooth, low-pass filter in the same step as the ramp to control the high frequency image components. However, this filtering may itself cause artifacts and negative counts, especially at large discontinuities in the image. While this may cause visible distortions in the image, the linearity of FBP preserves the quantitative accuracy. Iterative reconstruction works by attempting to produce successively better estimates of the image using the measured projection data. This is achieved by maximizing or minimizing a cost function which describes the goodness of fit between the estimated image and the measured projections. Reconstruction begins with an initial simplistic estimate of the image. This is then forward projected and the resulting projections are compared with the measured projections to produce update factors. These factors are then backprojected and used to update the estimated image. The process of forward projection-comparison-backprojection is done once per iteration. The iterations end when a convergence condition, based on the value of the target function, has been met (if it exists). The forward/backprojection process in iterative reconstruction has the potential benefit that the physics of the acquisition process may be incorporated into the mathematics of the projector/backprojector calculations. Thus, the effects of counting statistics, attenuation, scatter, and random events may also be included in the reconstruction process.
Statistical iterative methods include the modelling of the Poisson counting statistics. The first significant method of this class was maximum-likelihood expectation maximization (MLEM) [35]. MLEM uses a cost function, modelled on the Poisson distribution of the measured values, as a measure of fitness. The projection-comparison-backprojection steps are given by a calculation of the form [36]:

f_j^(k+1) = [f_j^k / Σ_{i=1}^n c_ij] Σ_{i=1}^n c_ij p_i / [Σ_{j'=1}^m c_ij' f_j'^k]    (2.11)

where f^k = voxel value estimates in the image at iteration k, p_i = pixel values of the projection sinogram corrected for randoms and scatter (i.e. p_i - r_i - s_i, where r_i and s_i are the contributions due to randoms and scatter, respectively), c_ij = system matrix elements describing the relationship between image voxels and projection pixels, n = number of LORs, and m = number of image voxels. While MLEM reduces statistical noise in the image and is shown to converge, it has a number of drawbacks. First, it requires a large number of iterations to converge (50-200) and is, therefore, very slow. As well, in practice it must be stopped before convergence because, after a number of iterations, it begins to reproduce the noise in the projections by introducing noise into the image. A faster implementation of MLEM, ordered-subset expectation maximization (OSEM), was developed to speed up the iterative reconstruction process [37]. OSEM groups the projections into subsets which span the angular range of the projections. Each subset is reconstructed with the MLEM algorithm sequentially, so that the image is updated once per subset until all the subsets have been used, ending a single iteration. Since the image is updated more frequently than in MLEM, OSEM reconstruction is generally much faster.
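With the system matrix written as an explicit array, the MLEM update (2.11) and the ordinary-Poisson variant (Eq. 2.12, discussed next) differ only in the denominator of the forward model. The toy dense-matrix sketch below is an assumption-laden illustration: real systems use sparse or on-the-fly projectors, and all names are hypothetical.

```python
import numpy as np

def em_reconstruct(C, p, r=None, s=None, n_iter=50):
    # C: (n_LOR x m_voxel) system matrix; p: measured sinogram values.
    # r, s: randoms/scatter estimates per LOR. With r = s = 0 and p already
    # corrected this is MLEM (Eq. 2.11); with raw p and nonzero r, s it is
    # the ordinary-Poisson OP-EM update (Eq. 2.12).
    r = np.zeros(C.shape[0]) if r is None else r
    s = np.zeros(C.shape[0]) if s is None else s
    f = np.ones(C.shape[1])                      # initial uniform image
    sens = C.sum(axis=0)                         # per-voxel sensitivity
    for _ in range(n_iter):
        proj = C @ f + r + s                     # forward projection
        f *= (C.T @ (p / proj)) / sens           # backproject ratio, update
    return f

C = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # tiny 3-LOR, 2-voxel system
p = np.array([3.0, 5.0, 8.0])                       # consistent, noise-free data
print(np.round(em_reconstruct(C, p), 3))            # converges toward [3, 5]
```

For this noise-free toy system the iterates converge to the exact voxel values; with real, noisy data the late iterations start fitting the noise, which is why MLEM is stopped early in practice.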
While MLEM and OSEM require the projection values to be fully corrected for scatter and randoms before reconstruction, an MLEM variant, ordinary Poisson expectation maximization (OP-EM) [33] [38], includes these in the image estimate:

f_j^(k+1) = [f_j^k / Σ_{i=1}^n c_ij] Σ_{i=1}^n c_ij p_i / [Σ_{j'=1}^m c_ij' f_j'^k + r_i + s_i]    (2.12)

where p_i = pixel values of the projection sinogram not corrected for randoms and scatter, r_i = random events per pixel i, and s_i = scatter events per pixel i. This method has been shown to produce quantitatively more accurate images than MLEM and is considered the method of choice in high resolution PET imaging [39]. There is also an ordered-subset version of this method, OP-OSEM. Another variation of MLEM, Maximum a Posteriori (MAP), imposes conditions based on prior knowledge of the image (such as non-negativity and smoothness) in a manner similar to Bayesian logic [40]. MAP is flexible and gives good visual results in reducing noise and improving spatial resolution; however, it is quantitatively less accurate than OP-OSEM. 3D PET data may also be reconstructed by analytic or iterative methods carried over from the 2D versions. The 3D re-projection (3DRP) method [41] is the 3D equivalent of FBP and inherits the same problems as 2D-FBP. 3D versions of the statistical iterative techniques are also available. One option for a 3D data set is to rebin it into a set of 2D sinograms and then reconstruct them using a faster 2D algorithm. The most common rebinning method is Fourier rebinning (FORE) [42]. FORE uses the frequency-distance principle to rebin the Fourier transforms of the projection data and therefore may only be performed post-acquisition. However, FORE is accurate for off-axis activity and has become the standard method of 3D rebinning. An additional rebinning method is single slice rebinning (SSRB) [43], which simply assigns the LORs in an oblique plane to the direct slice plane equidistant between the two detector rings involved. This distorts the
accuracy of off-axis components of reconstructed radioactivity distributions, making SSRB only useful for compact radioactivity distributions near the tomograph axis. However, SSRB may be applied during acquisition.

2.4 Quantitative Corrections

2.4.1 Introduction

PET uses a number of quantitative corrections to remove spurious events from the measured data, and to correct for physical and detector limitations of the acquisition process. These corrections and calibrations are done for:
• random coincidences,
• photon attenuation,
• detector normalization,
• detector dead-time,
• photon scatter, and
• sensitivity calibration.

2.4.2 Randoms Correction

Randoms correction is the subtraction of random coincidences from the total measured coincidences in the sinograms. The subtracted randoms may be directly measured or calculated from other measured parameters. Chapter 3 describes the details of randoms correction, and the remainder of this thesis presents a new method of randoms correction.

2.4.3 Attenuation Correction

Photons emitted from within an object are subject to interactions along the path from the emission point to the detector. For 511 keV photons in human tissue, these interactions are predominantly Compton scatter. In PET, if either annihilation photon interacts such that it is not detected, coincidence detection cannot occur and the positron decay is not detected. The resulting reduction in the counts along an LOR is described as "attenuation." Attenuation not only leads to significant count losses, but the effects of attenuation also vary with the location of the decay and the specific distribution of attenuating material. Thus, attenuation must be corrected before, or during, reconstruction, or it will lead to considerable biases in the reconstructed radioactivity distribution. However, if the scattered photons are detected elsewhere in the tomograph, an erroneous LOR results in a "scatter" event (Section 1.4.6).
Scatter events must be removed from the data set before reconstruction (Section 2.4.5). Thus, Compton scatter results in both attenuation and scatter events.

The fraction of primary photons transmitted through an attenuating medium without interaction is given by the line integral:

f_atten,1 = exp( -∫ from x_0 to x_1 of μ(x) dx )    (2.13)

where μ(x) = position dependent attenuation coefficient,
x_0 = emission point,
x_1 = edge of attenuating medium, and
dx = path along the LOR through the attenuating medium.

Figure 2-9 Attenuation in PET scanning. The intensity of annihilation photons is reduced in the object along the LOR from the emission point to the edge of the attenuating medium.

Likewise, the fraction of annihilation partner photons transmitted is given by:

f_atten,2 = exp( -∫ from x_0 to x_2 of μ(x) dx )    (2.14)

where x_2 = opposite edge of attenuating medium.

The probability of both photons being transmitted is the same as the probability of a coincidence, P_coinc (neglecting detection efficiencies):

P_coinc = f_atten,1 × f_atten,2 = exp( -∫_{x_0}^{x_1} μ(x) dx ) × exp( -∫_{x_0}^{x_2} μ(x) dx ) = exp( -∫_{x_1}^{x_2} μ(x) dx )    (2.15)

Thus, the probability of a coincidence depends only on the total attenuation along the length of the LOR and is independent of the point of emission along the LOR.

A number of methods have been used to correct for attenuation in PET [44], most of which involve a direct measurement of the attenuation using a transmission scan. In this scan, one or more moving radiation sources are positioned in the tomograph aperture outside the object. During the transmission scan, these sources move and the count rates are measured at all the lines of response. Figure 2-10 shows the arrangement for transmission scanning.

Figure 2-10 The arrangement for a transmission scanning system. The external rotating source(s) are used to sample the count rates for LORs through the object.

The counts for the transmission lines of response are measured and stored in transmission sinograms, N_trans(x_r, θ).
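The independence expressed in eq (2.15) is easy to verify numerically. The following sketch uses illustrative values only (a piecewise-constant μ roughly mimicking a 20 cm water-equivalent chord at 511 keV); it splits the line integral at an arbitrary emission point and forms the product of the two transmission factors:

```python
import math

# Piecewise-constant mu along one LOR: (mu in 1/cm, segment length in cm).
# Illustrative values: ~water at 511 keV with a denser middle segment.
LOR_SEGMENTS = [(0.096, 8.0), (0.130, 4.0), (0.096, 8.0)]

def total_line_integral(segments):
    # Integral of mu(x) dx over the whole chord of the object.
    return sum(mu * length for mu, length in segments)

def coincidence_probability(segments, split_fraction):
    """P_coinc = f_atten,1 * f_atten,2 (eq 2.15): each photon sees its own
    share of the line integral, set here by an arbitrary emission point."""
    total = total_line_integral(segments)
    left = split_fraction * total   # photon 1's path integral
    right = total - left            # photon 2's path integral
    return math.exp(-left) * math.exp(-right)
```

Evaluating this for different split fractions returns exp(-∫μ dx) every time, which is the content of eq (2.15): the coincidence probability depends only on the total attenuation along the LOR, not on where the annihilation occurred.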
The blank scan is also done using the transmission sources, but with the aperture empty, yielding a set of blank sinograms, N_blank(x_r, θ). Sinograms of the attenuation correction factors (ACFs) are then calculated as:

ACF(x_r, θ) = N_blank(x_r, θ) / N_trans(x_r, θ)    (2.16)

where N_blank(x_r, θ) = measured counts in the blank scan at a given sinogram element, and
N_trans(x_r, θ) = measured counts in the transmission scan at a given sinogram element.

The value of the ACF for each sinogram element is then the inverse of P_coinc (2.15) for the path through the attenuating medium corresponding to the LOR (x_r, θ), or:

ACF = exp( +∫_{x_1}^{x_2} μ(x) dx )    (2.17)

The emission sinograms are usually corrected for attenuation by multiplication with the corresponding correction sinograms prior to reconstruction.

The sources used for transmission scanning may be positron emitters or single photon emitters. Using positron emitters (e.g. 68Ge) has many benefits. To begin with, the transmission scan is performed identically to the emission scan and the LOR is determined by the same coincidence processor. As well, because attenuation is energy dependent, the annihilation photons from the transmission source measure the exact same attenuation that affects the emission data. The main problem with using positron emitters is the low transmission coincidence count rate. Given that the probability of one photon being attenuated in the object is high, only a small fraction of the decays in the transmission sources produce coincidence counts. As well, the maximum usable activity in the sources is limited by the dead-time in the detectors closest to the source, where the photons are not attenuated and the solid angle is high. Therefore, coincidence transmission scans are done at relatively low rates and require long acquisition times to obtain adequate statistics (~15 minutes per bed position). Shorter transmission scans result in poor statistics and a noisy attenuation correction.
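As a minimal illustration of eqs (2.16) and (2.17), the ACF computation can be sketched as an elementwise ratio of blank and transmission sinograms, followed by multiplicative correction of the emission sinogram. The sinograms are represented here as nested lists, and the `min_counts` floor is an added assumption to guard against empty transmission bins, not part of the formal definition:

```python
def attenuation_correction_factors(blank, trans, min_counts=1.0):
    """ACF(x_r, theta) = N_blank / N_trans, eq (2.16), applied elementwise.
    min_counts guards against (near-)empty transmission bins, which would
    otherwise yield unbounded, noise-dominated ACFs."""
    return [[b / max(t, min_counts) for b, t in zip(brow, trow)]
            for brow, trow in zip(blank, trans)]

def correct_emission(emission, acf):
    """Attenuation correction: multiply each emission sinogram element by
    its ACF prior to reconstruction."""
    return [[e * a for e, a in zip(erow, arow)]
            for erow, arow in zip(emission, acf)]
```

For example, a bin where the transmission scan passed only one tenth of the blank counts receives an ACF of 10, restoring the attenuated emission counts along that LOR; noisy low-count transmission bins are exactly the case where the segmentation methods described below improve matters.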
A second drawback to using positron emitters is that, since the photons from the emission source and the transmission source have the same energy, transmission scans acquired post-injection suffer interference from the emission photons. The problems of low count rates and post-injection transmission scans are resolved by using single photon emitting sources [45][46], such as 137Cs (662 keV) or 57Co (85.6% at 122 keV and 10.6% at 136 keV), or the X-ray beam from a CT scanner [47] (~80 keV). During the transmission scan, the source orbits the object, and single photons transmitted through the object are detected on the side opposite the source, but not in coincidence. This samples the attenuation along the LORs defined by the lines from the point source to the detectors. Counting single photons, without the coincidence requirement, allows for higher activities, which increases the count rates, so shorter acquisition times may be used. The photons from the single photon emitting sources are generally at a different energy from the annihilation photons and are counted in a different energy window. Thus, the transmission scan may be performed post-injection. However, since the energy of the transmission source differs from 511 keV, the ACFs require scaling to account for the energy dependence of the attenuation coefficients.

The ACFs derived from a transmission scan may require adjustment for inaccuracies in the energy scaling or noise from inadequate counting statistics. One correction method is attenuation image segmentation [48][49]. Segmentation uses the data from the transmission and blank scans to reconstruct images of the attenuation coefficients ("attenuation maps" or "μ-maps"). These images are then used to determine the boundaries of different material types within the attenuating medium.
Once defined, these regions are identified with a specific material, such as air or soft tissue, and the attenuation coefficient values in each identified region are replaced with the single accepted attenuation coefficient for that material. Thus, the inaccurate values of the attenuation coefficient are corrected, along with the random fluctuations, yielding a noise-free attenuation map. The segmented, corrected attenuation images are then forward projected to form noise-free transmission sinograms from which new ACFs are calculated. Segmentation reduces the time required for transmission scans and/or improves the accuracy of the ACFs derived from single photon transmission scans. However, the segmentation process is not without error. Any true small variations in the attenuation values are lost, reducing the accuracy of the ACFs.

2.4.4 Normalization

Variations in the coincidence counting efficiency of detector pairs give rise to non-uniform detection sensitivity. This may be corrected by using the data from a long duration, high statistics normalization scan, acquired using the rotating transmission source(s) or an extended uniform cylindrical phantom [50][51]. A normalization correction factor (NCF) is calculated from the scan data for each detector pair or sinogram coordinate:

NCF(x_r, θ) = <N_norm> / N_norm(x_r, θ)    (2.18)

where NCF(x_r, θ) = normalization correction factor for a given sinogram element,
N_norm(x_r, θ) = measured counts in the normalization scan at a given sinogram element, and
<N_norm> = average counts per LOR for all LORs in the normalization scan.

The NCFs are applied to each sinogram element on an LOR basis prior to reconstruction.

Normalization in 3D PET is problematic for many reasons. First, long counting times are necessary to accumulate a statistically significant number of counts for each detector pair because the number of LORs in many modern tomographs is very large (10^8 to 10^9).
Even with long, high count rate normalization scans, the counts per LOR may be statistically poor. As well, using large volume cylindrical sources requires corrections for scatter and attenuation. Therefore, newer normalization methods, such as component-based or model-based normalization, exploit the symmetries in the detector block efficiencies and the LORs to reduce the variance in the normalization correction factors [52][53][54]. These may use individual detector efficiencies, rather than LOR efficiencies, because there are far fewer detectors than LORs.

2.4.5 Scatter Correction

Although the majority of the scattered coincidence events are eliminated by the energy window, a significant fraction may still contaminate the emission data, especially in 3D acquisition. The scattered events may be estimated and subtracted using a number of different schemes [55]. The simplest method estimates a smooth scatter distribution from the tails of the counts in either the sinograms or the reconstructed images. The scatter distribution is assumed to be either a simple polynomial or a broad Gaussian, with the parameters found from fits to the tails. Once the scatter distributions are estimated, they are subtracted from the emission data. Another simple method assumes that the measured projection data may be modeled as the un-scattered distribution convolved with a function that describes the effects of scatter [56][57][58]. The scatter function is derived from empirical studies using known cylindrical phantoms. The correction consists of deconvolving the scatter function from the data. The scatter correction method generally considered the most accurate is the model-based scatter correction [59], or "single scatter simulation" [60][61]. In this method, the scatter distributions are estimated by modeling the physics of scatter in the individual subject and the detection in the tomograph.
The scatter contribution to each line of response is calculated from a set of randomly positioned scatter points in the preliminary reconstructed radioactivity and attenuation images, assuming single scatter only. The scatter distribution is scaled to the tails of the trues distribution and then subtracted from the emission sinograms. This scatter correction may also be incorporated into iterative reconstruction schemes. The scatter correction improves the quantitative accuracy of reconstructed data.

2.4.6 Dead-Time Correction

The count losses from detector dead-time are corrected using correction factors based on the measured single event rates and detector count rate characteristics [62]. The simplest correction method scales the counts per LOR by a single global dead-time correction factor. More complex corrections are applied to individual detector blocks or buckets.

2.4.7 Absolute Quantitation Calibration

PET data, if properly corrected and calibrated, are capable of measuring absolute values of radioactivity concentration. Studies using blood plasma activity as an input function require this calibration, but studies analyzed by ratios or relative values of activity measured in the images do not. The activity quantitation calibration requires scanning a cylinder containing a uniform, known radioactivity concentration. A global quantitative calibration factor, CF, is then found by:

CF = A_0 / <N_voxel>    (2.19)

where A_0 = measured radioactivity concentration in the cylinder (kBq/ml), and
<N_voxel> = average voxel value in the reconstructed image (cts/voxel).

This calibration factor is then applied to each voxel in the reconstructed image. The accuracy of the calibrated images depends on the accuracy of all the quantitative corrections.

2.4.8 Image Quality

Even with accurate corrections and calibrations, the accuracy and precision of PET image data are limited by a number of factors.
One of the most significant effects is partial volume averaging, which is the blurring of the radionuclide distribution in the image so that structures overlap and their relative values are convolved. This occurs when the object or structure being imaged is smaller than 2-3 times the FWHM resolution in the transverse and axial directions [63][64][65]. The boundaries of the structures are also affected. Partial volume averaging causes a reduction, or even loss, of the signal from the object. It also results in reduced image contrast and introduces errors in quantitative values which depend on the relative magnitude of the structure and the background activity.

Image quality also depends on image noise. This is largely determined by the statistical fluctuations in the measured counts per line of response. The magnitude of these fluctuations is governed by Poisson statistics, so the standard deviation is proportional to the square root of the counts:

σ(N) = √N    (2.20)

where N is the number of counts in a line of response.

However, while the noise in the sinograms is Poisson, the noise in the images is not [66]. The reconstruction process changes the structure of the noise and causes correlations of the voxel values. Reconstruction also increases the noise and causes noise related artifacts.

A few figures of merit are available to assess the image quality produced by PET imaging, some defined in the NEMA standards [25]. These include the signal-to-noise ratio, contrast and contrast recovery. These values may be found from studies using contrast phantoms, which consist of spheres of one radioactivity concentration embedded in a background with a different concentration. The contrast of the image of a sphere with a higher radioactivity concentration than the background ("hot sphere") is defined as:

Contrast_hot = C_H / C_B - 1    (2.21)

where C_H = mean counts in a region of interest (ROI) on the hot sphere.
C_B = mean counts in a region of interest on the background.

The contrast recovery coefficient, CRC, measures the ability of the imaging system to recover the actual contrast between the radioactivity concentrations of the spheres and the background, with a maximum of unity. For a hot lesion it is defined as:

CRC_hot = ( C_H / C_B - 1 ) / ( a_H / a_B - 1 )    (2.22)

where a_H = activity concentration in the hot region measured independently (e.g. in a dose calibrator), and
a_B = activity concentration in the background region measured independently.

Another measure of image quality is the contrast-to-noise ratio (CNR), related to the signal-to-noise ratio (SNR). This is often defined as:

CNR = Contrast_hot / noise_bgd    (2.23)

where noise_bgd = the relative standard deviation of the counts in the background region.

The noise in the background is generally assessed by finding the standard deviation of the number of counts between small, identically sized regions of interest placed on the background region. The noise measurement is complicated by the effects of reconstruction and the attenuation and scatter corrections.

Chapter 3  The Theory of Random Coincidences and Corrections

3.1 Single and Coincidence Counting

3.1.1 Chapter Overview

This chapter describes the basic theory and properties of random coincidences, as well as the randoms correction methods used in PET, to establish the requirements of a new randoms correction. The chapter begins with an examination of the relationship between singles and true coincidences and the factors influencing the singles distribution. Next, the causes and characteristics of random coincidences are described. Then, the delayed coincidence method of randoms correction is explained and its shortcomings are discussed. Then, suggested variance reduction modifications to the delayed coincidence method and singles-based randoms correction schemes are described.
The chapter ends with a discussion of the relative advantages and disadvantages of the new randoms correction methods. Chapter Four presents a new model-based method of randoms correction, which is the main subject of this thesis.

3.1.2 Single Photon Detection Events

As described in Section 1.4, positron decay and annihilation result in the emission of two collinear 511 keV photons. Each photon detected in the tomograph with energy above the threshold results in a single event. If both annihilation photons are thus detected, a true coincidence event is also registered. If a solitary photon is detected, only a single event is registered. Most tomographs do not store individual crystal singles count information. However, the singles rates in detector blocks (or groups of blocks) are measured during emission scans.

In general, tomographs have a significantly higher sensitivity for counting single photons than coincidences. This results in considerably higher singles rates than true coincidence rates, usually by one or two orders of magnitude. Since random coincidences are caused by singles events, the relationship between the singles and true coincidence rates also determines the relationship between randoms and trues in a tomograph. The main reasons for the difference between the singles and coincidence rates are:
• tomograph solid angle and geometric efficiency,
• detector efficiencies, and
• photon scatter (and/or absorption) in the object.

3.1.3 Tomograph Geometry

The geometric efficiency for 3D coincidence counting is a maximum at the centre of the tomograph and falls linearly to zero at the axial edges (Section 2.2.2). The geometric efficiency for single photon counting is the same as for coincidence counting at the centre; however, it does not drop off rapidly for sources displaced in the axial direction. Activity outside the axial field of view causes singles events in the tomograph, but no coincidence events.
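The axial dependence described above (singles geometric efficiency falling off slowly, coincidence efficiency falling linearly to zero at the edges) can be reproduced with a short Monte Carlo toy model. This is a sketch under idealized assumptions: a perfect cylindrical detector band, a point source on the axis, no attenuation, and unit detector efficiency; the dimensions loosely follow the 8 cm axial FOV, 15 cm ring diameter example:

```python
import math
import random

def geometric_efficiencies(z, half_len=4.0, radius=7.5, n=100_000, seed=1):
    """Monte Carlo estimate of singles and coincidence geometric efficiency
    for a point source on the axis at axial position z (cm)."""
    rng = random.Random(seed)
    single_hits = coinc_hits = 0
    for _ in range(n):
        # Isotropic emission direction: cos(theta) uniform on [-1, 1].
        c = rng.uniform(-1.0, 1.0)
        s = math.sqrt(1.0 - c * c)
        if s == 0.0:
            continue  # exactly along the axis: misses the detector ring
        hits = 0
        for sign in (+1, -1):  # the two back-to-back annihilation photons
            z_hit = z + sign * radius * c / s  # axial position at r = radius
            if abs(z_hit) <= half_len:
                hits += 1
        single_hits += hits        # each intercepted photon is a single
        coinc_hits += (hits == 2)  # both photons needed for a coincidence
    return single_hits / (2.0 * n), coinc_hits / n
```

Running this for z = 0, z at the axial edge, and z outside the FOV reproduces the qualitative behaviour of Figures 3-1 to 3-3: at the centre the two efficiencies coincide, at the edge the coincidence efficiency collapses while the singles efficiency remains substantial, and outside the FOV only singles are registered.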
Figure 3-1 to Figure 3-3 show how the probabilities of single and coincidence events change with axial position.

Figure 3-1 Annihilations near the centre of the tomograph have identical geometric efficiency for singles and coincidences.

Figure 3-2 Annihilations near the axial ends of the tomograph have similar geometric efficiency for counting single photons as annihilations near the tomograph centre. However, the coincidence counting efficiency is much lower.

Figure 3-3 Singles events originating from radioactivity outside the axial field of view. These result in single events, but not true coincidences.

The relationship between the theoretical singles and coincidence geometric efficiencies for a point source in a tomograph (8 cm axial FOV, 15 cm ring diameter) as a function of axial position is shown in Figure 3-4. The singles efficiency does not change dramatically with axial position.

Figure 3-4 The theoretical singles (solid red line) and coincidence (dotted blue line) geometric efficiencies for a point source as a function of axial position.

3.1.4 Detector Efficiencies

The second factor in the difference between singles and coincidence sensitivity is the detector efficiencies. The singles sensitivity depends on the detector efficiency, but the coincidence sensitivity depends on the square of the detector efficiency, since two photons are independently detected. This again reduces the coincidence sensitivity relative to the singles sensitivity, and the difference is greatest with lower detector efficiencies. As well, dead-time affects singles and coincidence events differently. The relationship between theoretical singles and coincidence sensitivities for a point source in a tomograph (8 cm axial FOV, 15 cm ring diameter, detector efficiency = 25%) as a function of axial position is shown in Figure 3-5.
The singles sensitivity is shown to be significantly higher than the coincidence sensitivity.

Figure 3-5 The theoretical singles (solid red line) and coincidence (dotted blue line) total sensitivities for a point source as a function of axial position. The total sensitivity depends on both the geometric efficiency and the detector efficiencies.

3.1.5 Photon Interactions in the Object

Photon interactions in the object also affect single and coincidence counting differently. At 511 keV, the main photon interaction processes are Compton scattering and photoelectric absorption. In water-equivalent material, such as soft tissue, Compton scattering dominates and photoelectric absorption is negligible (only 0.02% of interactions). The probability of "Compton-absorption" in the object (one or more Compton scatters followed by absorption) is slightly higher but still not significant. The major cause of count losses due to interactions is scattered photons either missing the tomograph or being detected with energies below the energy threshold. These processes affect both singles and coincidences. However, if one of the annihilation photons is lost, a single event may still be detected, while no coincidence would be counted, as shown in Figure 3-6. Scatter also affects the distribution of detected single photons, as described in Section 4.6.

Figure 3-6 Singles events originating from physical interactions. An annihilation occurs within the field of view, which should cause both photons to be incident on the tomograph. However, Compton scatter causes the loss of one photon. A single event may still be detected.

3.1.6 The Singles Distribution

The detected individual annihilation photons create a distribution of singles rates in all the detectors of the tomograph. The singles distribution for a given scan depends on the radioactivity distribution, the attenuating object and the tomograph response.
In 3D acquisition, the singles distribution includes contributions from scattered photons and from activity outside the field of view. In general, the distribution of single photons in the tomograph is slowly changing, because the singles geometric efficiency changes smoothly. For a long cylindrical uniform source centred on the tomograph axis, the singles distribution is uniform. Non-uniform distributions of radioactivity and/or attenuating material cause low frequency variations in the singles distribution. The distribution of singles rates is also affected by variations in individual crystal efficiencies.

3.2 Random Coincidences

Random coincidences occur when two single events from different annihilations are detected within the coincidence timing window of the tomograph and are counted as a coincidence event [72] (Section 1.4.6). This gives a spurious count in the line of response between the two involved detectors. If three or more singles are in coincidence, a "multiple" coincidence occurs. In some systems multiples are ignored; in others, one count is assigned to one possible LOR by an arbitrary process. Multiples have a much lower frequency than randoms.

3.2.1 Randoms Rates

Singles events are distributed randomly in time but described by an average count rate. If the time required to process a singles event is ΔT, the fraction of the time that a detector is counting singles is given by:

f_s1 = R_s1 ΔT    (3.1)

where f_s1 = fraction of the time detector 1 is processing singles (unitless),
R_s1 = singles rate at detector 1 (counts per second), and
ΔT = singles event processing time (processing time per count).

If a second detector is also counting singles, the average rate at which events are counted in both detectors, and, therefore, the rate of random coincidences, R_r, is given by:

R_r = f_s1 R_s2 = R_s1 R_s2 ΔT    (3.2)

From the perspective of analog timing electronics, a coincidence involves the overlap of the two pulses.
This occurs if the pulses are within ±1 pulse width, τ, of each other. Therefore, the effective coincidence timing window is the sum of the two pulse widths, 2τ. In modern tomographs with digital processors, the coincidence is tested by comparing the time tags of buffered singles events with the value of the coincidence timing window, T_CTW. Therefore, the equation for the randoms rate may be written as:

R_r = R_s1 R_s2 2τ = R_s1 R_s2 T_CTW    (3.3)

where 2τ = 2 × coincidence resolving time, or pulse width (this notation is often retained in the literature), and
T_CTW = time width of the coincidence timing window.

T_CTW is set by the user, and its optimal value depends on the coincidence timing resolution of the tomograph, which is related to the scintillator decay constant and the tomograph detector design. For LSO systems T_CTW = 5-6 ns is used, but older systems with other scintillators have timing windows up to 15 ns.

3.2.2 Randoms Distribution and Randoms Fraction

Since the randoms rates between detectors depend on the singles rates, the spatial distribution of randoms within the tomograph LORs depends on the singles distribution. In general, the randoms distribution is smoothly changing. A uniform singles distribution results in a uniform randoms distribution within a slice plane. However, detector efficiency variations cause sharp discontinuities, in the form of diagonal lines, in the randoms sinograms. Axially, in 3D acquisition, the total randoms rates within a slice plane decrease towards the axial ends of the tomograph.

While the relative distribution of randoms is independent of the activity in the subject, the magnitude of the randoms relative to the trues is not. The contribution of the random coincidences compared with the true events is quantified by the randoms fraction (RF):

RF = R_r / (R_t + R_r)    (3.4)

where R_r = rate of random coincidences, and
R_t = rate of true coincidences.
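Equations (3.3) and (3.4) translate directly into code. In this sketch the numerical values are assumptions chosen only for illustration (100 kcps singles per detector and a 6 ns window, plausible for an LSO system):

```python
def randoms_rate(r_s1, r_s2, t_ctw):
    """R_r = R_s1 * R_s2 * T_CTW, eq (3.3). Rates in counts/s, window in s."""
    return r_s1 * r_s2 * t_ctw

def randoms_fraction(r_t, r_r):
    """RF = R_r / (R_t + R_r), eq (3.4)."""
    return r_r / (r_t + r_r)

# Illustrative numbers: two detectors at 100 kcps singles, 6 ns window.
r_r = randoms_rate(1.0e5, 1.0e5, 6.0e-9)  # 60 randoms/s for this pair
```

Because eq (3.3) is quadratic in the singles rates, doubling the activity (and hence both singles rates) quadruples the randoms rate while only doubling the trues rate, which is exactly why the randoms fraction grows with the activity in the object.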
Both the singles and trues rates scale linearly with the total activity in the object (neglecting dead-time losses). The randoms rates depend on the square of the singles rates (3.3), and therefore on the square of the activity. Thus, the randoms fraction increases with the activity of the object. Similarly, the RF drops over the duration of the scan as the activity decreases. Equation (3.3) also implies that the randoms rates and the randoms fraction depend on the coincidence timing window.

3.2.3 Effects of Randoms

Random coincidences have three main effects on PET studies. Qualitatively, they form a smooth background in the image, which reduces the contrast. Quantitatively, they add counts to the LORs, increasing the measured values in a non-uniform manner, thus creating inaccurate values and biases. As well, randoms are an additional independent component of the measured data set, and, as such, the noise in the randoms contributes to the noise of the data. In all cases, randoms degrade the quality of the image data. The effect of randoms on PET image data is quantified in the noise equivalent count rate (NECR) (2.6) (Section 2.2.3). Therefore, in PET imaging, random coincidences should be minimized where possible through the design of the tomograph or the imaging protocol. Likewise, the randoms should be subtracted from the image data. Since the randoms distribution is not uniform, it may not be simply subtracted or re-scaled and, instead, requires a more accurate correction.

3.3 Delayed Coincidence Measurements of Randoms

The random coincidence rates in a detector pair may be estimated by two different direct measurements: the rate of delayed coincidences or the singles rates in the individual detectors. These measurements require dedicated electronics and, in some cases, data storage requirements beyond those for basic coincidence imaging. However, most tomographs have the capability for at least simple delayed coincidence measurements.
3.3.1 The Delayed Coincidence Measurement

The single events that produce true coincidences are correlated in time because the detected photons are produced by the same positron annihilation. There may be a slight time delay, given the potential differences in travel time, photon detection time and event processing time, but this difference is smaller than the coincidence window width. Therefore, the true coincidences are measured in a real-time or "prompt" coincidence window. However, random coincidences also contribute to the counts measured using this window.

The single events that produce randoms are not time correlated, so the randoms rate in a coincidence window does not depend on the relative time at which the photons are detected. The randoms rate measured in a time-delayed coincidence window is therefore, on average, the same as that measured in the prompt coincidence window. If the time delay is long enough (e.g. 64 to 128 ns), no true coincidences are measured. Therefore, the count rate measured in a delayed coincidence window may be used to estimate the contribution of randoms to the coincidences in the prompt window. This measurement provides a direct method of randoms subtraction.

3.3.2 Prompts and Delays

The coincidences measured in the prompt window are denoted as "prompts" while those measured in the delayed window are called "delays." These two measurement channels have different content. Prompts include both the true coincidence counts and the random coincidence counts that occurred within the prompt coincidence window. Thus:

N_P = N_T + N_R    (3.5)

where N_P = number of prompt coincidence counts,
N_T = number of true coincidence counts, and
N_R = number of random coincidence counts in the prompt window.

While the term "trues" sometimes refers to unscattered photons, here it refers to true coincidences as opposed to random coincidences.
Thus, "T" includes both unscattered and scattered coincidences, since scattering does not affect the time correlation of the two photons. The measured delayed coincidences contain only random coincidences. Thus:

N_D = N_R'    (3.6)

where N_D = number of delayed coincidence counts, and
N_R' = number of random coincidence counts in the delayed window.

Thus, the randoms fraction, RF, (3.4) may be estimated as:

RF = N_D / N_P    (3.7)

The prompts are corrected for randoms by simply subtracting the measured delays from the measured prompts for each detector pair or line of response:

N_T' = N_P - N_R'    (3.8)

where N_T' = number of corrected true coincidence counts.

In older systems, the delays were measured in a separate coincidence circuit and subtracted from the acquired prompts counts during acquisition. In some newer tomographs, the digital coincidence processor compares the detection times of buffered single events, so prompt and delayed coincidences are determined in the same processing step. Again, the delays may be subtracted from the acquired prompts counts during acquisition. Some tomographs store the prompts and delays in separate histograms, or in list-mode events flagged as prompts or delays.

The delayed coincidence measurement gives a simple, yet elegant, method of randoms correction. Since the delayed events have the same processing (e.g. energy window, coincidence timing, dead-time), the method is highly accurate. However, in general, randoms subtraction increases the noise in the true data estimate. As well, when the randoms fraction is high, the subtraction from the prompts may leave the trues poorly determined. This is especially problematic when measuring short time frames at high randoms rates.

3.3.3 Noise from Delayed Randoms Correction

Although the randoms distribution and mean rates measured in the delayed window are identical to those contributing to the coincidences in the prompt window, the noise content is different.
The variance of the prompts includes contributions from the trues and one estimate of the randoms:

σ²_P = σ²_T + σ²_R    (3.9)

where σ²_P = variance of the prompt coincidences,
σ²_T = variance of the true coincidences, and
σ²_R = variance of the random coincidences in the prompt window.

The variance in the delays is from a second, different estimate of the randoms:

σ²_D = σ²_R'    (3.10)

where σ²_D = variance in the delayed coincidences, and
σ²_R' = variance in the random coincidences in the delayed window.

The subtraction of the delays from the prompts increases the variance of the resulting trues because R and R' are different estimates of the randoms rate. Thus, the variances of the randoms in the prompts and in the delays do not cancel, but sum, resulting in:

σ²_T' = σ²_P + σ²_D = σ²_T + σ²_R + σ²_R' = σ²_T + 2σ²_R    (3.11)

If the randoms fraction is small, the contribution of the subtracted delays to the image noise is generally insignificant. However, in short scans done with high activity, the randoms fraction may be very high (~50%). In such cases, the variance in the delays may contribute significantly to the noise in the corrected trues. As well, in high resolution tomograph designs with a very large number of LORs (e.g. Siemens HRRT, 4.5 billion LORs), the coincidence counts acquired per LOR will be low, resulting in large variance in both the prompts and the delays. Therefore, correcting the raw measured prompts with the raw measured delays may greatly increase the noise in the image data. In contrast, if the subtracted randoms estimate were noise-free (i.e. σ²_R' = 0), the overall variance would be reduced to the variance in the prompts:

σ²_T' = σ²_T + σ²_R = σ²_P    (3.12)

Therefore, the randoms correction method affects the signal-to-noise ratio of the PET data. This is quantified in the NECR (Section 2.2.3).
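The variance bookkeeping of eqs (3.9) to (3.12) can be checked with a small simulation. This is a toy model, not scanner data: trues and randoms per LOR are drawn as independent Poisson counts with assumed means of 100 each, and the "noise-free" branch simply subtracts the known randoms mean:

```python
import math
import random
import statistics

def poisson(mean, rng):
    # Knuth's multiplicative method; adequate for the modest means used here.
    limit, k, p = math.exp(-mean), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def trues_variances(n_true=100.0, n_rand=100.0, trials=10_000, seed=7):
    """Compare the variance of prompts-minus-delays (var ~ T + 2R, eq 3.11)
    with prompts minus a noise-free randoms estimate (var ~ T + R, eq 3.12)."""
    rng = random.Random(seed)
    delays_sub, noiseless_sub = [], []
    for _ in range(trials):
        prompts = poisson(n_true, rng) + poisson(n_rand, rng)
        delays = poisson(n_rand, rng)
        delays_sub.append(prompts - delays)     # eq (3.8): noisy delays
        noiseless_sub.append(prompts - n_rand)  # hypothetical sigma_R' = 0
    return statistics.variance(delays_sub), statistics.variance(noiseless_sub)
```

With equal trues and randoms means of 100, the delays-subtracted estimate has variance near 300 (T + 2R) while the noiseless-subtraction estimate sits near 200 (T + R): the randoms variance is paid twice in the first case, once in the second, which is the origin of k_rand = 2 versus k_rand = 1 in the NECR.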
Recall that the NECR (2.6) is defined as:

NECR = R_T² / (R_T + R_SC + k_rand R_R)    (3.13)

The constant k_rand is assigned a value of 2 when simple delays subtraction is used because the variance of the randoms is added twice (3.11). A value of 1 is used when the randoms correction method is essentially noiseless. Randoms correction with reduced variance sets 1 < k_rand < 2. Studies optimizing the NECR have identified the noise from measured delayed coincidence randoms correction as a limiting factor in 3D PET high count rate performance [67]. Therefore, reducing the variance introduced by the randoms correction may provide significant improvements in the quality of PET image data.

3.3.4 Randoms Correction with Reduced Variance

Randoms corrections with noise reduction have followed three basic directions. The first is to fit the background outside the body to a smooth function and subtract it post-acquisition [69]. This background is caused by the combined scatters and randoms and is expected to be slowly varying compared to the trues distribution. This estimate is essentially noiseless and requires no additional measurements. However, it is prone to significant biases, especially for asymmetric randoms distributions or high randoms fractions. It also does not account for high frequency structure in the randoms distribution, such as that caused by detector efficiency variations. Similarly, a related method uses the deconvolution of smooth scatter and randoms response functions from the data [70]. These functions are determined by either phantom or simulation studies. Again, although no noise is added, large biases may be introduced. The second direction for low noise randoms correction is to reduce the variance in the measured delayed coincidences by combining the delays from LORs involving the same detector. Variance reduction techniques are described in Section 3.4. The third direction is to estimate the randoms from the measured detector singles rates.
Since the singles rates are two or more orders of magnitude higher than the randoms rates, the randoms estimated from singles have lower variance. Singles-based randoms correction methods are described in Section 3.5. Variance reduction and singles-based randoms correction are then compared in Section 3.6.

3.4 Variance Reduction of Measured Delayed Coincidence Estimates

3.4.1 Introduction

A number of methods of reducing the variance of the randoms measured in the delayed coincidence window have been investigated. The simplest method is to acquire the randoms in separate sinograms and then smooth them prior to subtraction [71]. While this technique would reduce the variance, it would also attenuate the high frequency components of the randoms data, mainly caused by the efficiency variations in the crystals, and thereby introduce biases [72]. More sophisticated smoothing approaches are required to prevent artifacts in the reconstructed images. The problem of randoms variance reduction is almost identical to that of variance reduction in efficiency normalization. Many techniques developed for randoms correction are readily applicable to normalization and vice versa. Variance reduction usually involves summing the measured counts in sets of LORs involving the same detectors to increase the total counts involved. Exploiting the symmetries of the involved LORs, the randoms rates of particular LORs may then be extracted.

3.4.2 The Basic Method of Casey and Hoffman

The seminal variance reduction technique is that first proposed by Casey and Hoffman in 1986 [73]. This scheme sums the randoms rates between one detector and a group ("bank") of opposing detectors. The randoms rate, R_r,i,sum(j), between detector i on one side of the tomograph and a bank of N detectors j on the other, may be decomposed into the contributing singles rates:

R_r,i,sum(j) = 2τ R_s,i Σ_{j=1..n_bank} R_s,j    (3.14)

where
R_s,i = singles rate in detector i.
n_bank = number of detectors in one "bank".
2τ = coincidence timing window.
Likewise, the randoms rate between detector j and the opposing bank of detectors may be decomposed into:

R_r,sum(i),j = 2τ R_s,j Σ_{i=1..n_bank} R_s,i    (3.15)

The total randoms rate between both banks would be:

R_r,sum(i),sum(j) = 2τ (Σ_{i=1..n_bank} R_s,i)(Σ_{j=1..n_bank} R_s,j)    (3.16)

From the product of (3.14) and (3.15) normalized by (3.16), the original randoms rate between two individual detectors is recovered:

R_r,i,j = 2τ R_s,i R_s,j = R_r,i,sum(j) R_r,sum(i),j / R_r,sum(i),sum(j)    (3.17)

In other words, the randoms rate between two individual detectors may be found from the normalized sum of groups of detectors including the pair of interest. Since the rates are summed, the total counts increase; thus, the variance in the estimated randoms rate is reduced. The improvement in the variance depends on the number of detectors in the group (bank), n_bank. The ratio of the variance when using a single LOR, σ²(1), to that when using a group of n_bank LORs, σ²(n_bank), is:

σ²(1) / σ²(n_bank) = n_bank / 2    (3.18)

For example, if a group of 64 detectors is used in this scheme, the variance is reduced by a factor of 32. Therefore, improvements in reducing randoms variances come from choosing larger, but appropriate, groups of detectors to use in the sum.

3.4.3 Variance Reduction Including Full 3D Data

The basic method of Casey and Hoffman used only banks of detectors from the same slice, that is, contributing to the same sinogram. This technique was also limited to using the randoms rate per LOR rather than the singles rate per detector. Improved versions of this technique involve more detectors in a single plane and/or detector pairs from oblique LORs in 3D acquisition. These methods were originally developed for normalization variance reduction [53]. Individual techniques differ by whether they include single-plane or 3D coincidences, are exact or approximate, and whether the correction is done for individual detectors or LORs. The effectiveness of three of these techniques on randoms variance has been assessed for systematic accuracy and variance reduction efficacy in phantom and clinical studies [74].
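The Casey-Hoffman recovery of Eq. 3.17 can be sketched on a toy randoms matrix built from known singles rates; for noiseless rates the bank-sum estimate reproduces the per-LOR randoms exactly. The bank sizes, singles rates, and timing window below are illustrative assumptions:

```python
import numpy as np

# Sketch of the Casey-Hoffman estimate (Eq. 3.17) on a toy randoms matrix.
rng = np.random.default_rng(1)
singles_a = rng.uniform(1.0, 2.0, 8)   # singles rates, one bank of detectors
singles_b = rng.uniform(1.0, 2.0, 8)   # singles rates, opposing bank
tau2 = 2 * 6e-9                        # coincidence window 2*tau

true_randoms = tau2 * np.outer(singles_a, singles_b)  # R_ij = 2*tau*S_i*S_j

row_sum = true_randoms.sum(axis=1, keepdims=True)  # R_{r,i,sum(j)}
col_sum = true_randoms.sum(axis=0, keepdims=True)  # R_{r,sum(i),j}
total = true_randoms.sum()                         # R_{r,sum(i),sum(j)}
estimate = row_sum * col_sum / total               # Eq. 3.17
```

In practice the bank sums are measured counts, so the recovery is only approximate, but the summed counts carry the reduced variance of Eq. 3.18.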
All three methods showed reduction in variance. The one method that made a priori assumptions about the singles distribution showed a low frequency bias, but the other two did not. Phantom measurements found that variance reduction increased the signal-to-noise by ~5% to ~15% for the specific tomograph used in the study operating in 3D acquisition. Another randoms variance reduction scheme uses a Bayesian estimation method to compute the statistical mean values for each pixel of the randoms sinogram [75]. This was used to incorporate randoms smoothing into the Maximum a Posteriori (MAP) reconstruction method. The smoothed randoms sinogram was estimated from the measured randoms sinograms and the intrinsic detector efficiencies extracted from blank transmission scan data. However, it was later found that the singles detector efficiencies extracted from trues coincidence data differed significantly from the singles efficiencies involved in random coincidences, and the method was revised [76]. Instead, a maximum-likelihood (ML) estimate of the mean singles rate at each detector is calculated, from which a mean randoms sinogram is found. The authors acknowledge that this method is similar to that of Casey and Hoffman, except that a true ML estimate is used. The variance reduction resulted in lower visible noise in the sinograms. As well, when high and low count randoms data were both corrected by this method, there was only a small percentage squared difference between them. Biases introduced from this method were not assessed.

3.5 Singles-based Corrections

3.5.1 Introduction

The contribution of random coincidences to the measured prompt coincidences in a line of response may also be estimated indirectly through the measured detector singles rates. The randoms rate for the line of response joining any two detectors is given by (3.3).
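Evaluated for every detector pair at once, the singles-to-randoms relation of Eq. 3.3 is a single outer product. A sketch with illustrative singles rates and timing window (not MicroPET R4 values):

```python
import numpy as np

# Randoms rate from singles rates (Eq. 3.3): R_ij = 2*tau*S_i*S_j.
tau = 6e-9                                  # coincidence window half-width, s
singles = np.array([2.0e5, 1.5e5, 1.0e5])   # singles rate per detector, cps
randoms = 2 * tau * np.outer(singles, singles)  # randoms rate per LOR, cps
```

For detectors 0 and 1 this gives 2 x 6e-9 x 2.0e5 x 1.5e5 = 360 randoms/s; because the singles rates are far higher than the randoms rates, the estimate carries much lower relative variance than a delayed-window measurement.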
A number of singles-based randoms correction methods have been developed for different systems, each designed to work within the technical limitations of the device. For example, a number of singles-based randoms corrections were proposed for dual-head coincidence imaging systems which could not measure delayed coincidences [77] [78]. Only the most important singles-based methods for dedicated PET are reviewed below.

3.5.2 Singles-based Randoms Correction for Dedicated PET Tomographs

The most direct method of singles-based randoms correction for dedicated PET tomographs is to measure the singles rate per detector during acquisition. One early investigation of this technique used a PENN-PET 240H tomograph with large area NaI(Tl) detectors [79]. In this method, the singles distribution is first measured across the detectors in a separate scan because detector singles rates are not normally stored by this tomograph. Then, the randoms sinograms were calculated using (3.3). These were then scaled using an empirically determined relationship between randoms rate and singles rate and subtracted prior to reconstruction. For phantom and human studies, the resulting calculated randoms rates reproduced the values measured by delayed coincidence to within 17%. Unsurprisingly, phantom and human studies corrected with this method showed improved contrast compared to those corrected with a smooth background subtraction. Singles-based randoms correction was also developed for the G.E. Advance™ dedicated PET camera [80]. For each detector, the count information is stored for every single event that meets the energy threshold requirement. Therefore, the singles distribution is directly measured and the resulting randoms rates may be calculated from (3.3). The accuracy of this method was tested by comparing the total measured delayed randoms counts in six slices of a patient study to those calculated from the singles.
In these slices, the randoms count rates agreed to within < 0.25% for both 2D and 3D patient data sets. The randoms counts per sinogram pixel were also compared. The pixel values measured by delayed coincidence were found to be Poisson distributed about the corresponding pixel values from the calculated sinograms, suggesting that the differences were due mainly to noise rather than biases. The authors conclude that the method is highly accurate compared to the delayed coincidence method. No estimate of the noise reduction efficacy was made. In most tomographs the singles counts per crystal are not measured or stored during acquisition. However, the singles rate per detector block or bucket (group of four detector blocks) is measured and usually stored in the data file header. One method of singles-based randoms correction uses the measured singles rate per bucket [81]. It begins with the assumption that the ratio of the randoms rate in a detector pair to the product of the singles rates in the buckets containing the detectors is a constant. First, a high statistics reference scan is performed using a uniform elliptical phantom to produce a low variance randoms sinogram. Then, a singles sinogram is calculated for each crystal pair using the average bucket singles rates. This requires smoothing at the abrupt edges caused by the borders of the buckets. The measured and calculated sinograms are then divided to produce a correction sinogram. This correction sinogram takes into account the relative crystal efficiencies. During subsequent scans only the bucket singles rates are required. The randoms sinogram used for correction is created by first calculating a preliminary randoms sinogram using only the bucket singles rates. This sinogram is multiplied by the predetermined correction sinogram. Therefore, clinical studies require only bucket singles rates for randoms correction, not delayed coincidences or individual detector singles.
However, this method is prone to biases arising from the averaging of the singles rates over groups of detectors, rather than using individual crystal singles rates. As well, biases may be introduced by the calibration sinogram. This method reproduces the total measured randoms rates to within 5% for a variety of imaging situations. Sample calculated randoms distribution profiles show visual agreement with measured randoms, but contain less noise. The sinograms also show lower standard deviation than those from delayed coincidence measurements. Finally, this method was shown to increase the NEC rates by up to 25% for clinical studies, compared with delayed coincidence measurements.

In a similar method [82], a reference scan is not used. Instead, the measured intrinsic detector efficiencies from component-based normalization data [54] are used to account for the differences in crystal singles efficiencies. However, this introduces a bias if there is any difference between singles and coincidence events in pulse-pileup, dead-time or energy discrimination. This method also suffers from the same biases as the previous method, caused by averaging the singles over groups of detectors. However, this bias is modeled in a global scaling factor that varies with the detector singles rates (to account for dead-time differences and the effects of patient size). The randoms are calculated from the singles using a variant of (3.3):

R_r,i,j = α(R_s,i, R_s,j) 2τ s_i s_j R_s,i R_s,j

where
s_i = singles efficiency for detector i.
α(R_s,i, R_s,j) = global scaling factor.

This method showed improvements in the signal-to-noise ratio (SNR) for contrast phantoms, similar to those obtained by variance reduction. However, biases resulted in large variations in the correction factor; α(R_s,i, R_s,j) was found to vary from 0.80 to 1.13 for objects varying from the size of a head to that of an abdomen.
3.6 Discussion of Low Variance Randoms Correction Methods

3.6.1 Issues with Smoothed-Delays Randoms Correction

While variance reduction is effective, it requires the measurement, storage and manipulation of the delayed coincidence data. The measurement of delayed coincidences increases the demands on the tomograph's coincidence processor, which results in higher dead-time. This limits the count rate capabilities of the system. As well, the delays require separate storage before subtraction, which increases the data storage demands, whether the data are directly binned in histograms or stored in list mode. Variance reduction also requires that the delays measurement be pre-processed before subtraction, which increases the computing time necessary to produce the final image. The gains from randoms variance reduction depend on the number of detector pairs included in the summation. Some methods have inherent biases if they make assumptions about the uniformity of the singles distribution, or if they require singles detector efficiencies. These biases may ultimately affect the accuracy of the randoms correction. Finally, the achievable variance reduction is still limited by the number of measured delays. For short scans or large numbers of LORs, the delayed counts may still be small.

3.6.2 Issues with Singles-based Randoms Correction

The singles-based randoms correction method requires the measurement and storage of the singles counts for every crystal, rather than measured delayed coincidences. This frees up the coincidence processor and reduces the storage demands by a factor of two. Unfortunately, most tomographs do not presently have the capability to measure and store individual crystal singles counts. Some singles-based methods require only the measured singles rates for blocks or buckets, which are often already available, so that the data measurement and storage demands are not significantly increased.
However, using larger groups of detectors introduces biases due to variations in the singles distributions over the crystals in a block and due to differences in the relative crystal efficiencies. Other subtle issues may affect the accuracy of a singles-based estimate of randoms. First, the singles must be measured with the same signal processing parameters (preamplifier gain, energy window, and digitization) as the coincidence events; otherwise, the singles would yield an estimate of the randoms under different conditions. The singles must also experience the same dead-time losses or be accurately corrected for dead-time. Since dead-time is mainly attributable to scintillator decay, the dead-time experienced by single and coincidence events is comparable, provided the coincidence processor does not significantly increase the dead-time. As well, in tomographs with intrinsic radioactivity in the detectors, such as LSO, the constant singles background must be taken into account in the randoms correction.

3.6.3 Assessment of Variance Reduced Randoms Correction Methods

An assessment of the effects of the different variance reduction randoms correction methods on data and image quality has been performed by Brasse et al. [82]. This work compared the impact of smoothed-delays and singles-based randoms correction. They concluded that both methods increased the NECR, leading to improvements in the image SNR of about 15% for both filtered backprojection and iterative reconstruction. Although this improvement is unnoticeable in the images, it corresponds to an increase in the NECR of 32%. This study concluded that if the original image SNR is acceptable, the reduced variance in the randoms correction would allow a 25% reduction in patient scan times. The study also found that the singles-based method performed marginally worse than the smoothed-delays method because of biases from using singles rates from large numbers of detectors.
In summary, randoms variance reduction is of value in reducing the noise in the image data. While singles-based randoms correction reduces the workload of the coincidence processor and the data storage demands, the method is subject to biases if singles counts are not recorded for each crystal. As well, if the singles and coincidence counts are not processed identically, other biases may occur. Therefore, a new singles-based randoms correction has been developed. In this new method, the single events in each crystal are found from a model-based calculation, so that no biases are introduced by averaging over many detectors. Also, no measured singles distributions are required because the singles are calculated only from the preliminary reconstructed activity and attenuation images, making this method applicable to any coincidence imaging device. The singles distribution is smooth, so that the randoms distribution used for the correction is essentially noiseless (i.e. k_rand ≈ 1 in the NECR equation). This novel model-based randoms correction is the subject of Chapter Four and the remainder of this thesis.

Chapter 4  A Model-Based Approach to Randoms Correction

4.1 Introduction

4.1.1 Chapter Overview

This chapter details the theory of the model-based randoms calculation, beginning with an overview of the method. Then, the details of the model-based singles distribution computation are given, including the calculations of the solid angle, photon survival and the detector efficiencies. The effects of photon scatter in the object on the calculations are then described. The chapter ends with an explanation of how the randoms distribution is calculated from the singles and the scaling required before subtraction. Chapter Five describes the specific implementation of this method for the MicroPET R4 tomograph.
4.1.2 Overview of the Method

The model-based randoms correction is a post-acquisition method of estimating the randoms distributions from calculated singles distributions. The calculation uses only the preliminary reconstructed images of the radioactivity distribution and the attenuation coefficients, and the measured global singles rate. Neither delayed coincidence data nor individual crystal singles data are required. The calculated randoms distributions are smooth and therefore do not increase the noise in a randoms-subtracted data set. The heart of this method is the model-based calculation of the singles distribution. This method differs from traditional Monte Carlo calculations in that the histories of individual photons are not followed in detail. The probabilities of detecting singles and trues are calculated from a model of the single photon detection probability rather than by following a large number of photon histories. This increases the computational efficiency and reduces the calculation times. Thus, the method is comparable to model-based scatter correction [59] [61] and normalization [52] schemes. It is also similar to other model-based simulations [83], except that the randoms correction calculation is driven by the data. The following is a broad outline of the randoms calculation, with the details explained in the rest of the chapter. A set of activity sample points is generated within the radioactivity and attenuation
The calculated singles distribution is scaled to the measured average global singles rate. Then, the randoms rate per L O R is calculated from the singles distribution for every L O R acquired by the tomograph. The calculated randoms distribution is spatial variant. Finally, the total number of randoms events for the given acquisition time is calculated and the randoms are subtracted from the prompts data. The randoms corrected data may then be reconstructed normally. The method may be applied to static or dynamic data sets. For dynamic data, a radioactivity image must be reconstructed, and a different randoms correction calculated, for each time frame. 4.2 Singles Distribution Calculation 4.2.1 T h e P r e l i m i n a r y R a d i o a c t i v i t y a n d A t t e n u a t i o n Coe f f i c i en t I m a g e s The first step in the model-based randoms calculation is the preliminary reconstruction of the radioactivity distribution and attenuation coefficient images from the measured emission and transmission data. Since no delays are measured or subtracted, the emission data consist of prompts. The prompts sinograms are corrected for attenuation, normalized, and then reconstructed using standard reconstruction techniques. A n accurate attenuation image is required for both the attenuation correction and for calculating photon survival in the singles calculation. 4.2.2 G e n e r a t i o n o f A c t i v i t y S a m p l e P o i n t s The positions of the activity sample points are found by randomly generating x, y, and z locations within the volume of the activity distribution. The positions are absolute and are not discretized to the voxel centres. The total number of points generated is determined by the pre-defined sampling 11 density (sample points per ml). Increased sampling density improves the accuracy of the singles distribution calculation, but at the expense of increased computation time. 
The optimal sampling density is the smallest value that produces negligible differences between identical calculations of the singles distributions. To improve the computational efficiency, a threshold is used to remove the sample points with low relative activity values from the sample set used in the calculation. The activity values of the sample points are still used in the calculation of the singles distribution; the threshold sampling simply determines the lowest activity used in the calculation. The threshold value is set as a percent of the maximum reconstructed activity value. The threshold also has the benefit of removing points in the background (randoms or scatter) from the calculation if the threshold is high enough. The optimal radioactivity threshold setting may be determined empirically by adjusting it so that there are no generated sample points outside the boundaries of the object. Figure 4-1 shows a set of activity sample points from a small bottle with uniform radioactivity.

Figure 4-1 A set of randomly positioned radioactivity points (blue circles) from a uniform cylindrical bottle with a neck. The green circles and lines show the FOV of the tomograph.

The total number of sample points used in the singles calculation depends not only on the sampling density and the threshold, but also on the activity distribution. A greater dynamic range of activity concentration values requires a lower threshold for accurate results. This translates into more points for a given sampling density. In general, the singles calculation requires a few thousand points for accurate results when a complex activity distribution is used. The results of testing the values of the threshold and sampling point density are described in Section 6.3.4.
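The sample-point generation described above can be sketched as follows. The toy activity image, voxel size, sampling density, and threshold below are illustrative assumptions, not the MicroPET R4 parameters:

```python
import numpy as np

# Sketch of Section 4.2.2: uniform random positions in the image volume,
# then a relative-activity threshold to discard low-activity points.
rng = np.random.default_rng(2)

activity = np.zeros((16, 16, 16))
activity[4:12, 4:12, 4:12] = 1.0   # toy uniform object stand-in
voxel_mm = 2.0
density_per_ml = 5.0               # sample points per ml
threshold = 0.05                   # fraction of the image maximum

volume_ml = activity.size * (voxel_mm / 10.0) ** 3  # voxel volume in cm^3 = ml
n_points = int(density_per_ml * volume_ml)

# continuous positions in voxel units, not snapped to voxel centres
pts = rng.uniform(0.0, np.array(activity.shape), size=(n_points, 3))
vals = activity[tuple(pts.astype(int).T)]
keep = vals > threshold * activity.max()
sample_points, sample_vals = pts[keep], vals[keep]
```

With a suitable threshold, only points inside the object survive, mirroring the empirical tuning described in the text.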
The random nature of the activity image sampling, along with the statistical noise in the image, causes differences between each realization of the calculated singles distribution even when the calculation parameters are identical. These variations do not appear as random fluctuations in the individual pixel values. Instead, they cause small differences in the overall smooth shape of the calculated singles distributions between otherwise identical calculations. Thus, noise in the radioactivity image manifests itself in small biases in the singles distribution. As well, different sets of activity sampling points lead to small differences in the calculated singles distributions between realizations. The magnitude of the random biases from both processes depends on the sampling parameters and is inversely related to the number of sample points used in the calculation.

4.2.3 Single Photon Detection Probability

Once generated, the set of activity sample points is used as the radioactivity distribution from which the singles distribution is calculated. The probability of detecting single photons originating at each sample point is calculated for every detector in the tomograph using an analytic calculation based on a detection model that includes the:

• solid angle and other geometric effects,
• photon attenuation based on the reconstructed attenuation map, and
• detector efficiencies.

Thus, the probability of annihilation photons from activity sample point, i, being counted by a detector, A, is:

P_det,i,A = Ω_i,A P_surv,i,A ε_i,A    (4.1)

where
Ω_i,A = solid angle of detector A from sample point i.
P_surv,i,A = photon survival probability in the object from i to A.
ε_i,A = efficiency of detector A for photons originating at i.

The calculation of the singles distribution uses (4.1) for the primary annihilation photons. However, the trues distribution may be simultaneously calculated if the detection probability of the annihilation partner photons is also found. This additional computation was included to provide an additional test of the accuracy of the photon detection model, but does not directly impact the randoms calculation.
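The product in Eq. 4.1 reduces to one line of code once its three factors have been computed; a minimal sketch with illustrative factor values (the function name and numbers are assumptions):

```python
# Sketch of Eq. 4.1: the single-photon detection probability is the product
# of the solid angle, photon survival, and detector efficiency factors.
def detection_probability(omega, p_surv, eff):
    return omega * p_surv * eff

# illustrative values: small solid angle, 70% survival, 60% efficiency
p_det = detection_probability(omega=1.2e-4, p_surv=0.7, eff=0.6)
```

The following sections derive each of the three factors in turn.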
This additional computation was included to provide an additional test of the accuracy of the photon detection model, but does not directly impact the randoms calculation. 4.3 Geometric Calculations 4.3.1 Introduction A Cartesian coordinate system is used for the solid angle calculations. In the right handed Cartesian coordinate system, the transverse slices are in the x-y plane and z is the axial direction, with the positive direction being from back to front. The detectors in the tomograph form a cylindrical detection surface, as shown in Figure 4-2. 80 Figure 4-2 The coordinate system for a cylindrical dedicated tomograph. The z-axis goes from back to front. The scintillator material forms an annular volume with the inner radius = the radius of detector ring, rring and the outer radius = rring + xda, where xdtl - detector thickness. Each detector crystal corresponds to a square or rectangular area on the inner cylindrical surface of the tomograph. Most tomographs also have an electronically defined F O V within the detector ring that defines the radial limit of accepted L O R s and, therefore, the radial extent of the sinograms. The axial F O V is defined by the z extent of the tomograph. The relationship between these physical dimensions is shown in Figure 4-3. 81 Figure 4-3 The tomograph physical parameters relevant to the solid angle calculation. Note that the crystal dimensions are not to scale. 4.3.2 Solid Angle of the Primary Photon The probability of counting a photon in a given detector, A, depends first on the solid angle presented by the crystal to the activity sample point. The solid angle is given by: (4.2) where aA — area of the detector A at normal incidence. YiA ~ photon incident angle from sample point i to detector A. riA - distance from sample point i to detector A. The elements of the solid angle calculation are shown in Figure 4-4. 82 [elector A Figure 4-4 The elements of the solid angle calculation from activity sample point i to detector A. 
For computational economy, the solid angle calculation treats the detector crystals as simple area elements rather than volumes. The solid angle is not calculated at the front face of the crystal, but at a depth equal to the mean depth of photon interactions in the crystal (e.g. 4.5 mm for 1.0 cm thick LSO). The distance from the activity sample point to the detector, r_i,A, is then found by:

r_i,A = |x_i − x_A|    (4.3)

where
x_i = coordinates of the activity sample point i.
x_A = coordinates of detector A.

The vector from i to A is r_i,A = x_i − x_A = (x_i − x_A, y_i − y_A, z_i − z_A). The vectors describing the normals to the detectors are simply the detector positions in the x-y plane and zero in the axial direction, i.e. n_A = (x_A, y_A, 0). The incident angle, γ_i,A, is the angle between r_i,A and n_A, and is found from:

cos γ_i,A = (r_i,A · n_A) / (|r_i,A| |n_A|) = (r_i,A · n_A) / (r_i,A r_ring)    (4.4)

where r_ring is the radius of the detector ring. The incident angle is used both in calculating the solid angle and in determining the effective thickness of the detectors (4.11).

4.3.3 Solid Angle of the Partner Photon

When calculating the probability of a coincidence, the probability of detecting the annihilation partner photon must also be calculated. However, it is not necessary to determine which crystal the partner photon hits, because of its collinearity with the primary photon; thus, the detector solid angle is not calculated explicitly. It is only necessary to determine if the partner photon is incident on the tomograph detector ring. This is found by calculating the axial position of the partner photon on the path r_i,B when its radial position is r_ring, i.e. at x² + y² = r²_ring. Partner photons which miss the tomograph are not detected and, therefore, cannot result in a coincidence, only a singles event. Thus, P_det,i,B is calculated as:

P_det,i,B = P_surv,i,B ε_i,B    for |z_p| ≤ z_tomo
P_det,i,B = 0    for |z_p| > z_tomo    (4.5)

where
r_i,B = path of the annihilation partner photon.
ε_i,B = detector efficiency for the annihilation partner photon = ε_i,A.
z_p = axial position along r_i,B when x² + y² = r²_ring.
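The geometry of Eqs. 4.2-4.4 can be sketched directly. The function below assumes a detector on a cylindrical ring with a radial normal; the ring radius and detector area are illustrative values:

```python
import numpy as np

# Sketch of Eqs. 4.2-4.4: solid angle of a detector area element on a
# cylindrical ring, seen from an activity sample point.
def solid_angle(point, det_pos, det_area):
    """Omega = a_A * cos(gamma) / r^2 with a radial detector normal."""
    point, det_pos = np.asarray(point, float), np.asarray(det_pos, float)
    r_vec = point - det_pos                          # r_iA = x_i - x_A
    r = np.linalg.norm(r_vec)                        # Eq. 4.3
    normal = np.array([det_pos[0], det_pos[1], 0.0]) # n_A = (x_A, y_A, 0)
    cos_gamma = abs(np.dot(r_vec, normal)) / (r * np.linalg.norm(normal))  # Eq. 4.4
    return det_area * cos_gamma / r**2               # Eq. 4.2

# point at the ring centre: normal incidence, so Omega = a_A / r_ring^2
omega = solid_angle([0.0, 0.0, 0.0], [6.0, 0.0, 0.0], det_area=0.04)
```

For a point at the centre of a 6 cm radius ring and a 0.04 cm² crystal face, this gives a_A/r²_ring ≈ 1.1e-3 sr, and the cos γ factor shrinks the solid angle for off-centre points.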
z_tomo = axial extent of the tomograph ring.

4.3.4 True Coincidence LOR Calculation and Corrections

The probability of a true coincidence in 3D acquisition is found for the line of response (LOR) defined by r_i,A using the sinogram coordinates x_r, θ, z, and φ. Perfect collinearity is assumed for all events. The relationship between the sinogram axes and the tomograph geometry is shown in Figure 4-5. The radial bins of the sinograms are the parallel LORs for a given azimuthal angle. The transverse image planes are the slices in the axial direction.

Figure 4-5 The axes of the sinograms are related to the tomograph parameters for transverse slices.

Before binning, the calculated trues distributions are corrected for the changing sampling distances in the radial direction. This "arc effect" reduces radial bin sizes as the displacement from the tomograph axis increases and, therefore, influences the relative counts per bin. An arc correction is usually applied to sinograms prior to reconstruction to prevent image distortion and artifacts from the non-uniform sampling. Arc effects do not occur for single photon detection and, thus, do not influence the randoms sinograms.

4.4 Photon Survival Calculation

4.4.1 Photon Attenuation

The probability of a photon emitted at sample point i being detected at detector A is reduced by attenuation in the object along r_i,A. The intensity reduction is object and photon path dependent; thus, the calculation requires sampling the reconstructed attenuation image. The probability of photon survival is given by the line integral:

P_surv,i,A = exp( −∫_{x_i}^{x_A} μ(x,y,z) ds )    (4.6)

where
μ(x,y,z) = position dependent attenuation coefficient.
x_i = coordinates of activity sample point i.
x_A = coordinates of detector A.
ds = interval along r_i,A.

In the photon survival calculation, the 3D attenuation image is sampled at discrete points, s_j, at intervals of ds along r_i,A.
The points are generated using the line parameters of r_i,A from x_i to the edge of the electronic FOV:

s_{j+1} = s_j + ds (cos θ sin φ, sin θ sin φ, cos φ)    (4.7)

where s_j = attenuation sampling points; ds = sampling interval distance; θ = azimuthal angle of r_i,A; φ = oblique angle of r_i,A.

The attenuation coefficient for each sample point, μ_j, is found by indexing the voxel in the attenuation coefficient image corresponding to the coordinates of the attenuation sampling point, s_j, and looking up the attenuation coefficient value for that voxel, μ(s_j). The photon survival probability for each interval ds is P_j = e^(−μ_j ds). Then, the total survival probability along r_i,A is:

P_surv,i,A = Π_j e^(−μ_j ds)    (4.8)

where j = attenuation sampling point; P_j = photon survival at j; μ_j = attenuation coefficient at j.

The probability of a coincidence depends on the survival of both annihilation photons. To calculate the survival of the partner photon, the attenuation object is sampled in the opposite direction (i.e. −r_i,A). The total survival is calculated by (4.8), with an opposite, but collinear, set of attenuation sample points. Figure 4-6 shows the attenuation object sample points for both photons, along r_i,A and −r_i,A, respectively.

Figure 4-6 Attenuation sample points along the path from an activity sample point to a detector. Both the primary (blue) and partner (pink) sample points are shown. The green lines show the FOV of the tomograph.

The photon survival probability from an activity sample point is calculated for every detector in the tomograph. This requires a large amount of sampling of the attenuation object, and constitutes the majority of the computational burden in the randoms calculation. The photon survival calculation assumes narrow-beam attenuation: that all the photons interacting on the path r_i,A are completely lost.
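The sampling loop of (4.7)-(4.8) can be sketched as follows. This is a minimal Python illustration (the thesis implementation is in MATLAB); the function and parameter names are hypothetical, and the attenuation image lookup is abstracted as a callable `mu_lookup`:

```python
import math

def photon_survival(mu_lookup, start, direction, ds, n_steps):
    """Narrow-beam survival along a path, per (4.7)-(4.8): step along the
    unit direction at intervals ds, look up mu at each sample point, and
    multiply the per-step survivals exp(-mu_j * ds)."""
    x, y, z = start
    dx, dy, dz = direction[0] * ds, direction[1] * ds, direction[2] * ds
    survival = 1.0
    for _ in range(n_steps):
        x, y, z = x + dx, y + dy, z + dz          # next sample point s_j
        survival *= math.exp(-mu_lookup(x, y, z) * ds)
    return survival
```

For a uniform object the product of the per-step factors reduces, as expected, to e^(−μ x) over the total pathlength, which provides a simple consistency check on the step size ds.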
However, 511 keV photons interact in tissue almost exclusively by Compton scatter; thus, the narrow-beam calculation underestimates the total photon survival because a fraction of the scattered photons are still counted. Section 4.6 describes the modifications to the photon survival calculations necessary to account for photon scatter in the object.

4.5 Detector Efficiency Calculations

4.5.1 Detector Efficiency

The last component of the single photon detection probability calculation is the efficiency of the tomograph detectors. The total detection efficiency, ε_det, is the product of the detector intrinsic efficiency, ε_int, and the fraction of the events with deposited energies within the acquisition energy window, f_win:

ε_det = ε_int f_win    (4.9)

The detector intrinsic efficiency is defined as:

ε_int = 1 − e^(−μ_det x_det,eff)    (4.10)

where μ_det = attenuation coefficient of the detector material; x_det,eff = effective detector thickness.

For normally incident photons, x_det,eff is simply the detector thickness. However, the effective detector thickness increases with the incident angle, γ:

x_det,eff = x_det / cos γ_i,A    (4.11)

where x_det = detector thickness; γ_i,A = photon incident angle.

The effective pathlength of photons in the intrinsic efficiency calculation is also affected by the saw-cuts in cut-block detectors. The saw-cuts not only decrease the packing fraction, they reduce the total thickness of scintillator encountered by the photons passing through the detector by the ratio:

f_pack = crystal size / crystal spacing    (4.12)

Therefore, the effective detector thickness used in the efficiency calculation is:

x_det,eff = f_pack x_det    (4.13)

4.5.2 Energy Window Fraction

The energy window fraction, f_win, is the fraction of the photons interacting in the scintillator crystal that deposit energies greater than the lower energy threshold. Photons that deposit energies less than the lower threshold are not counted, resulting in reduced detection efficiency.
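The intrinsic efficiency of (4.10)-(4.13) can be sketched numerically. This is an illustrative Python fragment, not the thesis code (which is in MATLAB); combining the oblique-incidence factor (4.11) and the packing fraction (4.13) into a single effective thickness is my reading of the text, and the μ_det value used below (0.87 cm⁻¹, roughly LSO at 511 keV) is an assumed illustrative number:

```python
import math

def intrinsic_efficiency(mu_det, x_det, cos_gamma, f_pack=1.0):
    """Intrinsic efficiency per (4.10): 1 - exp(-mu_det * x_eff), with the
    effective thickness reduced by the packing fraction (4.13) and grown
    by 1/cos(gamma) for oblique incidence (4.11)."""
    x_eff = f_pack * x_det / cos_gamma
    return 1.0 - math.exp(-mu_det * x_eff)
```

As expected, oblique incidence (smaller cos γ) lengthens the path through the scintillator and raises the efficiency, while saw-cuts (f_pack < 1) lower it.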
The value of the threshold is user controlled (typically 250 keV or 350 keV), and the optimal values depend largely on the energy resolution of the detectors. 511 keV photons that interact in the detectors by photoelectric absorption deposit their full energy and are counted in the energy window. However, even in scintillators with high density and high atomic number, the probability of photoelectric absorption of 511 keV photons by a single interaction is small (e.g. 32.5% in LSO). Most of the photons Compton scatter in the scintillator. Those that undergo Compton scatter, but are then absorbed in the detector ("Compton-absorption"), deposit their full energy. However, 511 keV photons that Compton scatter once in the crystal, without a secondary interaction, deposit only a fraction of their total energy: between 0 and 340 keV for 511 keV photons. Neglecting the effects of detector energy blurring, all these events would be rejected by a 350 keV threshold window and a large fraction would be rejected by a 250 keV window. The contribution of Compton-absorption is not calculated for each photon path and detector, but it should increase with longer effective detector thickness x_det,eff. Therefore, since f_win is dependent on the effective detector thickness, the detector efficiency calculation might be improved by not assuming that ε_int and f_win are separable, as was done in (4.9) and (4.10). The effect of the energy window on detector efficiency is then incorporated by modifying the exponent in the intrinsic efficiency equation:

ε_det = 1 − e^(−κ_det μ_det x_det,eff)    (4.14)

where κ_det = detection efficiency correction factor to account for the effect of energy thresholds.

The value of κ_det may be estimated for a given tomograph detector design (Section 5.4.3), but empirical adjustment may be necessary to reproduce measured data (Section 6.3.2). As well, since κ_det is dependent on the energy threshold, different values of κ_det are required for different energy window settings.
The values of f_win and κ_det would also be largely dependent on photon scatter in the object, which reduces the photon energies. However, in this calculation, the detection of the object-scattered photons is calculated separately from the un-scattered photons, as described in Section 4.6.

4.5.3 Relative Crystal Efficiencies

The efficiencies of individual crystals vary systematically depending on their position in the detector block. For example, photons that scatter in the crystals near the block edge have a lower probability of being absorbed in the same block than photons scattered in the centre crystals. Therefore, the edge crystals have a systematic reduction in efficiency. True coincidences are corrected for these efficiency variations on a LOR basis using the normalization scan data (Section 2.4.4). However, the efficiency variations produce a "crystal block structure" pattern on the singles distributions. This creates a diagonal pattern on the randoms sinogram due to the reduced efficiencies of the edge crystals, as shown in Figure 4-7.

Figure 4-7 The diagonal pattern seen in a measured randoms sinogram. This pattern is caused by the "block structure," the systematic efficiency variations within a detector block.

The systematic, block-structure dependent, singles efficiency variations are incorporated in the calculation of the singles distribution. The singles efficiencies are modeled assuming that the efficiencies of the crystals at the same relative position within the block are identical. The relative block singles efficiencies are determined using Monte Carlo simulations, with adjustments made to improve the agreement of the high frequency details between measured and calculated randoms distributions (Section 5.4.5). This block structure is then superimposed on the calculated singles distribution before the randoms distributions are calculated.
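The superposition of the block structure amounts to tiling one per-crystal relative-efficiency pattern across all blocks. A minimal Python sketch (illustrative only; the pattern values below are hypothetical, not the Monte Carlo-derived ones):

```python
def apply_block_structure(singles, block_pattern):
    """Tile a relative-efficiency pattern (one entry per crystal position
    within a block) across a row of calculated singles rates, so every
    block receives the same systematic structure (Section 4.5.3)."""
    n = len(block_pattern)
    return [s * block_pattern[i % n] for i, s in enumerate(singles)]
```

Because the same pattern is reused for every block, a flat calculated singles distribution acquires exactly the periodic "block structure" modulation seen in the measured randoms sinograms.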
One model of the block structure is used to correct all the detector blocks. There are also random variations in the individual crystal singles efficiencies. These arise from the PMT tuning, scintillator and PMT non-uniformities and other subtle effects. Random efficiency variations are not incorporated into the singles calculation because they are not directly measured. While singles detector efficiencies are extracted in some normalization schemes [53][54], they are measured using true coincidences. Thus, they reflect the singles efficiencies for a smaller range of incident angles than the single photons involved in the random coincidences, and are not appropriate for correcting the calculated singles distribution [76].

4.6 The Effects of Photon Scatter in the Object

4.6.1 Broad-beam Correction

The photon survival calculation (4.8) is only accurate in a narrow-beam situation where attenuation is equivalent to count losses. It assumes that all photons interacting on the path r_i,A are not detected. However, since nearly all the interactions of 511 keV photons are by Compton scatter, very few photons are absorbed in the object. In a PET scanner, count losses only occur if the scattered photons physically miss the tomograph or if the scattered photons are rejected by the energy thresholds. Thus, (4.8) overestimates the count losses because a fraction of the scattered photons are counted. To account for the lower count losses, the narrow-beam attenuation coefficients from the μ-map used in (4.8) are modified by a broad-beam correction factor, κ_broad < 1; thus, the photon survival calculation becomes:

P_surv = Π_j e^(−κ_broad μ_j ds)    (4.15)

This correction increases the calculated photon survival and, therefore, increases the effective singles counts at each detector. This is similar to the build-up factor used for corrections involving broad-beam geometries [84] and is in agreement with the lowered attenuation coefficients seen in 3D PET [85].
κ_broad is a correction to the photon survival calculation along each path and is derived from the probability of the scattered photons being detected. It is assumed that the reduction in the counts in a detector caused by photons scattering out of the original path is roughly balanced by those that scatter into the detector from other paths (Section 4.6.3). Since every photon path from the activity sample point to each detector is sampled, the net effect of this modification is to correct for the broad-beam nature of the photon detection. This broad-beam correction is based on the probability of the photons that single-scatter in the object being counted somewhere in the tomograph. This depends on the energy distribution of single-scattered photons, the energy-dependent efficiency of the detectors and the value of the energy threshold. Thus, the value of κ_broad for a given photon path may be estimated from a calculation of the fraction of the object-scattered photons that are counted in the tomograph (Section 4.6.2). Since κ_broad for a given path depends on the tomograph design and acquisition parameters, it becomes part of the model for a specific tomograph and acquisition protocol (Section 5.4.4). Finally, in this approach, the value is adjusted based on empirical studies (Section 6.3.3) and the dependence of the accuracy of the calculation on the value of κ_broad is investigated. In general, build-up factors vary with the product of the narrow-beam attenuation values and the thickness, μx. They also vary with the photon energy and attenuating material. In the present calculation, κ_broad is a single correction value for the average broad-beam build-up that is used for all paths as a first approximation.
In practice, scaling the calculated singles distribution to the measured global singles rates makes the calculation less sensitive to the value of κ_broad for different sized objects. As well, the final singles counts in each detector are calculated using contributions from many activity sample points with many different paths; thus, errors associated with using an average value of κ_broad for all paths tend to cancel out. In situations where this approximation could potentially lead to inaccurate results (for example, when applied to larger objects such as whole body human studies), a path dependent value of κ_broad could easily be implemented in the calculation (Section 7.2.4). Although the broad-beam correction accounts for the reduced count losses due to scatter, it does not address the issue of displaced counts. The photons scattered from a path r_i,A would not be counted in detector A, but in a different detector, potentially affecting the singles distribution. The effects of the angular distribution of scattered photons are discussed in Section 4.6.3.

4.6.2 Estimate of the Fraction of the Object-Scattered Photons that are Counted

The fraction of the object-scattered single photons that are absorbed in the detector and counted, f_scat,abs, depends on a number of factors [86], including the:

• energy distribution of the scattered photons, f_KN(E_sc),
• energy dependence of the detector intrinsic efficiency, ε_int(E_sc),
• energy dependence of the photon absorption probability in the detector, f_abs(E_sc),
• energy blurring caused by the detector, f_G(E_sc), and
• energy window setting used during acquisition, f_win(E_sc).
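A deliberately simplified numeric version of this estimate can be sketched by keeping only the first factor, the Klein-Nishina single-scatter spectrum, together with the energy window, and neglecting the detector efficiency, absorption probability and energy blurring. This Python fragment is illustrative only (the thesis implementation is in MATLAB, and the full estimate includes the neglected factors):

```python
def f_kn(e_sc, e0=511.0, mec2=511.0):
    """Klein-Nishina single-scatter spectrum in scattered energy
    (unnormalized); d = mec2*(1/E_sc - 1/E_0) equals 1 - cos(theta)."""
    d = mec2 * (1.0 / e_sc - 1.0 / e0)
    return e0 / e_sc + e_sc / e0 + d * d - 2.0 * d

def window_fraction(e_lower, n=2000):
    """Fraction of the 511 keV single-scatter spectrum (170-511 keV) above
    a lower threshold, ignoring detector efficiency and energy blurring."""
    def integral(a, b):
        h = (b - a) / n
        return h * sum(f_kn(a + (k + 0.5) * h) for k in range(n))
    return integral(e_lower, 511.0) / integral(170.0, 511.0)
```

Raising the threshold from 250 keV to 350 keV shrinks this accepted fraction, consistent with the stronger scatter rejection of the higher window discussed in the text.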
Each of these factors is examined in turn and built into the estimate. The energy distribution of single scattered photons, f_KN(E_sc), is given by the Klein-Nishina equation cast in terms of the energies of the scattered photons, E_sc:

dσ/dE_sc = (π r_e² m_e c² / E_0²) { E_0/E_sc + E_sc/E_0 + [m_e c²(1/E_sc − 1/E_0)]² − 2 m_e c²(1/E_sc − 1/E_0) }    (4.16)

where m_e c² = electron rest mass energy; E_0 = unscattered photon energy; E_sc = scattered photon energy.

This distribution is shown in Figure 4-8 for energies from the backscatter peak (170 keV) to the full energy (511 keV) for annihilation photons.

Figure 4-8 The differential cross section vs. scattered energy for object-scattered 511 keV photons, from the backscatter energy (170 keV) to full energy (511 keV). The vertical red lines show two common lower level thresholds used in PET.

The efficiency of detecting these photons depends on E_sc, because the attenuation coefficient of the scintillator is energy dependent. As well, since the scattered photons have a range of incident angles, the effective detector thickness for individual photon paths, x_det,eff in (4.10), must be replaced by an average effective value <x_det>. The intrinsic efficiency is then a function of the scattered photon energy, E_sc:

ε_int(E_sc) = 1 − e^(−μ_det(E_sc) <x_det>)    (4.17)

The product of (4.16) and (4.17) gives the energy distribution of the object-scattered photons that interact in the crystal. The absorption fraction, f_abs(E_sc), is the fraction of the interacting photons that deposit all of their energy in the detector, which depends on the photofraction of the detector. Although, in general, this is not a simple function of E_sc, in the first approximation f_abs(E_sc) is simply the probability of photoelectric absorption.
Thus, the total energy distribution of scattered photons that are fully absorbed in the detector is:

f_KN(E_sc) ε_int(E_sc) f_abs(E_sc)    (4.18)

However, this distribution is blurred by the energy resolution of the detector. This may be modeled by convolving (4.18) with a Gaussian function, f_G(E), that models the shape of the photopeak:

f_G(E, E_0) = (1 / (√(2π) σ(E_0))) e^(−(E − E_0)² / (2σ(E_0)²))    (4.19)

where E_0 = reference energy; σ(E_0) = FWHM(E_0)/2.35, and FWHM is the width of the photopeak. This function is made more accurate by using an energy dependent width, given that FWHM ∝ 1/√E. In general, energy blurring in the detector flattens the distribution and may change the fraction of accepted photons depending on the energy threshold. In summary, the total energy distribution of the object-scattered photons that are fully absorbed in the detector, F_scat,abs(E_sc), is:

F_scat,abs(E_sc) = [f_KN(E_sc) ε_int(E_sc) f_abs(E_sc)] ⊗ f_G(E, E_sc)    (4.20)

Finally, the fraction of the photons that are absorbed in the detector and counted in an energy window, f_scat,abs, may then be estimated by integrating (4.20) over the bounds of the energy window and normalizing to the total integrated scattering cross section:

f_scat,abs = [∫ from E_lower to E_upper of F_scat,abs(E_sc) dE_sc] / [∫ from 170 keV to 511 keV of f_KN(E_sc) dE_sc]    (4.21)

where E_upper (E_lower) = upper (lower) energy threshold.

The impact of the detected object-scattered photons may be seen by simplifying the photon survival calculation to a uniform attenuation situation:

P_surv = e^(−μ_obj x_obj)    (4.22)

where μ_obj = attenuation coefficient of the object material; x_obj = pathlength through the object.
Neglecting the object-scattered photons, the total photon detection probability (4.1) is:

P_surv ε_det = e^(−μ_obj x_obj) (1 − e^(−μ_det x_det,eff))    (4.23)

Adding the effects of the detected scattered photons, (4.23) becomes:

P_surv ε_det = e^(−μ_obj x_obj) (1 − e^(−μ_det x_det,eff)) + (1 − e^(−μ_obj x_obj)) f_scat,abs    (4.24)

If the broad-beam correction factor is used to account for the scatter, then (4.24) becomes:

P_surv ε_det = e^(−κ_broad μ_obj x_obj) (1 − e^(−κ_det μ_det x_det,eff))    (4.25)

The value of κ_broad is then estimated by equating (4.24) and (4.25), and solving for κ_broad for a given value of μ_obj x_obj. This calculation has been done for the MicroPET R4 (Section 5.4.4). However, this calculation provides a lower limit estimate because it only accounts for the singly scattered photons that are fully absorbed in the detector by a single interaction. It does not include other relevant processes, such as multiple Compton scatter or Compton-absorption in the detector, that increase the number of counted photons. Therefore, the estimated value of κ_broad is further adjusted to fit the empirical data. The dependence of the randoms calculation accuracy on the value of κ_broad was also investigated experimentally (Section 6.3.3). A single value of κ_broad was used in the randoms calculation and is expected to give good results for small objects (such as those tested in this work). In principle, the optimal value of κ_broad changes with object size. However, scaling the calculated singles to the measured singles rates reduces the dependence of the results on the exact value of κ_broad. The above derivation may also be used to determine the path-dependent values of κ_broad, which may lead to improvements if the randoms correction is applied to larger objects (Section 7.2.4).

4.6.3 The Spatial Distribution of the Object-Scattered Photons

In the previous section, the fraction of the object-scattered photons that are detected was described and a broad-beam correction was applied to the photon survival calculation.
However, the spatial distribution of the detected scattered photons was not discussed. Since the scattering process alters the paths of the photons, the distribution of detected single photons could be affected. Following the paths of every scattered photon in a given study would require a detailed Monte-Carlo simulation. In the model-based calculation, the effects of the photon displacement may be determined based on some general observations [87]. The photons scattered from the beam along r_i,A are effectively distributed in a cone around r_i,A, originating at the scattering location, x_sc, as shown in Figure 4-9. Thus, the overall effect of scatter is that photons scattered along r_i,A are counted by detectors in an area centred on detector A.

Figure 4-9 Scattered photons are scattered into a cone around the ray between the emission point and the detector. Some of the scattered photons hit other detectors.

The energy thresholds not only cause count losses due to the rejection of object-scattered photons, they also effectively determine the maximum and average scattering angles of photons that are counted. The relationship between scattering angle, θ_sc, and scattered photon energy, E_sc, is given by:

θ_sc = arccos[1 + (511/E_0 − 511/E_sc)]    (4.26)

where E_0 = original photon energy (keV); E_sc = scattered photon energy (keV).

From (4.26), an energy threshold of 250 keV corresponds to a maximum scattering angle θ_sc,max = 93° and a threshold of 350 keV corresponds to θ_sc,max = 58°, ignoring energy blurring. Thus, the scattering cones of the energy-accepted photons have these half-angles. The angular distribution of Compton scattered photons is given by the common form of the Klein-Nishina equation:

dσ/dΩ = (r_e²/2) · (1 + cos²θ_sc) / [1 + α(1 − cos θ_sc)]² · { 1 + α²(1 − cos θ_sc)² / [(1 + cos²θ_sc)(1 + α(1 − cos θ_sc))] }    (4.27)

where r_e = the classical electron radius; α = E/m_e c² (ratio of photon energy to electron rest mass energy); θ_sc = scattering angle; dΩ = 2π sin θ_sc dθ_sc.
Figure 4-10 shows the normalized differential and integral scattering cross sections for 511 keV photons. The cross-section is strongly biased to small scattering angles; thus, the photons have a higher probability of being detected closer to detector A. Therefore, although the maximum scattering angles are 58° and 93°, respectively, for thresholds of 350 keV and 250 keV, the average scattering angles are lower. In the case of a 350 keV energy threshold, 50% of the detected scattered photons were scattered into a cone with a half angle of 34° (45° for 250 keV). However, in absolute terms, the number of scattered photons that hit an individual detector is smaller at larger scattering angles, because the distribution is spread over a larger area, reducing the beam density.

Figure 4-10 The normalized differential (solid) and integral (dashed) cross sections for the Compton scattering of 511 keV photons at angles from 0 to 180°.

While it may be possible to calculate the distribution of scattered photons, including the necessary additional corrections for the differences in solid angle, survival and detection efficiency for the scattered photons excessively increases the complexity and computational burden of the calculation. However, this calculation is not necessary because the effects of scatter on the singles distribution generally cancel. Any photons that scatter out of the beam along the path r_i,A, and therefore do not hit detector A, hit one of the surrounding detectors. Likewise, some of the photons that scatter from neighbouring beams hit detector A, as shown in Figure 4-11. The total number of photons incident on the tomograph detectors is largely preserved. Therefore, no further modification to the singles distribution is used beyond the broad-beam correction.
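The threshold-to-angle relationship of (4.26) is easy to check numerically. A minimal Python sketch (illustrative; the thesis implementation is in MATLAB):

```python
import math

def scatter_angle_deg(e_sc, e0=511.0, mec2=511.0):
    """Compton scattering angle implied by the scattered photon energy,
    per (4.26): cos(theta) = 1 - mec2*(1/E_sc - 1/E_0). Energies in keV."""
    cos_t = 1.0 - mec2 * (1.0 / e_sc - 1.0 / e0)
    return math.degrees(math.acos(cos_t))
```

Evaluating this at the two common lower thresholds reproduces the maximum accepted scattering angles quoted above: roughly 93° for a 250 keV threshold and roughly 58° for a 350 keV threshold.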
Figure 4-11 Some of the photons scattered along r_i,A hit detector X and some scattered along r_i,X hit detector A. This calculation assumes that the losses and gains of counts due to scatter in neighbouring photon beams are roughly balanced.

4.7 Calculating, Scaling and Subtracting the Randoms Distributions

4.7.1 Calculating Relative Count Contributions

Once the photon detection probabilities per detector (4.1) are calculated for an activity sample point, the singles and trues count rate contributions are simply the product of the detection probability and the activity of the sample point. Therefore, the contribution of the radioactivity at i to the singles rate at detector A is:

R_s,i,A = P_det,i,A A_i    (4.28)

where P_det,i,A = probability of a single event on detector A due to radioactivity at i (4.1); A_i = radioactivity at sample point i.

The total singles rate at detector A, R_s,A, due to all the activity sample points is then:

R_s,A = 2 Σ_{i=1}^{n} R_s,i,A    (4.29)

where n = number of activity sample points, and the factor 2 accounts for the emission of two photons per annihilation.

Likewise, the calculated contribution to the trues rates along the LOR defined by the line r_i,A is:

R_t,i,LOR(i,A) = P_det,i,A P_det,i,B A_i    (4.30)

where LOR(i,A) = the LOR defined by the line r_i,A. The total trues rate for LOR(i,A) is the sum of the contributions from all the sample points.

4.7.2 Scaling the Rates

The calculated singles and trues rate distributions described thus far are only relative values. Therefore, the calculated detector singles rates require scaling to absolute values before the randoms distributions are calculated. The simplest and most robust scaling method is to scale the total calculated singles to the global, average singles rate measured by the tomograph. In this case, the scaling factor for singles and trues is:

f_s = R_s,meas / R_s,calc    (4.31)

where R_s,meas = measured average global singles rate.
R_s,calc = sum of the calculated singles rates of all detectors = Σ_A R_s,A.

This one scaling factor is applied to all the calculated singles rates in each detector to give the scaled singles rates R'_s,A:

R'_s,A = f_s R_s,A    (4.32)

This method of scaling requires that the measured global singles rate be corrected for detector dead-time. There are potential problems if the dead-times associated with singles rates and coincidence rates differ due to differences in their processing. Errors in the singles rate dead-time correction lead to scaling inaccuracies, especially at higher count rates, and impose a limit on the maximum count rate for which the randoms correction would be effective. However, for both singles and coincidences, the scintillator decay time dominates. Therefore, the dead-time correction factor calculated by the tomograph software may be used to correct the singles rates, albeit in a modified form. The effect of high dead-time on the accuracy of the model-based randoms correction is tomograph dependent, and must be determined experimentally.

4.7.3 Correction for LSO Radioactive Background on the Singles Calculation

The use of scintillators which produce radioactive background counts (e.g. LSO) requires (4.31) and (4.32) to be modified. The singles calculation assumes that all the singles events are due to the radioactivity distribution. However, the measured rates have two components: one due to radioactivity in the subject (which is time dependent) and the other due to the background radioactivity in the scintillator material (which is time independent). Since the singles distribution calculation assumes that the counts are due only to the activity in the subject, the scaling factor should be:

f_s,bgd = (R_s,meas − R_s,bgd) / R_s,calc    (4.33)

where R_s,bgd = measured total singles rate due to background radioactivity.
Then, the singles rates in each detector, A, are scaled by:

R'_s,calc,A = f_s,bgd R_s,calc,A + R_s,bgd,A    (4.34)

where R_s,calc,A = calculated singles rate in detector A; R_s,bgd,A = measured background singles rate per detector.

Since the trues rate from the background radioactivity is generally negligible in realistic scanning situations, the trues rates are simply scaled by:

R'_t,LOR(i,A) = f_s,bgd R_t,LOR(i,A)    (4.35)

4.7.4 Calculating Randoms from Singles

Once the singles rate distribution is scaled, the average randoms rate for each pair of detectors is computed directly from the calculated average detector singles rates using (3.3). The parameters of the line of response are also calculated for each detector pair based on the relative detector positions in the tomograph. The calculated randoms rate is then stored in the randoms sinogram at the calculated LOR coordinates. This is done for all possible detector pairs and binned in a 3D data set with the same data reduction parameters (span and mashing) as the acquired data set. The calculated randoms sinograms now contain average randoms rates. However, randoms subtraction uses sinograms of the total randoms counts for a given acquisition time. Converting average randoms rates to total randoms counts requires a decay correction that takes into account the quadratic dependence of the randoms rates on activity, as well as the acquisition time. Assuming that the spatial distribution of radioactivity does not change significantly during the duration of the frame, the total radioactivity, singles rate and trues rates all decrease exponentially with time due to the decay of the radionuclide:

R(t) = R_0 e^(−λt)    (4.36)

where λ = decay constant of the radionuclide; R_0 = initial trues or singles count rate.

The randoms are related to the square of the singles rates and, therefore, decrease as:

R_r(t) = R_r0 e^(−2λt)    (4.37)

where R_r0 = initial randoms event rate.
When calculating the average randoms rates from the average singles rates, a decay correction factor, f_r,decay, is required to correct for the differences in how the randoms and singles rates change with the decay of the radioactivity. This is found by integrating (4.36) and (4.37) over the acquisition times, yielding the singles and randoms decay factors. Then, f_r,decay is the ratio of the randoms decay factor to the square of the singles decay factor:

f_r,decay = [(1 − e^(−2λT)) / (2λT)] / [(1 − e^(−λT)) / (λT)]²    (4.38)

where T = scan acquisition duration.

This correction is small for scans with acquisition times that are short compared to the half-life of the radionuclide. For example, for a 30 minute scan with 18F, the correction is 0.3% and for a 60 minute scan it is 1.2%. For a three hour scan the correction is much larger, 10.5%, and for a 12 hour scan it is 230% [80]. This decay correction must be modified to take into account scintillator background radioactivity. In the presence of such a background, the singles rate does not follow (4.36) but changes with time as:

R(t) = (R_0 − R_b) e^(−λt) + R_b    (4.39)

where R_b = background singles count rate.

Therefore, f_r,decay is calculated using (4.39) rather than (4.36) because the part of the randoms due to the background should not be decay corrected. Finally, the total randoms counts for each detector pair, C_r,total, are calculated by:

C_r,total = f_r,decay R_r T_acq    (4.40)

Thus, the randoms calculation produces a scaled and spatially variant randoms distribution. The randoms counts distributions are then subtracted directly from the measured prompts sinograms. The resulting trues sinograms are attenuation corrected, normalized and reconstructed.

Chapter 5

An Algorithm for Model-Based Randoms Correction for the MicroPET R4 Tomograph

5.1 Development of the Randoms Correction Algorithm

5.1.1 Chapter Overview

This chapter details the specific algorithm and implementation of the randoms correction calculation. The chapter begins with a description of the randoms calculation algorithm.
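As context for the algorithm described in this chapter, the core rate manipulations of Section 4.7 can be summarized in a brief sketch. This Python fragment is illustrative only (the thesis implementation is in MATLAB); the 2τ R_A R_B form used for the pairwise randoms rate is the standard singles-based estimate, assumed here to be the form of equation (3.3), and the τ value in the usage check is a hypothetical coincidence window:

```python
import math

def scale_singles(calc_singles, measured_global_rate):
    """Scale relative calculated detector singles to the measured global
    singles rate, per (4.31)-(4.32)."""
    f_s = measured_global_rate / sum(calc_singles)
    return [f_s * r for r in calc_singles]

def randoms_rate(r_a, r_b, tau):
    """Average randoms rate for a detector pair from its singles rates,
    R_r = 2 * tau * R_A * R_B (the standard singles-based estimate)."""
    return 2.0 * tau * r_a * r_b

def randoms_decay_factor(half_life_min, t_acq_min):
    """Decay correction per (4.38): randoms decay factor divided by the
    squared singles decay factor, neglecting scintillator background."""
    lam = math.log(2.0) / half_life_min
    s = (1.0 - math.exp(-lam * t_acq_min)) / (lam * t_acq_min)
    r = (1.0 - math.exp(-2.0 * lam * t_acq_min)) / (2.0 * lam * t_acq_min)
    return r / s ** 2
```

With the 18F half-life (about 109.8 min), the decay factor reproduces the magnitudes quoted in Section 4.7.4: a correction of roughly 0.3% for a 30 minute scan and roughly 10.5% for a three hour scan.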
Then, the details of the design and performance of the MicroPET R4™ tomograph, used to test the randoms correction, are described. The chapter ends with a detailed discussion of how the MicroPET R4 is modeled in the single photon detection calculation.

5.1.2 Introduction

A computer program was written to perform the randoms correction using the model-based method described in Chapter Four. This method is, in theory, applicable to any PET tomograph provided that the design and performance characteristics of the specific tomograph are accurately specified in the calculations. Originally, the randoms correction was intended to be used for dual head coincidence imaging (DHCI) systems [14], specifically the Philips/ADAC Vertex Molecular Coincidence Detector with Attenuation Correction™ (MCD/AC). The Vertex MCD/AC is a dual head SPECT camera with the NaI(Tl) crystal thickness increased from 3/8" to 5/8", coincidence electronics, axial septa and a singles-based transmission scanning system. However, this camera did not measure delayed coincidences and used only a simple background subtraction to correct for randoms and scatter. Our research group performed a number of studies aimed at defining and improving the performance of this class of devices [88][89][90]. Work also began on the development of calculated scatter and randoms corrections for the MCD/AC camera [91][92]. However, DHCI systems became obsolete, given their inferior performance compared to dedicated PET, and have been superseded in common usage by the widely available dedicated PET and PET/CT systems. It was, therefore, decided to develop the model-based randoms correction method for use on a Siemens MicroPET® R4.
This dedicated PET tomograph is designed to image small animals with high resolution and high sensitivity in an axial field of view that is large relative to the size of the subjects. The MicroPET was ideal for validating the randoms correction because it stores prompts and delays separately, so that measured randoms distributions are easily extracted. As well, since small objects are imaged, the effects of single photon scatter are less significant. Section 5.3 describes the design and performance of the MicroPET R4 in detail, and Section 5.4 describes the modeling of this tomograph in the singles calculation.

5.1.3 Program Development

The randoms correction program, "rand_corr", was required to perform the calculations in a clinically reasonable time (~minutes) using a single CPU PC with 1 GB of memory. Although developed in a Windows XP environment, it was also desirable that the code be portable to other platforms. As well, the code needed to be flexible enough to allow different tomograph designs to be modeled with only minor changes. It was, therefore, decided to write the model-based randoms correction program in MATLAB™ (© 1984-2007, The Mathworks™, Inc.).
MATLAB is a high level programming language optimized to perform calculations on large arrays of multidimensional data. Although MATLAB is a script language, code optimization allows execution to be very fast, approaching the speed of compiled C code. As well, a MATLAB compiler is available which translates the MATLAB script files into C code for compiling into an executable file. MATLAB is also available for different computer platforms with only minor compatibility issues, so that the code is portable. MATLAB includes powerful editing, debugging and optimization tools. It has tools to analyze data, to build graphical user interfaces (GUIs) and to produce presentation quality graphics of two and three dimensional data sets. MATLAB was used to create rand_corr, as well as a number of other programs used in the development, testing and analysis of the randoms correction method. The analysis and visualization tools potentially have wider applications beyond this work and may be useful in other related research.

First, a graphical user interface was developed to run the rand_corr program. It simplifies the control of all the calculation parameters. Access to debugging and visualization tools is also simplified.
The rand_corr GUI is shown in Figure 5-1.

Figure 5-1 Graphical user interface (GUI) for rand_corr. This interface allows control of all the calculation parameters and the selection of the data sets used for the randoms correction.

A set of programs for data handling, processing, visualization and analysis in a graphical environment was created under the name CODEX (Coincidence Data EXaminer). These were developed because no other appropriate data analysis package was available. CODEX includes graphic tools to display, analyze and compare 2D data sets (sinograms, projections or singles distributions), as shown in Figure 5-2.
Figure 5-2 The graphical 2D data analysis tool used in CODEX. A 2D sinogram is displayed.

As well, a 1-D profile data analysis/display tool was developed to simultaneously display radial, azimuthal and axial profiles of multiple data sets as raw counts or as ratios, percent differences or other figures of merit between the different data sets. This program is shown in Figure 5-3.

Figure 5-3 The 1-D profile analyze/display tool used in CODEX. This is used to analyze and compare the 1-D profiles from different data sets.

5.2 Structure of the Randoms Correction Code

5.2.1 Overview

The basic mathematics of the randoms correction calculation is described in detail in Chapter 4.
The implementation of the calculation is described in detail in this section. The overall structure of the rand_corr code is shown in Figure 5-4:

1. initialization
2. choose and load the activity and attenuation images of the data set to be corrected
3. generate the activity sample points
4. calculate the contribution of each activity point to the singles rate at each detector
5. superimpose the block structure on the singles distribution
6. scale the calculated singles rates to the measured global singles rates
7. calculate the randoms distribution from the singles distribution
8. subtract the calculated randoms sinograms from the prompts sinograms

Figure 5-4 The structure of the random correction algorithm.

5.2.2 Initialization of Calculation Parameters and Loading Image Data

The rand_corr program uses a number of parameters that govern the randoms calculation, the tomograph detector model, and various other aspects of the overall calculation. The "Detector Parameters" describe the tomograph model and include the number of detectors per ring, number of rings, coincidence timing window, energy window and the detection efficiency correction factor. The "Randoms Scaling Parameters" are used in calculating the absolute randoms counts and include the acquisition time, radionuclide, measured singles rates and background singles rate. The "LOR Binning Parameters" determine the binning of the trues and randoms sinograms produced by the calculation. These include the mash, span, maximum ring difference, number of radial and azimuthal bins, number of slices and the 2D rebinning methods. The sinogram binning should match the sinogram binning of the acquired data sets to be corrected. The "Radioactivity Object Sampling Parameters" are used in the generation of the activity sample points, and include the sampling method, percent threshold, and sampling density or number of sample points. The "Display Parameters" set flags to graphically display information used during the calculation for debugging or presentation purposes.
The information that may be shown includes plots of the generated activity sample points and of the attenuation sample points used in the photon survival calculations.

When the data set to be randoms corrected is selected, the header files of the preliminary reconstructed images are read, and the header information and the image data are loaded into memory using a widely used MATLAB function, read_asipro.m [93]. The voxel values may be accessed by a 3D coordinate or a single index. Acquisition and/or tomograph information from the header files that is relevant to the randoms calculation (e.g. study acquisition parameters, calculated correction factors, measured count rates, reconstruction and processing parameters) is used to set the calculation parameters in rand_corr. However, the calculation parameters may be changed by the user before the calculation.

5.2.3 Generation of Activity Sample Points

The first step in the calculation is the random generation of the positions of the activity sample points. Three sets of random numbers, bounded by the tomograph FOV, give the x, y and z coordinates. If the activity distribution extends beyond the FOV (for example, a long digital phantom for testing, or reconstructed images from multiple bed positions knitted together), the points are generated outside the FOV. The random point positions are stored in three vectors corresponding to the x, y and z coordinates.

The activity at each of these sample points is then determined by finding the value of the voxel in the activity distribution image that corresponds to the position of the point. The activity values are compared to the threshold, and the sample points with values below the threshold are removed from the set used in the calculation of the singles rate distribution.
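The sampling and thresholding steps above can be sketched in a few lines. This is an illustrative NumPy translation, not the thesis's MATLAB implementation; the function name, the argument names and the percent-of-maximum threshold convention are assumptions:

```python
import numpy as np

def generate_sample_points(activity, voxel_size, n_points, threshold_frac, rng=None):
    """Randomly sample points inside the image volume and keep those whose
    containing voxel exceeds an activity threshold (illustrative names)."""
    rng = np.random.default_rng(rng)
    nx, ny, nz = activity.shape
    extent = np.array([nx, ny, nz]) * np.array(voxel_size)
    # Three sets of uniform random numbers give the x, y and z coordinates.
    pts = rng.uniform(0.0, extent, size=(n_points, 3))
    # Look up the activity in the voxel containing each point.
    idx = np.floor(pts / voxel_size).astype(int)
    vals = activity[idx[:, 0], idx[:, 1], idx[:, 2]]
    # Reject points below the threshold (here, a fraction of the maximum value).
    keep = vals >= threshold_frac * activity.max()
    return pts[keep], vals[keep]
```

With a seeded generator the same point set is reproduced on every run, which is convenient when debugging the downstream singles calculation.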
5.2.4 Calculation of the Contribution of Each Activity Sample Point

The contributions of each activity sample point to the detector singles rates (4.28) and trues rates (4.30) are then calculated while looping over all the sample points. The rand_corr code is vectorized to simultaneously calculate the singles and trues contributions for all the detectors in one ring. Therefore, the calculation loops over all detector rings. The flowchart for this routine is shown in Figure 5-5. Each element of the flowchart includes a reference to the appropriate equation from Chapter 4. As well, in this section, the primary photons are referred to as the "blue" photons, while their annihilation partners are the "pink" photons.

For each sample point, the routine: calculates the blue geometric efficiencies and incident angles (4.2)(4.4); calculates the detector efficiencies (4.14); determines the paths from the point to each detector and samples the attenuation object (4.7); calculates the blue and pink photon survival (4.8); calculates the single and true probabilities (4.28)(4.30); calculates the singles and trues rates contributions (4.28)(4.30); and adds the contributions to the singles and trues distributions.

Figure 5-5 The structure of the routine in 'rand_corr' that calculates the contribution of an activity sample point to the singles and trues distributions. In each element is a reference to the corresponding equation. The "blue" photon refers to the primary photon under consideration, while the "pink" photon is the annihilation partner.

The first step is the simultaneous computation of the distance (4.3), incident angle (4.4) and the line parameters (4.7) from the activity sample point to each detector in the ring. These values are stored as vectors and used for calculating the solid angles (4.2) and detector efficiencies (4.14) of the blue photons, which are also stored as vectors.
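The per-ring vectorized geometry step can be sketched as follows. This is a schematic NumPy illustration, not the thesis code: detector positions are idealized as points on a ring, and the small-detector approximation Ω ≈ A·cosθ/(4πr²) stands in for the solid angle formula (4.2):

```python
import numpy as np

def ring_geometry(point, ring_radius, ring_z, n_det, crystal_area):
    """For one activity sample point, compute the distance, incidence cosine and
    solid-angle fraction to every detector in one ring at once (vectorized),
    mirroring the per-ring loop structure described in the text."""
    point = np.asarray(point, float)
    phi = 2.0 * np.pi * np.arange(n_det) / n_det           # detector azimuths
    det = np.stack([ring_radius * np.cos(phi),
                    ring_radius * np.sin(phi),
                    np.full(n_det, ring_z)], axis=1)       # detector centres
    vec = det - point                                      # point-to-detector vectors
    r = np.linalg.norm(vec, axis=1)                        # distances, cf. (4.3)
    # Inward detector normals point toward the scanner axis.
    normal = np.stack([-np.cos(phi), -np.sin(phi), np.zeros(n_det)], axis=1)
    cos_theta = np.abs(np.sum(vec * normal, axis=1)) / r   # incidence cosine, cf. (4.4)
    # Small-detector approximation for the geometric (solid angle) efficiency.
    omega_frac = crystal_area * cos_theta / (4.0 * np.pi * r ** 2)
    return r, cos_theta, omega_frac
```

All three returned arrays are vectors over the detectors in the ring, matching the storage scheme described above.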
Then, the line parameters are used to determine the axial location at which the pink photons intersect the tomograph detector ring, in order to calculate the probability of detecting the pink photons (4.5). The line parameters are used to generate the attenuation sample points (4.7) from the activity sample point position through the attenuation object to each detector in the ring, as shown in Figure 4-6. This produces a set of two dimensional data structures with one dimension the detector in the ring, and the other the attenuation sample points. Attenuation sample points that extend beyond the bounds of the attenuation object are rejected, and the lines truncated. The attenuation sample point coordinates are then converted to object indices and used to look up the attenuation coefficients along the line and calculate the interval photon survival, P_i = e^(−μ_i ds). The product of the interval photon survival values along the lines yields the total photon survival to each detector (4.8). Calculating the total photon survivals collapses the 2D data structure into a vector, with each element the total photon survival to a different detector. The pink photon survival is calculated identically.

The solid angle, photon survival and detection efficiency vectors are then used to calculate the contributions of the activity sample point to the singles rate (4.28) and trues rate (4.30) of each detector in the tomograph ring. The singles rates contributions for this detector ring are then added to the singles distributions. The trues rates contributions are added to the trues sinograms for the LORs that are calculated from the line parameters. This calculation is done for each detector ring and then repeated for each activity sample point until the contributions from all activity sample points are added. The result is a two dimensional singles distribution (indexed by crystal position and detector ring (Figure 5-6)) and a 3D trues data set.
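The interval-survival bookkeeping described above can be sketched for a single point-to-detector line (in rand_corr the same computation is vectorized over all detectors in a ring). A minimal NumPy sketch, with illustrative names and nearest-voxel lookup assumed:

```python
import numpy as np

def photon_survival(mu_image, voxel_size, start, end, ds=0.3):
    """Survival probability of a photon travelling from `start` to `end`
    through an attenuation image (cm units assumed): sample mu at interval
    ds along the line and accumulate the interval survivals exp(-mu_i * ds)."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    length = float(np.linalg.norm(end - start))
    n = max(int(length / ds), 1)
    t = (np.arange(n) + 0.5) / n                  # midpoints of the n intervals
    pts = start + t[:, None] * (end - start)      # attenuation sample points
    idx = np.floor(pts / voxel_size).astype(int)
    # Reject sample points outside the attenuation object (line truncation).
    shape = np.array(mu_image.shape)
    inside = np.all((idx >= 0) & (idx < shape), axis=1)
    mu = mu_image[idx[inside, 0], idx[inside, 1], idx[inside, 2]]
    # Product of interval survivals = exp(-sum(mu_i) * ds).
    return float(np.exp(-mu.sum() * (length / n)))
```

For a uniform medium this reduces to the usual narrow-beam attenuation e^(−μL), which makes the routine easy to spot-check.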
5.2.5 Adding Block Structure, Scaling, and Calculating Randoms

Once the total singles rates distribution has been calculated, the singles rates are multiplied by systematic relative detector efficiencies (Section 4.5.3), producing the "block structure" shown in Figure 5-6. The calculated trues are not corrected for the relative detector efficiencies because the identities of both crystals involved in a coincidence are not explicitly calculated.

Figure 5-6 The calculated singles distribution to the 32 x 192 arrays of the detectors in the tomograph. The singles distributions before (above) and after (below) correction for relative individual detector efficiencies are shown.

The calculated singles distribution and trues sinograms are then scaled to the measured global average singles rate and corrected for the radioactive background in LSO. Thus, the singles and trues distributions contain absolute values of count rates based on the rates measured during the scan (Section 4.7.2).

Next, the singles distribution is used to calculate the sets of randoms sinograms. The randoms rate per detector pair is calculated from the singles rates in each detector. The randoms calculation is performed for one row of radial bins at a time, each corresponding to one azimuthal angle. The singles rates in a half-ring of detectors are stored in one vector and the rates in the opposite half-ring are stored in another vector. The products of the opposing detector singles rates give the randoms rates for the LORs between the detector pairs. The LORs are then offset by a single detector, and the randoms rates for the new LORs are calculated. These LORs are assumed to be interleaved between the first set of LORs, increasing the radial sampling.
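The half-ring product scheme can be sketched for one sinogram row. The pairing of detector indices below is schematic, but the core operation matches the text: the randoms rate for a detector pair is the product of the two singles rates scaled by twice the coincidence window (the standard estimate r_ij = 2τ·s_i·s_j), and the single-detector-offset LORs are interleaved between the direct ones:

```python
import numpy as np

def randoms_row(singles, tau, view=0):
    """One sinogram row of randoms rates from per-detector singles rates (cps),
    using opposing half-rings plus single-detector-offset interleaving.
    tau is the coincidence timing window in seconds; index pairing is schematic."""
    n = len(singles)
    half = n // 2
    # Indices of the two opposing half-rings for this azimuthal view.
    a = (view + np.arange(half)) % n
    b = (view + half + np.arange(half)) % n
    direct = 2.0 * tau * singles[a] * singles[b[::-1]]
    # Offset one half-ring by a single detector for the interleaved LORs.
    b_off = (b + 1) % n
    offset = 2.0 * tau * singles[a[:-1]] * singles[b_off[::-1][:-1]]
    # Interleave: n/2 direct + (n/2 - 1) offset = n - 1 radial bins.
    row = np.empty(n - 1)
    row[0::2] = direct
    row[1::2] = offset
    return row
```

For a uniform singles distribution every bin reduces to 2τs², which is a convenient sanity check on the bookkeeping.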
This follows the binning scheme used by most modern tomographs. Thus, for a ring of n_det detectors, the randoms rates are calculated first for n_det/2 LORs using the opposing half-rings and then for the n_det/2 − 1 interleaved LORs. This yields a total of (n_det − 1) LORs per azimuthal angle. The detector ring pairs and the LORs are shown in Figure 5-7.

Figure 5-7 The calculation of the randoms for the even numbered LORs (left) and the odd numbered interleaved LORs (right). The dashed line shows the electronic FOV.

All the rows of the randoms sinograms are then calculated in like manner by rotating the azimuthal angle of the LORs and recalculating based on the singles rates of the detector pairs in the new half-rings. In this binning scheme, n_det/2 azimuthal angles are used. The calculation continues until the randoms are calculated for the LORs at all possible angles, yielding a (n_det − 1) x (n_det/2) randoms sinogram. Randoms sinograms are calculated for single detector rings or pairs of detector rings. Since a 3D data set is required, a randoms sinogram is calculated for each possible pair of detector rings. This yields a (n_det − 1) x (n_det/2) x n_rings x n_rings data set, where n_rings is the number of detector rings in the tomograph.

Once all the randoms sinograms are calculated, the radial dimensions of the trues and randoms sinograms are truncated to match the size of the electronic FOV. The sinograms may then be rebinned and/or reformatted to match the structure of the data to be randoms corrected. Finally, the values in the randoms sinograms are converted to total randoms counts for an acquisition time frame (Section 4.7.4). The calculated randoms counts may then be directly subtracted from the measured prompts sinograms to correct them for randoms.

5.2.6 Optimization of the Code

Two methods of code optimization significantly improve the execution speed in MATLAB.
The first is the pre-allocation of memory for large data arrays. The randoms calculation involves the use of large data arrays. For example, the calculated 3D randoms sinograms for the MicroPET R4 are four dimensional and contain about 18 million bins (191 x 96 x 32 x 32). Dynamically reallocating memory during the calculation would increase the computation time considerably. Pre-allocating memory to store large arrays before the calculation of the values of the array elements resolves this computational problem. The speed improvement is considerable.

The second important code optimization is performed by vectorizing the calculation code. As its full name, MATrix LABoratory, suggests, MATLAB is optimized to do simultaneous calculations on the elements of data structures such as vectors, matrices and multidimensional arrays. Using array calculations in the place of loops significantly reduces the calculation time. Vectorization is used extensively in the calculation of the singles distribution and in the calculation of the randoms sinograms.

5.3 The MicroPET R4 Scanner

5.3.1 MicroPET R4 Tomograph and Detector Design

The imaging of small animals for research has been advanced by the development of MicroPET tomographs [94]. The MicroPET R4 is a small aperture, large axial FOV dedicated tomograph used for imaging mice and rats, shown in Figure 5-8. It has a 120 mm central aperture defined by 25 mm thick lead end-shields. The detector ring is 148 mm in diameter, but the transverse FOV is electronically limited to a 100 mm diameter. The axial field of view is 78 mm. It is based on the original MicroPET design [95] and is similar to the MicroPET P4 small primate scanner [96].

Figure 5-8 The MicroPET R4 small animal tomograph. The aperture is 120 mm and axial FOV is 78 mm, large enough to accommodate two mice or one large rat.

The MicroPET R4 detectors use a LSO(Ce) scintillator crystal block 19 x 19 mm² by 10 mm thick.
The block is cut into an 8 x 8 array yielding 64 crystals, each 2.1 x 2.1 x 10 mm³. Since the centre-to-centre crystal spacing is 2.4 mm, the overall volumetric packing fraction is 76.5%. The cuts are all 9 mm deep, leaving 1 mm to structurally support the block. There is no depth of interaction encoding. The scintillator light is carried by a 100 mm long multi-clad fibre optic bundle to a position sensitive photomultiplier tube (PS-PMT). The fibre optic coupling permits the larger PS-PMT to be positioned away from the smaller detector block to allow closer packing of the detector blocks. The PS-PMT (Hamamatsu R5900-C8) has eight anodes tied together through a resistor chain to give four outputs. The signal outputs are fed into preamplifiers and integration circuitry on the four boards at the backside of the PMT. The basic detector block of the MicroPET R4 is shown in Figure 5-9.

Figure 5-9 The detector block unit from the MicroPET R4 tomograph. An LSO cut-block scintillator is optically coupled to a position sensitive PMT through a multi-clad fibre-optic bundle. Four boards at the backside of the PMT provide front-end readout. From reference [99]. Used with permission.

Four detector blocks are bundled into a module or cassette with a common high voltage supply but separate detector readout channels, so each module has 8 x 32 crystals. The MicroPET R4 has 24 modules arranged in a ring around the aperture, with the long side of the module in the axial direction.
The tomograph has 4 detector blocks (32 crystal rings) in the axial direction and 24 detector blocks (192 crystals) per ring, for a total of 96 detector blocks and 6144 crystals.

5.3.2 MicroPET R4 Electronics

The analog signals from the modules are sent to 24 analog subsection boards, each with 4 independent signal channels. Each channel provides a variable gain receiver, crystal address identification and digitization, and energy signal digitization and thresholding. Both crystal identification and energy thresholding use on-board, user-definable look-up table information. As well, each channel has a summed signal which is sent to a constant fraction discriminator to produce a time stamp for each event. The digital signals from the analog subsection boards are sent to one of two detector head interface boards which add a detector block identification tag. Then, the signals and count rate information are passed to a digital coincidence controller that buffers events and determines which events are in prompt and delayed coincidence according to the user-defined coincidence timing window. Coincidences are also restricted to be within the electronic field of view.
All the event information for valid coincidences, including detector pair, time tags and coincidence type (prompt or delayed), is finally put into a 64 bit word that is sent by a fibre optic cable to the acquisition computer, and the event information is stored in list mode. The MicroPET uses an Intel CPU based computer system running Windows XP™ (Microsoft™) for the acquisition/reconstruction computer platform with no additional hardware. A proprietary software suite consisting of "MicroPET Manager™" [97] and "Asipro™" [98] is used to acquire, histogram, reconstruct, process and analyze the data.

5.3.3 Transmission Scanning System

The MicroPET R4 transmission scanning system uses a point source which moves in a helical orbit as it is translated along the axial FOV. The acquisition may be done using a windowed coincidence mode with a ⁶⁸Ge point source. However, since the source orbits close to the detectors, a singles source, such as ⁵⁷Co (Eγ = 122 keV, 136 keV), is preferable to give higher count rates and reduced scanning times. For singles transmission scanning, the source motion is controlled by an encoded motor that constantly measures the source holder position.
From the source and detector positions, the lines of response of detected single photon events are calculated and used to generate transmission sinograms. This transmission scanning system is also used for the blank scans and may be used for normalization scans. Alternately, a uniform cylinder of radioactivity may be used for normalization.

5.3.4 MicroPET R4 Histogramming and Reconstruction

The MicroPET R4 tomograph has 32 rings of 192 crystals for a total of 6144 crystals. It acquires in 3D list mode, including oblique LORs, leading to large data sets. The list mode data are histogrammed into sinograms based on the selected span, mash and maximum ring difference parameters. A typical 3D acquisition (span = 3, maximum ring difference = 31) produces 63 direct and 640 oblique sinograms for a total of 703. These are grouped into "segments" related to the average ring difference. For span = 3 and maximum ring difference = 31, there are 21 segments, 1 for the direct planes and 20 (±10) for the oblique planes. The direct segment contains 63 slice planes and the oblique segments contain a decreasing number of sinograms (59 sinograms for the segments with an average ring difference of ±2, down to 5 sinograms for the segments with an average ring difference of ±29).

The standard sinogram size is 84 radial x 96 angular bins. The LORs measured by offsetting one detector are interleaved in the radial binning to reduce the radial sampling distance to half the detector size and satisfy the Nyquist criterion (Figure 5-7). For 192 detectors per ring, this should produce 96 + 95 = 191 bins over the ring diameter of 148 mm. However, the sinogram is electronically truncated to a radial FOV of 100 mm to reduce the arc and parallax effects, decreasing the number of radial bins to 84. The radial bin size in this configuration is 1.2 mm at the radial centre and becomes smaller with increased displacement due to the arc effects.
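The segment bookkeeping quoted above (21 segments, 63 direct planes, 703 sinograms in total for span = 3 and maximum ring difference = 31) can be verified with a short script; the formulas below follow the standard span/ring-difference convention and reproduce the stated counts:

```python
def sinogram_segments(n_rings=32, span=3, max_ring_diff=31):
    """Count sinogram planes per segment for a span / maximum-ring-difference
    histogramming scheme (axial compression as used on the MicroPET R4)."""
    # Number of oblique segments on each side of the direct segment.
    n_oblique = (max_ring_diff - (span - 1) // 2) // span
    counts = {0: 2 * n_rings - 1}                 # direct segment: 2N - 1 planes
    for k in range(1, n_oblique + 1):
        d_min = span * k - (span - 1) // 2        # smallest ring difference in segment k
        planes = 2 * (n_rings - d_min) - 1        # planes shrink as obliqueness grows
        counts[k] = counts[-k] = planes
    return counts
```

Calling `sinogram_segments()` with the defaults gives 21 segments whose plane counts sum to 703, with 63 planes in the direct segment and 5 in the most oblique segments, matching the text.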
Arc effects are corrected prior to reconstruction. Data binned with 84 x 96 sinograms, span = 3 and maximum ring difference = 31 (703 sinograms) have a total of 5,668,992 LORs per data set. The data are stored as 4 bytes per LOR for a total size of 22 MB. The prompts and delays may be binned separately, or the delays may be subtracted during histogramming. The list mode data may also be binned into static, dynamic or gated (if available) data sets. The 3D data may be further rebinned into 2D sinograms before reconstruction using SSRB or FORE. Tomographic reconstruction may then be performed using the standard methods such as 2D filtered backprojection (FBP) and 2D ordered subset expectation maximization (OSEM). If 3D sinograms are used, 3DRP and 3D OSEM/MAP reconstruction are available. Typically, the slice images are reconstructed at 128 x 128 voxels with 63 slices, with a voxel size of 0.845 x 0.845 x 1.211 mm (voxel volume 0.865 mm³).

5.3.5 MicroPET R4 Performance

Measured performance values for the MicroPET R4 have been published [99][100]. The performance measurements of the similar MicroPET P4 (primate scanner) and other small animal PET scanners are also available [96][101]. For the MicroPET R4, the spatial resolution measured with a 1 mm ²²Na point source at the radial and axial centre was FWHM = 1.65 mm (radial), 1.66 mm (tangential) and 1.85 mm (axial). At a radial offset of 5 mm, the FWHM increases to 2.61 mm (radial), 2.27 mm (tangential) and 2.50 mm (axial). The point source equivalent sensitivity at the axial and radial centre of the tomograph was measured with a ⁶⁸Ge line source and attenuating aluminum sleeves [102]. After correction for attenuation and source branching fraction, the sensitivities were found to be 4.37% for a 250-750 keV window, 2.45% for a 350-650 keV window and 1.86% for a 410-613 keV window. The energy resolution was measured for every crystal in the tomograph.
The values varied between 17% and 36%, with an averaged tomograph value of 23%. The scatter fractions were measured with 30 mm and 60 mm diameter scatter phantoms representing, respectively, a mouse and a rat. The scatter fractions were 30% and 42% with a 250-750 keV window, and 18% and 28% with a 350-750 keV window, for the mouse and rat phantoms, respectively. The count rate performance and NECR were also measured using different sized cylindrical phantoms and different energy windows. The maximum true + scatter count rates using a 350-750 keV window were 411 kcps (@1240 kBq/ml), 317 kcps (@552 kBq/ml) and 290 kcps (@161 kBq/ml), respectively, for the mouse, rat and 10 cm phantoms. The maximum NECR using a 350-750 keV window were 168 kcps (@824 kBq/ml, 91 MBq total) and 89 kcps (@298 kBq/ml, 81 MBq total) for the mouse and rat phantoms, respectively.

The background radioactivity from the LSO scintillator influences the low count rate performance of the MicroPET R4 [103]. The count rates with an empty aperture due to the ¹⁷⁶Lu have been measured with different energy windows [104]. For a 250-750 keV energy window, the singles rate = 70.9 kcps, trues rate = 1032 cps, delays rate = 6.7 cps and prompts rate = 1039 cps. For the 350-750 keV energy window, the singles rate = 52.2 kcps, trues rate = 80.1 cps, delays rate = 3.7 cps and prompts rate = 83.8 cps. High background singles rates could cause errors in the calculated randoms rates and distributions if they are not taken into account.

5.4 The MicroPET R4 Detector Model

5.4.1 Introduction

The model-based calculation of the singles count rates requires modeling the physics of the radiation interactions in the object and in the tomograph detectors. Therefore, the specific design and performance characteristics of the tomograph being modeled must be included in the calculation.
In this section, the details of the MicroPET R4 tomograph relevant to the detector model used in this calculation are discussed.

5.4.2 Detector Binning, Solid Angle Calculations and Attenuation Object Sampling

The accuracy, speed and memory requirements of the randoms calculation depend primarily on the number of crystals used in the tomograph detector model. Ideally, the number of crystals in the calculation should equal the number of crystals in the tomograph. In scanners with an extremely large number of crystals, the burden of calculating the singles rates to each crystal would cause unreasonably large calculation times. In such cases the detector binning should be down sampled. However, since the MicroPET R4 tomograph has only 6144 crystals, the singles calculation may be performed for every real crystal within a reasonable time.

The solid angle of each crystal is calculated using the physical crystal area (2.1 x 2.1 mm) rather than the area of the crystal spacing (2.4 x 2.4 mm). The solid angle is calculated using the position of the mean depth of interaction, rather than the front face of the crystal. For 511 keV photons in a 10.0 mm thick LSO crystal this depth is 4.6 mm. Thus, while the tomograph ring has a physical radius of 74.0 mm, the effective radius for the solid angle calculation is 78.6 mm. The effect of non-normal incident angle on effective detector area is also included in the solid angle calculation.

The attenuation object is sampled along the line from the activity sample point to the detector in order to calculate the photon survival (Section 4.4.1). The computation time depends directly on the size of the sampling interval, ds. The MicroPET attenuation image is reconstructed with a voxel size of 0.845 x 0.845 x 1.211 mm (voxel volume 0.865 mm³); thus, ds need not be smaller than about 1 mm. If the sampling interval is too large, the photon survival calculation does not properly account for changes in the attenuating medium.
Therefore, a sampling interval of ds = 3 mm was chosen as a compromise between accuracy and speed.

5.4.3 Detector Efficiencies

The MicroPET R4 detectors consist of 1.0 cm thick LSO. The effect of the crystal gaps is modeled by reducing the effective crystal thickness by the ratio of the detector size to the detector spacing (2.1/2.4 = 88%). Thus, at normal incidence, the effective crystal thickness is x_det,eff = 0.88 cm. For 511 keV photons, the total attenuation coefficient of LSO (μ_LSO) is 0.88 cm⁻¹. Therefore, using (4.10), for normally incident photons, the MicroPET detectors have ε_int = 54%. The detector efficiency improves with increasing incident angle because of the increased effective detector thickness.

The detector efficiency for the photons that did not scatter in the object is given by (4.14). The detection efficiency correction factor, x_det, is used to account for the effect of the energy window on efficiency. x_det varies with the detector design and the energy window settings. In the following discussion, the value of x_det is estimated for the MicroPET R4 model with a 350 keV energy threshold using the procedure outlined in Section 4.5.2.

The effect of the energy window on the total detection efficiency may also be modeled using the window fraction, f_win, which is simply the fraction of the interacting photons that are counted (4.9). The value of f_win may be estimated for the case when no photons scatter in the object, since the effects of scatter are accounted for in the photon survival calculation (Section 4.6). 511 keV photons interact in LSO by photoelectric absorption (33%), Compton scatter (62%) and coherent scatter (5%) [105]. All photons that are photoelectrically absorbed deposit 511 keV in the detector. These events are above the energy threshold and are counted. Photons that coherently scatter are assumed to deposit no energy in the detector and are not counted.
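The ε_int figure quoted above follows directly from the exponential absorption law; a sketch, with the packing-fraction scaling of the thickness taken from the text:

```python
import math

# Sketch: intrinsic efficiency eps_int = 1 - exp(-mu_LSO * x_det_eff),
# with mu_LSO = 0.88 cm^-1 at 511 keV and the 1.0 cm crystal thickness
# scaled by the 2.1/2.4 packing fraction.
def intrinsic_efficiency(mu=0.88, thickness=1.0, packing=2.1 / 2.4):
    return 1.0 - math.exp(-mu * thickness * packing)

eps_int = intrinsic_efficiency()   # ~0.54, the 54% quoted above
```

Oblique incidence lengthens the path through the crystal, so calling this with a larger effective thickness reproduces the efficiency gain at non-normal angles noted in the text.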
According to the Compton equation (1.2), 511 keV photons that Compton scatter in the detector deposit energy through the recoil electrons up to a maximum of 340 keV. Although these should all be rejected by the 350 keV threshold, energy resolution blurring causes a small fraction of the single scattered photons to be counted, as shown in Figure 5-10.

Figure 5-10 The distribution of the energy deposited by 511 keV photons Compton scattered in the detector (blue). This is convoluted with an energy blurring function (FWHM = 23% at 511 keV) (green). The black dashed lines show the 350-650 and 250-650 keV energy thresholds.

For the 23% energy resolution of the MicroPET R4, ~5% of these events are above the 350 keV threshold. The total contribution of the 511 keV photons that undergo single Compton scattering in the detector is then 3% (62% x 5%). Therefore, the total fraction of 511 keV photons that are detected when using a 350 keV threshold is 36% (33% from photoelectric absorption and 3% from single Compton scatter in the detector). This is a lower limit estimate of f_win, because it does not take into account the photons that undergo Compton-absorption, or multiple scattering, in the detector, which would increase f_win. Using this value, a lower limit estimate for x_det may be found by equating (4.9) and (4.14), the two different formulae for ε_det:

    f_win (1 - e^(-μ_LSO x_det,eff)) = 1 - e^(-x_det μ_LSO x_det,eff)   (5.1)

Setting f_win = 0.36 and solving (5.1) for x_det yields x_det = 0.28. Again, this is a lower limit estimate; the true value would be higher. The optimal value of x_det is based on empirical data (Section 6.3.2).

5.4.4 Broad-Beam Correction Factor

Essentially all the interactions of 511 keV photons in tissue-equivalent materials are by Compton scatter. Since the MicroPET R4 is designed for imaging small rodents, it has a small transverse field of view (100 mm) and a small axial field of view (78 mm).
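Returning briefly to the efficiency correction factor of Section 5.4.3: equation (5.1) can be solved for x_det numerically. A sketch using the quoted lower-limit values (bisection is an arbitrary choice of solver, not necessarily the thesis procedure):

```python
import math

# Sketch: solve f_win*(1 - exp(-mu*x_eff)) = 1 - exp(-x_det*mu*x_eff)
# for x_det by bisection, with mu_LSO = 0.88 cm^-1 and x_eff = 0.88 cm.
def solve_x_det(f_win=0.36, mu=0.88, x_eff=0.88):
    target = f_win * (1.0 - math.exp(-mu * x_eff))
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if 1.0 - math.exp(-mid * mu * x_eff) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x_det = solve_x_det()   # ~0.28, matching the lower-limit estimate above
```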
This small field of view limits the size of the subjects, so that the maximum possible pathlength of photons in the subject is ~130 mm. More realistically, the diameters of a rat and a mouse are approximately 60 and 30 mm, respectively, so the average pathlengths of photons in the animals would be ~30 and 15 mm. Using the attenuation coefficient for water, it may be estimated that < 15% of the single photons would scatter in a mouse and < 25% in a rat. Therefore, in these imaging situations, the attenuation and scatter of single photons is small and does not alter the data significantly. This also implies that multiple scatter is negligible.

The photon survival calculation uses a broad-beam correction factor, x_broad, to correct the narrow-beam attenuation coefficients for photon scatter in the object (Section 4.6). The value of x_broad may be estimated from the fraction of the object-scattered single photons that are counted, f_scat,tot, using the procedure outlined in Section 4.6.2. The value of f_scat,abs is estimated for a MicroPET R4 tomograph with a 350 keV energy threshold using the energy distribution of the object-scattered photons that are fully absorbed in the detector, P_scat,abs(E_sc). This, following (4.20), is given by:

    P_scat,abs(E_sc) = [f_KN(E_sc) f_abs(E_sc) (1 - e^(-μ(E_sc)<x_det,eff>))] ⊗ f_G(E_sc, E_res)   (5.2)

where
f_KN(E_sc) = energy distribution of Compton scattered photons
f_abs(E_sc) = fraction of photoelectric interactions
μ(E_sc) = total attenuation coefficient of the detector
<x_det,eff> = average, effective detector thickness
f_G(E_sc, E_res) = an energy dependent, Gaussian energy blurring function.

f_KN(E_sc) is given by the Klein-Nishina equation (4.16). For the MicroPET R4, f_abs(E_sc) and μ(E_sc) for LSO are used [105]. The effective detector thickness for scattered photons depends on their incident angles. Since the paths of the scattered photons are not explicitly calculated, an average effective crystal thickness <x_det,eff> is used. In the MicroPET detector model, the normally incident effective thickness was taken to be 8.8 mm (Section 5.4.3).
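Two of the ingredients of (5.2) can be sketched numerically: the Compton kinematics (standard formula) and the energy-dependent Gaussian blur. The √E scaling of the resolution and the discrete-grid convolution are simplifying assumptions:

```python
import numpy as np

# Sketch: scattered-photon energy from the Compton equation, and a
# Gaussian energy blur anchored to 23% FWHM at 511 keV.
def compton_energy(e0, theta):
    """Scattered photon energy (keV) for incident e0 (keV) at angle theta."""
    return e0 / (1.0 + (e0 / 511.0) * (1.0 - np.cos(theta)))

def energy_blur(spectrum, energies, res_511=0.23):
    """Blur a spectrum with an energy-dependent Gaussian (FWHM ~ sqrt(E))."""
    out = np.zeros_like(spectrum, dtype=float)
    for e, w in zip(energies, spectrum):
        fwhm = res_511 * 511.0 * np.sqrt(e / 511.0)
        sigma = fwhm / 2.355                     # FWHM -> standard deviation
        kernel = np.exp(-0.5 * ((energies - e) / sigma) ** 2)
        out += w * kernel / kernel.sum()         # normalize to conserve counts
    return out
```

For 180° backscatter this gives 511/3 ≈ 170 keV for the scattered photon, consistent with the 340 keV maximum recoil-electron energy quoted in Section 5.4.3.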
Scattering from the radial and axial centre of the tomograph would result in incident angles up to 26°. Scattering from near the axial edge would result in incident angles up to 45°. This range of incident angles corresponds to a range of effective detector thicknesses from 8.8 - 12.4 mm. Therefore, the average incident angle is about 26° and <x_det,eff> ~ 10 mm. Finally, the energy blurring function in the calculation of P_scat,abs(E_sc) uses the MicroPET energy resolution of 23% at 511 keV.

Figure 5-11 shows all the stages of this calculation: the energy distribution of the Compton scattered photons, f_KN(E_sc), corrected for intrinsic efficiency (using μ_LSO(E_sc) and <x_det,eff> ~ 10 mm), then further modified by f_abs,LSO(E_sc), and finally the total distribution blurred by f_G(E_sc). All values of attenuation coefficients are taken from [105]. The four plots show the various factors in the final calculation of P_scat,abs(E_sc).

Figure 5-11 The energy distribution of photons Compton scattered in the object, f_KN (blue). This is corrected by the intrinsic efficiency (green) and the fraction of photoelectric absorption in the detector, f_abs,LSO(E_sc) (red), and convoluted with an energy dependent energy blurring function (teal). These assume <x_det,eff> = 10 mm and energy resolution = 23% at 511 keV. The black dashed lines show the 250, 350, and 650 keV energy thresholds.

The fraction of the object-scattered photons that were photoelectrically absorbed in the detector, f_scat,abs, is found by integrating P_scat,abs(E_sc) within the energy window and normalizing to the total scattering cross section (4.21). Using the P_scat,abs(E_sc) calculated above (based on the MicroPET R4 detector model) and a 350-650 keV window gives f_scat,abs = 12.5%.
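The window integration step is a simple discrete sum; a sketch, where the spectra are placeholder arrays standing in for P_scat,abs and the total scatter spectrum:

```python
import numpy as np

# Sketch of (4.21): integrate the absorbed-energy spectrum over the
# acquisition energy window and normalize to the total scatter spectrum.
def window_fraction(energies, p_abs, p_total, e_low=350.0, e_high=650.0):
    in_window = (energies >= e_low) & (energies <= e_high)
    return float(p_abs[in_window].sum() / p_total.sum())
```

With the spectra described in the text, this integration is what yields the f_scat,abs = 12.5% figure quoted above; the flat arrays in the test only exercise the bookkeeping.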
f_scat,abs does not include the fraction of the object-scattered photons that Compton scatter in the detector and fall within the 350-650 keV window, f_scat,cs. This also contributes to the total fraction of object-scattered photons that are counted. Calculating it by a similar method gives f_scat,cs = 0.5%. The fraction of object-scattered photons that undergo Compton-absorption in the detector cannot be calculated analytically and is not included in this analysis. Therefore, the lower limit of the fraction of the object-scattered photons that are counted is the sum of the two calculated contributions, f_scat,tot = f_scat,abs + f_scat,cs ~ 13%.

The value of the broad-beam correction factor, x_broad, is now found based on x_det = 0.28 and f_scat,tot ~ 0.13. As described in Section 4.6.2, the value of x_broad is found by equating (4.24) and (4.25), then solving for x_broad. Combining these equations yields:

    e^(-x_broad μ_obj x_obj) (1 - e^(-x_det μ_det x_det,eff)) = e^(-μ_obj x_obj) (1 - e^(-x_det μ_det x_det,eff)) + f_scat,tot (1 - e^(-μ_obj x_obj))   (5.3)

The calculation is not very sensitive to the values used for μ_obj and x_obj. For x_obj = 5.0 cm of water (μ_obj = 0.0958 cm⁻¹), x_broad ~ 0.28. Again, this preliminary value does not take into account Compton-absorption in the object or the detector. Therefore, the optimal value of x_broad requires further refinement based on empirical tests (Section 6.3.3).

5.4.5 Crystal Block Structure

Variations in the relative crystal efficiencies cause sharp diagonal lines in the randoms sinograms (Section 4.7.3). A simple model of the crystal efficiency variations within the detector block of the MicroPET R4 was used to reproduce these in the randoms calculation. Preliminary values of the efficiencies in a detector block were found using a Monte Carlo simulation of the detector block. Then, the symmetries of the crystal locations were used to identify crystals that should have the same efficiencies from purely systematic considerations.
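Returning to the broad-beam factor: with the detection term fixed, (5.3) can be solved for x_broad in closed form. A sketch using the lower-limit values quoted earlier in this section:

```python
import math

# Sketch: solve (5.3) for x_broad given x_det, f_scat,tot and a water
# path of x_obj cm (mu_obj = 0.0958 cm^-1, mu_det = 0.88 cm^-1).
def solve_x_broad(x_det=0.28, f_scat_tot=0.13, mu_obj=0.0958, x_obj=5.0,
                  mu_det=0.88, x_eff=0.88):
    det = 1.0 - math.exp(-x_det * mu_det * x_eff)        # detection term
    narrow = math.exp(-mu_obj * x_obj)                   # narrow-beam survival
    rhs = narrow * det + f_scat_tot * (1.0 - narrow)     # unscattered + counted scatter
    return -math.log(rhs / det) / (mu_obj * x_obj)

x_broad = solve_x_broad()   # ~0.28, matching the preliminary value above
```

Varying x_obj over a few centimetres changes the answer only slightly, consistent with the stated insensitivity to μ_obj and x_obj.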
It was found that within a block of 64 crystals only ten unique efficiency values were required. The efficiencies found from the simulation were then averaged according to their block locations to determine the ten efficiency values. These values were normalized to bring the total block efficiency to unity. The efficiency values were further adjusted to improve the fit of the calculated randoms distribution to the high frequency details in the measured randoms distribution. The relative efficiencies within the model of the MicroPET R4 detector block are shown in Figure 5-12.

Figure 5-12 Relative detector efficiencies in the detector block model.

This model only reproduces systematic block structure effects; random variations in crystal efficiency are not included. These would result in random variations in the relative amplitudes of the individual diagonal lines in the randoms sinograms. As well, the systematic variations change with PMT drift, so the block structure model may require adjustment over time. The relative efficiencies also depend on the energy window settings. The present model is only used with a 350-650 keV window.

5.4.6 Dead-Time Correction

The randoms calculation requires the measured randoms and singles rates to be dead-time corrected. The MicroPET software calculates a global correction factor, f_deadtime > 1, for the coincidences (trues and randoms), but not for the singles rates. Since the MicroPET calculation of the dead-time correction factor is proprietary, the exact relationship between singles and coincidence dead-time is unknown [106]. However, the measured singles and coincidences are processed identically in the MicroPET up to the coincidence processor. As well, the scintillator decay time is the largest contribution to detector dead-time. Therefore, a dead-time correction factor for the singles rates may be based on the values of f_deadtime.
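One simple way to realize such a singles correction (an assumption consistent with the square-law argument that follows, not a documented MicroPET formula) is to scale the measured singles by the square root of the coincidence factor:

```python
import math

# Sketch: randoms scale as the square of the singles, so a coincidence
# dead-time factor f_dt suggests roughly sqrt(f_dt) for the singles.
def corrected_singles(measured_singles, f_dt):
    return measured_singles * math.sqrt(f_dt)

# A pair of singles rates corrected this way reproduces the full f_dt
# when multiplied together in the randoms formula: sqrt(f_dt)^2 = f_dt.
```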
The randoms are measured as delayed coincidences in the same digital coincidence processor as the trues, so f_deadtime is equally valid for trues or randoms. Given that the randoms rates are proportional to the square of the singles, the dead-time correction for singles should be roughly the square root of the correction for coincidences. Therefore, a dead-time correction factor of √f_deadtime was used for the measured singles rates. A further, empirically determined, correction based on f_deadtime was applied to the calculated randoms sinograms to bring the calculated and measured randoms rates at high count rates into better agreement.

5.4.7 Summary of Calculation Parameter Optimization

The following calculation parameters require further adjustment based on the experimental validation studies:
• detector efficiency correction factor, x_det,
• broad-beam correction factor, x_broad, and
• radioactivity object sampling parameters, threshold and sampling density.
Experiments were performed to determine the optimal values of these parameters, and the sensitivity of the randoms calculation to those values (Section 6.3).

Chapter 6 Experimental Validation of the Randoms Calculation

6.1 Measurements

6.1.1 Chapter Overview

Chapter Six examines the validation experiments used to test the calculated randoms correction. The chapter begins with a description of the experimental details of the phantom and animal studies. Next, the figures of merit used to compare the measured and calculated randoms corrections are discussed. Then, the details of the calculation parameters used in the randoms correction are described. The bulk of the chapter gives the results of the experiments and compares them with the corresponding results from the randoms calculation. The overall significance of the results is discussed in Chapter Seven.
6.1.2 Introduction

The accuracy and efficacy of the model-based randoms correction method were assessed using phantom and animal experiments done on the MicroPET R4 PET tomograph. Since the MicroPET R4 allows the delayed coincidences to be stored separately, the randoms distributions may be directly measured. In early development, the calculated randoms distributions were compared with those generated by Monte Carlo simulations using SimSET [107]. However, because SimSET does not directly simulate random coincidences or include a block detector model for PET tomographs, the value of the simulations was limited. The GATE simulation program later became available, but was not then validated for the MicroPET R4 [108]. Therefore, it was decided to perform validations using only measured data. The following studies were made on the MicroPET R4:

• A high activity, off-centre line source was imaged in air for 12 hours. This provided a data set with a large number of counts so that individual count profiles could be examined. The high initial count rates also allowed the effects of high dead-times to be assessed.
• Point sources in air-filled and water-filled cylinders were scanned at different positions in the FOV. These experiments assessed the accuracy of the randoms calculation for activity at different positions in the tomograph.
• Uniform cylinders were scanned to assess the accuracy of the calculation for larger objects and radioactivity distributions. As well, one cylinder was scanned at different axial positions so that the effects of radioactivity outside the axial field of view could be studied. Another scan was done on a cylinder with very high activity to measure image noise at high randoms fractions.
• Contrast phantoms consisting of a sphere within a cylinder, each with different radioactivity concentrations, were also scanned. These experiments assessed the accuracy of the randoms calculation for complex radioactivity distributions.
As well, the effects of the measured and calculated randoms on the reconstructed contrast phantom images were studied.
• A study of two mice was used to assess the accuracy of the randoms correction in a realistic scanning situation. As well, the effects of the measured and calculated randoms on the reconstructed mouse images were evaluated.

All studies used ¹⁸F (T½ = 110 min), with the exception of the high activity cylinder, which used ¹¹C (T½ = 20 min). The emission scans were acquired on the MicroPET R4 using a 350-750 keV energy window and a coincidence window of 6.0 ns. Acquisition times were varied depending on the source radioactivity and the count requirements of the study. The blank and transmission scans used a ⁵⁷Co point source and acquisition times of ~12 minutes, and were performed post-injection.

6.1.3 High Count Study Using a High Activity Line Source in Air

A high radioactivity (~40 MBq), ~1 mm diameter, off-axis line source in air (Figure 6-1) was scanned for 12 hours. The source was 44 mm long, approximately half the length of a mouse body. The study was histogrammed into one 12 hour time frame to obtain a high count, low noise randoms distribution. The study was also histogrammed into 12 one-hour time frames to assess how the randoms rates and distributions vary with count rate.

Figure 6-1 Reconstructed coronal (left), sagittal (centre) and transverse (right) slice images of the off-axis line source. The lines and cross hair mark the approximate position of the tomograph axis.

6.1.4 Point Sources in Air-filled and Water-filled Cylinders

Point sources were scanned at various positions in air-filled and water-filled cylinders in the tomograph imaging volume. These studies provided a test of the total photon detection model (solid angle, photon survival and detector efficiency) from different axial and radial positions within the FOV. Two scans were done for each position.
The bottle was first filled with non-radioactive water and scanned. The scans with water were done first to maximize the activity when attenuation reduces the count rates. After scanning, the water was removed and the phantom was repositioned with the point source at the same location and scanned again. The measurements in air isolate the solid angle and detector efficiency parts of the calculation. The measurements in water include the additional effects of the photon survival (and scatter), so that the parts of the calculation may be studied separately. High activities were used to produce elevated randoms fractions and allow the acquisition of a large number of randoms counts. Scan durations of 30 to 60 minutes were used depending on the source activity. All experimental parameters are listed in Table 6-1.

Table 6-1 Experimental parameters of the studies with a high activity point source at various positions in air-filled and water-filled cylinders. There are no scan positions labelled 3 and 4.

Study (position and filling) | Initial Activity (MBq) | Scan Time (min) | x (±0.4 mm) | y (±0.4 mm) | z (±0.6 mm)
1 (water) | 37.99 | 30 | -4.7 | +5.5 | -4.9
1 (air) | 25.53 | 30 | -4.7 | +4.7 | -4.9
2 (water) | 15.11 | 60 | -6.3 | 20.7 | 0.0
2 (air) | 5.54 | 60 | -5.5 | 19.9 | 1.2
5 (water) | 31.22 | 30 | 5.5 | -4.7 | 21.8
5 (air) | 16.71 | 60 | 5.5 | -6.3 | 20.6
6 (water) | 8.29 | 60 | -18.2 | 0.4 | 20.6
6 (air) | 3.58 | 60 | -18.2 | -0.4 | 20.6
7 (water) | 19.99 | 30 | 0.4 | 0.4 | 35.1
7 (air) | 13.51 | 30 | -0.4 | -0.4 | 33.9
8 (water) | 8.31 | 58.7 | -0.4 | -14.8 | 33.9
8 (air) | 4.48 | 60 | -0.4 | -15.6 | 33.9

The sources used liquid samples of ¹⁸F with high initial activity concentrations. Point sources were made by injecting a small volume (~5 μl) of radioactivity into a small bore capillary tube using a narrow gauge syringe. Care was taken to not smear out the radioactivity along the tube walls. The tube was sealed and the activity measured in a dose calibrator.
The tube was then implanted into a high density polystyrene mesh/foam and mechanically secured into the neck of a 500 ml Nalgene water bottle, as shown in Figure 6-2. The bottle was approximately the size of a large rat, with outer diameter = 69 mm, length from bottom to neck = 142 mm, total length = 167 mm, and wall thickness = 2.0 mm. The point source positions in the bottle are shown in Figure 6-3. Only positions on one end were used because the tomograph response was assumed to be symmetric about the axial centre.

Figure 6-2 The large water bottle used in the point source measurements. The tube containing the point source is seen at the left, sticking out of the neck of the bottle.

Figure 6-3 Positions of point sources in an air-filled cylinder (red asterisks) and the corresponding positions in a water-filled cylinder (blue circles). The green lines show the electronic aperture of the tomograph. The black lines show the location of the phantom cylinder walls.

High activity point sources in more complex attenuation distributions were also scanned. One scan used a point source in air positioned beside a plastic water-filled ampoule ("complex point 1"), as shown in Figure 6-4. Another scan was done of a point source inside a water-filled cylinder beside an air-filled sphere ("complex point 2"), as shown in Figure 6-5. The experimental parameters are listed in Table 6-2.

Figure 6-4 A coronal slice of the reconstructed attenuation (left) and radioactivity (right) images for the point source in air beside a water-filled ampoule ("complex point 1"). The cursor in the attenuation image marks the position of the point source.

Figure 6-5 A sagittal slice of the reconstructed attenuation (left) and radioactivity (right) images for the point source in a water-filled cylinder beside an air-filled sphere ("complex point 2"). The cursor in the attenuation image marks the position of the point source.
Table 6-2 Studies with high activity point sources in complex attenuation objects.

Study | Initial Activity (MBq) | Scan Time (min) | x (mm) | y (mm) | z (mm)
complex point 1 | 4.48 | 60 | 12.2 | 2.9 | 0.0
complex point 2 | 8.31 | 58.7 | 1.3 | -13.1 | 0.0

6.1.5 Uniform Cylindrical and Contrast Phantoms

Different sized cylindrical phantoms containing a uniform distribution of radioactivity in water were scanned. The first phantom was a small cylinder about the size of a mouse, which fit completely within the axial FOV. This phantom was scanned with different axial offsets, each with increasing radioactivity outside the axial FOV of the tomograph, as shown in Figure 6-6. For this study, transmission scans were done for only the first two scans. The other two scans used an attenuation map created from the reconstructed radioactivity image in which the voxel values in the cylinder were replaced with the attenuation coefficients of water.

Figure 6-6 Reconstructed images of a small uniform cylinder scanned with increasing axial displacements (0.0, 1.0, 2.0 and 3.0 cm) (left). Larger displacements resulted in increased radioactivity outside the axial FOV. The reconstructed image of the larger uniform cylinder is shown on the right.

A second, larger cylinder (approximately the size of a rat) was also scanned. This had a larger diameter and its length extended outside the axial end of the tomograph. The phantom and study parameters are listed in Table 6-3.

Table 6-3 Details of the parameters in the uniform cylindrical phantom studies.

Study | Initial Activity (MBq) | Scan Time (min) | diameter (mm) | length (mm)
cylinder 1 | 20 | 60 | 50 | ~100
cylinder 2 position 1 | 59 | 60 | 30 | 70
cylinder 2 position 2 | 34 | 30 | 30 | 70
cylinder 2 position 3 | 27 | 30 | 30 | 70
cylinder 2 position 4 | 21 | 60 | 30 | 70

The larger cylinder had also previously been scanned with a very high activity of ¹¹C. This study was analyzed for image noise. Two different contrast phantoms were scanned under different conditions.
Both phantoms used a large cylinder but with different sizes of spherical insert. The sphere and the cylinder were filled with different radioactivity concentrations. The radioactivity concentration ratio between the sphere and background varied. Contrast phantom 2 was scanned twice at one position with different total activities and scanning times. It was also scanned at a second axial position, which altered the amount of activity outside the axial FOV and the location of the sphere. The different phantom configurations are shown in Figure 6-7. The details of the studies are listed in Table 6-4.

Figure 6-7 Reconstructed images of the three contrast phantom configurations: "contrast phantom 1" (left), "contrast phantom 2" (centre), and "contrast phantom 2a" (right). Phantoms 2 and 2a are identical except that the entire phantom was moved between the two scans.

Table 6-4 Details of the parameters in the contrast phantom studies.

Study | Initial Activity (MBq) | Scan Time (min) | sphere diameter (mm) | Activity Ratio (T:B)
contrast phantom 1 | 27.0 | 58.7 | 50 | 7:1
contrast phantom 2 study 1 | 60.8 | 5 | 20 | 3:1
contrast phantom 2 study 2 | 58.5 | 120 | 20 | 3:1
contrast phantom 2a (posn 2) | 21.4 | 60 | 20 | 3:1

6.1.6 Animal Study

As part of a regular research study, two mice were injected with a total of ~3 MBq of ¹⁸F-EF5, an imaging agent which is taken up in hypoxic tumour tissue. A three hour scan was performed. The reconstructed images are shown in Figure 6-8.

Figure 6-8 Reconstructed transverse, sagittal and coronal slices of the scan of two mice injected with ¹⁸F-EF5.

6.1.7 MicroPET Acquisition and Processing

All data were acquired using the standard protocol, unless otherwise noted. The acquired list mode data were histogrammed into two data sets for each study: one with the prompt and delayed coincidences stored in separate sinograms, and the other with the delays subtracted.
3D data sets were generated with a sinogram size of 84 radial bins by 96 angular bins using span = 3 and maximum ring difference = 31. This produced a total of 703 sinograms grouped into 21 segments. Data were also rebinned into sinograms using SSRB to increase the counts per slice for comparison with the calculated randoms sinograms. The preliminary radioactivity images used for the randoms calculation were reconstructed from the prompts sinograms using FORE followed by 2D-FBP, with normalization, attenuation and dead-time corrections applied. The reconstructed images were 128 x 128 x 63 slices.

The attenuation coefficient images were reconstructed from the transmission scan data. The resulting μ-values for water were ~30% lower than the true value (0.0958 cm⁻¹). This was due to the scatter of the photons from the singles transmission source [109]. Segmentation was done using the standard MicroPET procedures to give correct values. First, the attenuation coefficient images were smoothed using a Gaussian filter with a FWHM of 3 pixels in both the transverse and axial directions. Regions of interest were drawn around volumes of known attenuating media, usually a water volume or the patient bed, and the material composition was identified. The attenuation values were then scaled to the known values. Next, the attenuation image voxels were segmented according to their new rescaled values. A number of thresholds were placed on the histograms of the object μ-values to define the regions that have the same composition and μ-values. The thresholds were adjusted until distinct structures (such as the animal bed or phantom boundaries) were seen on the preliminary segmented image. When the thresholds were finalized, the segmented image was created. This was reprojected to form new transmission sinograms and new ACFs were generated. The preliminary radioactivity image was then reconstructed using the new ACFs.
Figure 6-9 shows histograms of the same attenuation image voxel values before and after segmentation.

Figure 6-9 Histograms of attenuation coefficient values from the transmission scan of a water-filled cylinder using a ⁵⁷Co singles source. The uncorrected histogram (left) shows a water peak at an erroneous value of 0.065 cm⁻¹. The histogram of the segmented attenuation object (right) shows only discrete attenuation values, and the water peak is now at the correct value (0.0958 cm⁻¹). The y-axis shows the frequency of attenuation values.

6.2 Data Analysis

6.2.1 Introduction

The measured and calculated randoms distributions may differ in three ways:
• inaccuracies in overall scaling,
• statistical noise, and
• biases in the spatial distribution due to inaccuracies in the calculation.
The validation of the model-based randoms correction requires that both the accuracy of the calculated spatial distribution of the randoms, as well as the global scaling, be assessed. The accuracy of the global scaling is found simply by comparing the measured and calculated global randoms rates. The accuracy of the calculated randoms distributions for each study was assessed by visual comparison of the sinograms and count profiles to the measured distributions. The pixel values were compared using quantitative figures of merit (FoM). Finally, the effects of each randoms correction method on the images (noise and bias) were assessed.

6.2.2 Comparison of Count Rates

The measured and calculated global randoms count rates were compared by the percent difference:

    % difference = [(M - C) / M] x 100%   (6.1)

where
M = measured average global count rate
C = calculated average global count rate.
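Equation (6.1) as code, with the sign convention above (positive when the calculation underestimates the measurement):

```python
# Sketch of (6.1): percent difference between measured (M) and
# calculated (C) global count rates.
def percent_difference(measured, calculated):
    return (measured - calculated) / measured * 100.0
```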
This gives no information about the accuracy of the spatial distribution, only on the accuracy of the global scaling of the randoms rates. The trues rates were also compared, but the accuracy of the trues calculation has little bearing on the accuracy of the randoms calculations.

6.2.3 Comparison of Randoms Distributions

An assessment of the accuracy of the calculated spatial distribution of randoms requires comparing the pixel values of the measured and calculated randoms distributions. However, the pixel values differ because of both biases and the statistical fluctuations in the measured randoms. Thus, the task is to estimate bias in the presence of noise. In this analysis, the term "bias" is used for the systematic errors in the calculated randoms distributions, which may vary spatially.

Visual comparison of the sinograms was first done to detect any potential gross differences and artifacts in the shapes of the distributions. In general, the calculated sinograms are very smooth, but in the majority of the studies, the measured delays sinograms are very noisy. This noise masks both high and low frequency details. Single slice rebinning was used to reduce the 3D data set to a 2D data set and, thus, reduce the noise in the sinograms. As well, the central transverse slices were mainly examined because this slice is at the peak of the counting sensitivity and has the lowest statistical noise. Sinograms of the sum of the slices were also sometimes used to further reduce the noise. Sinograms of the percent differences (6.1) between the pixel values of the measured and calculated sinograms were also calculated. These were very useful in detecting the small differences between the sinograms. For the highest count studies, the count profiles of individual pixel rows were examined along the different data axes: radial, azimuthal and axial.
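The axis-summed profiles used below can be sketched with plain array reductions (the (plane, angle, radial bin) axis ordering is an assumption about the sinogram layout):

```python
import numpy as np

# Sketch: summed count profiles from a sinogram stack by collapsing the
# other two axes, which suppresses noise along the examined axis.
def summed_profiles(sino):
    """sino: 3D array indexed (plane, angle, radial bin)."""
    radial = sino.sum(axis=(0, 1))     # profile vs radial bin
    azimuthal = sino.sum(axis=(0, 2))  # profile vs angle
    axial = sino.sum(axis=(1, 2))      # profile vs plane
    return radial, azimuthal, axial
```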
When the profiles of measured and calculated randoms distributions are plotted simultaneously, biases may be visually detected if the statistical fluctuations in the measured data are small. The 12 hour frame of the off-axis line source study contained an extremely large number of counts (460 million delays); thus, profiles within the 3D data set were also compared, without rebinning. For most studies with a lower number of counts, the data were summed over two of the three axes to form the profiles. For example, the summed radial profiles were formed by summing the data over the azimuthal and axial bins. This was necessary to reduce the noise, so that biases could be detected along the un-summed axis. While these summed profiles have reduced noise, biases in the summed axes may average out. Therefore, summed profiles for each axis must be examined. Finally, profiles of the percent difference between the measured and calculated summed count profiles were formed to enhance and quantify biases. In the percent difference profiles, biases appear as departures from flatness.

Four quantitative figures of merit (FoM) were also used to assess the bias and noise differences between the calculated and measured randoms distributions, based on the differences in the pixel values. These were the normalized mean squared error, NMSE, the chi-squared value, χ², the standard deviation of the percent differences, s.d.(%diff), and the 2D correlation coefficient, r_2D. The NMSE and χ² were calculated when comparing sinograms or entire data sets. The s.d.(%diff) and 2D correlation coefficient were calculated only when comparing individual sinograms. In this task, the mean squared error, MSE, is the sum of the squares of the differences between the corresponding pixel values in the two data sets, divided by the number of pixels. The NMSE, then, is the corresponding mean of the squares of the normalized differences.
For the measured (M) and calculated (C) data sets, with pixel values M_i and C_i, and for N pixels, these quantities are defined as:

    MSE = \frac{1}{N}\sum_{i=1}^{N}(M_i - C_i)^2    (6.2)

and

    NMSE = \frac{1}{N}\sum_{i=1}^{N}\left(\frac{M_i - C_i}{M_i}\right)^2    (6.3)

For two identical data sets, MSE = NMSE = 0. It may be shown that the MSE is the sum of the statistical variance and the square of the bias, so the values of MSE and NMSE have components from both bias and noise. In situations where the bias dominates, NMSE and MSE become a measure of the upper limit of the bias. The MSE and NMSE are especially useful when evaluating the differences made by different processing of the same data set. For example, optimal values of the calculation parameters were found, in part, by finding the values that minimized the MSE. For individual studies, the NMSE gives an absolute value of the magnitude of the bias if the noise is small compared to the bias.

The chi-squared between the pixel values in two data sets is calculated as:

    \chi^2 = \sum_{i=1}^{N}\frac{(M_i - C_i)^2}{M_i}    (6.4)

This may also be normalized to the chi-squared per pixel, \chi^2/N, where N is the number of pixels. The chi-squared is a measure of how closely the variations in pixel values follow Poisson statistics alone. Thus, it may be used to detect the presence of systematic biases. In this task, the \chi^2/N is used to compare the smooth distribution (the calculated randoms) to a distribution that contains Poisson noise (the measured randoms). If the difference between the two data sets is purely Poisson, \chi^2/N is very close to unity. However, \chi^2/N > 1 indicates that the differences in the data sets have a component due to bias. The relative magnitude of biases, however, cannot be quantified by \chi^2/N because its value is sensitive to the absolute pixel values. Thus, \chi^2/N increases with the absolute number of counts even when the relative bias is unchanged.

The standard deviation of the percent difference values between the measured and calculated distributions is calculated as:
    s.d.(\%diff) = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(100\% \times \frac{M_i - C_i}{M_i}\right)^2}    (6.5)

This is a measure of the magnitude of the statistical variations between the pixel values in two data sets. For two identical sinograms, s.d.(%diff) = 0. For two sinograms that differ only in their noise content, s.d.(%diff) > 0 and is indicative of the noise. When comparing the noisy measured randoms with the smooth calculated randoms, s.d.(%diff) is a measure of the noise in the measured randoms, but it is also sensitive to biases.

The 2D correlation coefficient, r_2D, is a measure of the similarity of two 2D data sets:

    r_{2D} = \frac{\sum_{i}(M_i - \bar{M})(C_i - \bar{C})}{\sqrt{\sum_{i}(M_i - \bar{M})^2 \sum_{i}(C_i - \bar{C})^2}}    (6.6)

For two identical images, r_2D = 1, and its value is independent of a scaling factor. r_2D is very sensitive to structural differences, but also to noise. For low noise images, it provides an additional measure of the agreement between two images. In low noise situations, low values of r_2D indicate significant bias.

These figures of merit must be used in combination to assess the accuracy of the calculated randoms distributions because the magnitude of the statistical noise limits the detectability of any bias. For those studies in which \chi^2/N ≈ 1, the difference between the calculated and measured data is dominated by the Poisson noise. In this case, the bias is smaller than the statistical noise, making it undetectable. For those tests with \chi^2/N > 1, the statistical noise no longer dominates and the other FoMs become more sensitive to bias. In this case, the FoM values become the upper limit of the bias (reached in the limit where the statistical noise is zero).

6.2.4 Comparison of Reconstructed Images

The overall effect of the different randoms correction methods on the reconstructed images was assessed by comparing image count profiles and measuring the image noise. The noise was found by measuring mean voxel values in a number of small ROIs positioned at equal radial distances from the centre of the phantom.
The % noise is the ratio of the standard deviation of the mean ROI voxel values to the average of these values. This gives an estimate of the image noise, but is sensitive to averaging within the ROI, as well as to variations in the image due to attenuation and scatter and their corrections. Such variations could potentially change the purely Poisson nature of the noise.

6.2.5 Testing the Quantitative FoMs

To test the sensitivities of the different quantitative FoMs, a high count measured randoms sinogram was compared with two calculated randoms sinograms: one including the effects of the block structure in the calculation, and the other without. The block structure causes the diagonal pattern in the randoms sinograms. Figure 6-10 shows the measured sinogram and the two calculated sinograms. The FoMs were calculated comparing the measured central slice sinogram to each of the calculated sinograms. The resulting FoMs are listed in Table 6-5.

Figure 6-10 The comparison of a measured sinogram (left) to a calculated sinogram without block structure (centre) and a calculated sinogram with block structure (right).

Table 6-5 The values of the FoMs between measured and calculated randoms distributions in detecting block structure in the central slice sinogram.

    calculation   NMSE (%)   s.d.(%diff) (%)   χ²/N   r_2D (%)
    w/o block     5.33       10.89             2.44   9.0
    with block    2.34       8.34              1.29   70.6

The value of every FoM is improved with inclusion of the block structure. The NMSE decreased by almost 50%. The s.d.(%diff) decreased by ~20%. The χ²/N > 1 indicates the dominance of the bias, and the decrease in the χ²/N shows that including the block structure brings the difference between the sinograms closer to being purely Poisson. The 2D correlation coefficient changes the most if the block structure is added; it increases from 9% to 71% for the central slice.
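The figures of merit of equations (6.2)-(6.6) can be computed directly from two flattened data sets; a minimal NumPy sketch (function and key names are illustrative, not the code used in this work):

```python
import numpy as np

def foms(measured, calculated):
    """Figures of merit comparing a measured (M) and calculated (C)
    randoms distribution, following equations (6.2)-(6.6)."""
    M = np.asarray(measured, dtype=float).ravel()
    C = np.asarray(calculated, dtype=float).ravel()
    N = M.size
    mse = np.mean((M - C) ** 2)                        # MSE, (6.2)
    nmse = np.mean(((M - C) / M) ** 2)                 # NMSE, (6.3)
    chi2_per_pixel = np.sum((M - C) ** 2 / M) / N      # chi^2 / N, (6.4)
    pct = 100.0 * (M - C) / M
    sd_pct_diff = np.sqrt(np.sum(pct ** 2) / (N - 1))  # s.d.(%diff), (6.5)
    r2d = np.corrcoef(M, C)[0, 1]                      # 2D correlation, (6.6)
    return {"MSE": mse, "NMSE": nmse, "chi2_per_pixel": chi2_per_pixel,
            "sd_pct_diff": sd_pct_diff, "r2D": r2d}
```

The sketch assumes strictly positive measured pixel values, as is the case for the measured delays sinograms compared here.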
Thus, these FoMs are able to detect bias when the image noise is low, and the 2D correlation coefficient is the most sensitive measure.

6.3 Calculations of the Randoms Distributions

6.3.1 Introduction

The randoms distribution calculation requires the values of the following calculation parameters to be optimized based on experimental results:
• the detector efficiency correction factor, κ_det,
• the photon survival broad-beam correction factor, κ_broad, and
• the radioactivity object sampling parameters: threshold and sampling density.
The experimentally determined values were used in the remainder of the calculations. Also determined was the sensitivity of the results of the randoms calculation to the values of these parameters.

6.3.2 Detection Efficiency Correction Factor

The value of the detection efficiency correction factor was estimated to be κ_det ≈ 0.28 from physical arguments based on the MicroPET R4 detector model (Section 5.4.3). However, because of the simplifications in this calculation, the optimal value is expected to be higher. Randoms distributions calculated with different values of κ_det were compared to the measured data from the high activity line source study (Section 6.1.3). The line source, being an extended object, averages out any small errors from the calculations of individual points with different solid angles. Since the line source is in air, the effects of attenuation and scatter in the subject are negligible; thus, the results do not depend on the value of the broad-beam correction factor. The comparison used a one hour frame from the study which was acquired at lower count rates (frame 3 in Table 6-8) to minimize the impact of dead-time. The calculation was done with high sampling density (5000 pts/ml) and threshold (20%). The calculated distributions were compared with the measured data by the count rates and quantitative figures of merit. The results are listed in Table 6-6.
Table 6-6 Measured and calculated count rates and FoMs comparing the randoms central slice sinograms from frame 3 of the line source in air for different values of κ_det.

    κ_det            Randoms Rate (kcps)   Trues Rate (kcps)   MSE    χ²/N
    measured rates   8.00                  186
    0.20             8.02                  115                 74.1   1.28
    0.25             8.02                  142                 74.1   1.28
    0.30             8.02                  167                 74.1   1.28
    0.33             8.02                  179                 74.4   1.29
    0.34             8.02                  185                 74.3   1.29
    0.35             8.02                  192                 74.2   1.29
    0.40             8.02                  216                 74.2   1.29

The results show that the value of κ_det has no effect on the accuracy of the randoms rate over a wide range of values. This is due to the scaling of the randoms distribution to the measured singles rate. There is also a negligible difference between the FoMs over a wide range of κ_det values. In the absence of other criteria, the agreement with the measured trues rate was used to determine the value of κ_det, with κ_det = 0.34 giving the best agreement. While the value of the trues rate is not a primary figure of merit, smaller differences between calculated and measured trues rates indicate a better overall calculation.

6.3.3 Broad-beam Attenuation Correction Factor

In Section 5.4.4, κ_broad was initially estimated to be 0.28 for a 350-650 keV energy window, based on physical arguments and the assumption that also κ_det = 0.28. However, if the empirically determined value κ_det = 0.34 is used, the initial estimate of κ_broad becomes 0.38. Empirically-based adjustments were done by measuring the agreement between measured and calculated randoms rates and distributions for different values of κ_broad.
For this, a study with a phantom configuration containing significant attenuating material was used (point source at position 6, in water, in Table 6-1). The measured and calculated count rates are compared in Table 6-7, along with FoMs comparing the measured and calculated randoms distributions.

Table 6-7 Measured and calculated count rates and FoMs comparing the randoms distributions for different values of κ_broad.

    κ_broad                            Randoms Rate (kcps)   Trues Rate (kcps)   MSE    χ²/N
    measured rates, position 6 in H2O  2.00                  64.3
    0.20                               2.03                  69.6                16.0   1.09
    0.30                               2.02                  67.0                15.7   1.09
    0.40                               2.01                  65.2                15.6   1.10
    0.45                               2.00                  64.3                15.6   1.10
    0.50                               2.00                  63.1                15.6   1.11
    0.60                               1.98                  61.1                15.6   1.12
    0.70                               1.97                  59.4                15.7   1.14

The values of κ_broad have a very small effect on the accuracy of the randoms rate; the best agreement is for κ_broad = 0.45-0.50. The changes are again small because the randoms rate is scaled directly to the measured singles rate. The FoMs also change only slightly with κ_broad; the MSE is minimized at κ_broad = 0.40-0.60. The χ²/N decreases monotonically with the lower values of κ_broad and is, therefore, not useful for selecting a value. The calculated trues rate also varies slightly, with κ_broad = 0.40 giving the best agreement. Since κ_broad = 0.45 gives the best agreement for the randoms rate, it is optimal. However, the accuracy of the calculation, as measured by these parameters, is shown to be relatively insensitive to the value of κ_broad.

6.3.4 Radioactivity Object Sampling and Calculation Time

The accuracy of the randoms calculation clearly depends on the sampling of the radioactivity object. However, the computation time also depends directly on the number of sample points used.
Two calculation parameters, sampling density and threshold, are used to choose the activity sampling points (Section 4.2.2). The sampling density determines the raw number of sample points. The threshold determines which of those points to use in the calculation. It was found that the calculated randoms distributions show little variation if a few hundred to a few thousand sample points are used, depending on the activity distribution. Unsurprisingly, the larger activity distributions require the higher number of points. In determining the best values for these parameters, the threshold is determined first, and then the sampling density is adjusted to produce the requisite total number of points.

The threshold is used to remove points with low activity from the set of sample points used in the calculation. The high activity points give the greatest contribution to the randoms. The threshold also removes image background, including apparent activity from randoms, scatter and reconstruction artifacts. For objects with highly concentrated activity distributions and high activity, such as the line and point sources and the cylindrical phantom studies, a high threshold (20%) was used to discriminate against low count background. In the line source study, a 20% threshold and a density of 500 pts/ml results in ~3000 points. In the point source studies, a 20% threshold and a density of 5000 pts/ml results in ~200-300 points. In a uniform cylinder, the large volume greatly increases the number of sample points. For example, a sampling density of 500 pts/ml results in ~70,000 sample points. Therefore, the sampling density was reduced to 50 pts/ml, which generated ~7,000 sample points.

Improper use of the threshold may cause problems for complex radioactivity distributions. For example, the contrast phantoms consist of a hot sphere inside a colder cylinder. The contrast difference between the sphere and the cylinder affects the appropriate threshold.
A 20% threshold has no effect if the activity ratio is 2:1. However, this threshold would eliminate the entire contribution of the cylinder if the activity ratio is > 5:1. Therefore, to detect a higher dynamic range of contrast, the threshold was reduced to 5% of the maximum and the density was kept at 50 pts/ml. For both contrast phantoms, this resulted in ~6000 sample points. Since the animal studies had a complex activity distribution, a threshold of 5% and a sampling density of 50 pts/ml were used.

The appropriate threshold also depends on the noise in the reconstructed radioactivity image. One study using a contrast phantom was binned into a 30 second frame. The resulting reconstructed activity image was very noisy. A threshold of 5% was inappropriate because it produced a significant number of sample points outside the subject. For this study, the threshold of the randoms calculation was increased to 20%. This problem was detected by viewing the distribution of sample points before the randoms calculation. However, for routine randoms correction of animal studies, a threshold of 5% and a density of 50 pts/ml were expected to work well.

The computation times scale with the number of points. For a standard single processor computer using the standard calculation parameters, the average calculation time was found to be 0.23 seconds/sample point. Thus, for a point source with 250 points, the computation time is 58 seconds. For 7000 sample points, this becomes ~27 minutes. However, the calculation code could be further optimized for speed. As well, the code is amenable to parallel processing.

6.3.5 Standard Calculation Parameters

The following values for the calculation parameters were used for all the studies, except where otherwise noted.
• detection efficiency correction factor, κ_det = 0.34,
• photon survival broad-beam correction factor, κ_broad = 0.45, and
• for the validation tests, thresholds and sampling densities adjusted according to the study: point sources, 5000 pts/ml, 20% threshold; line sources, 500 pts/ml, 20% threshold; uniform cylinders, 50 pts/ml, 20% threshold; and contrast phantoms and animal studies, 50 pts/ml, 5% threshold.

6.4 Results

6.4.1 High Count Study Using a High Activity Line Source in Air

The results from the off-centre line source in air are listed in Table 6-8. The total measured delays in the 12 hour frame are far greater than those that would be obtained in a realistic animal study (cf. 5.9 Mcts in the 3 hour mouse study). This frame is used to test for small biases between the calculated and measured randoms distributions. At all count rates, the randoms calculation reproduces the measured randoms rates for the line source to within 0.8%. The 12 hour frame is the only data set that shows any significant disagreement, 5.7%, and this was caused by errors in the calculated dead-time correction due to the long scan duration. For the remainder of this analysis, the calculated randoms distributions for this data set only are scaled to the same counts as the measured randoms for comparison purposes. The trues rates, while not relevant to the accuracy of the randoms calculation, are also reproduced to within 2.2% for all the time frames except the 12 hour frame, where the error in the trues rate is 7.5%.

Table 6-8 Measured (calculated) count rates from the high radioactivity line source at one hour time intervals over a 12 hour scan.
    Study/Start Time      Total Delays (cts)   Singles (kcps)   Trues (kcps)   Randoms (kcps)    Randoms % diff   Randoms Fraction (%)
    12 hour frame         459.4 M              2011 (2011)      139 (150)      10.6 (11.2)       -5.7             7
    frame 0, t=0 min      266.5 M              7880 (7880)      573 (579)      74.0 (74.6)       -0.8             11
    frame 1, t=60 min     127.0 M              5399 (5399)      395 (397)      35.3 (35.4)       -0.3             8
    frame 2, t=120 min    60.5 M               3709 (3709)      272 (277)      16.8 (16.8)       0.0              6
    frame 3, t=180 min    28.8 M               2554 (2554)      186 (185)      8.00 (8.03)       -0.4             4
    frame 4, t=240 min    13.8 M               1763 (1763)      128 (128)      3.83 (3.84)       -0.3             3
    frame 5, t=300 min    6.66 M               1223 (1223)      87.6 (89.1)    1.85 (1.86)       -0.5             2
    frame 6, t=360 min    3.25 M               853 (853)        60.0 (60.1)    0.903 (0.906)     -0.3             1
    frame 7, t=420 min    1.62 M               600 (600)        41.1 (41.2)    0.449 (0.450)     -0.2             1
    frame 8, t=480 min    821 k                426 (426)        28.2 (28.3)    0.228 (0.228)     0.0              1
    frame 9, t=540 min    429 k                307 (307)        19.3 (18.8)    0.119 (0.119)     0.0              1
    frame 10, t=600 min   234 k                226 (226)        13.2 (13.1)    0.0650 (0.0652)   -0.3             0
    frame 11, t=660 min   135 k                171 (171)        9.04 (9.16)    0.0375 (0.0374)   0.3              0

The measured 12 hour frame was used first for comparison because of its high number of counts. The SSRB rebinned randoms distributions were examined first. The measured, calculated and percent difference sinograms of two slices are shown: the central slice (Figure 6-11), and a slice midway between the axial centre and the axial end (Figure 6-12).

Figure 6-11 The measured (left), calculated (centre) and percent difference (right) central slice randoms sinograms for the 12 hour frame of the off-centre line source study.

Figure 6-12 The measured (left), calculated (centre) and percent difference (right) sinograms of a slice midway between the centre and the axial end (slice 48) for the 12 hour frame of the off-centre line source study.
Both sets of randoms sinograms show the low frequency spatial variations due to smooth changes in the singles distribution caused by the off-centre source. The high frequency structure caused by the systematic variations in the detector efficiencies within the block is also apparent. Both of these features appear well reproduced by the calculation.

There are some minor differences in the low and high frequency structures, as seen in the percent difference sinograms. The most prominent features are diagonal lines caused by differences in the efficiencies of individual crystals. The lines are most prominent in the lowest count regions, where the percent difference values would be the highest. Also visible in the central slice percent difference sinogram is a curved band caused by reduced counts in the measured sinogram. This dip is seen in the pixels that correspond to the pixels in the trues sinograms with the highest count rates. This feature is not seen in all slices; it is not apparent, for example, in the non-central slice in Figure 6-12.

The radial, azimuthal and axial profiles of the SSRB rebinned randoms sinograms are shown in Figure 6-13 to Figure 6-15. These are a sample set of profiles in which the other data axes have not been summed. The calculated profiles show good agreement with the measured data. A few radial profiles show small differences at some radial positions. The positions of these biases mainly correspond to the location of the curved band artifact described above. Some of the axial profiles show errors where the shapes of the profiles agree but are not scaled the same. These are probably caused by the random variations in individual detector efficiencies.
Figure 6-13 Radial profiles for azimuthal bin ("az") 48 and different axial slices ("ax") of the measured (red) and calculated (blue) randoms sinograms of the 12 hour frame of the off-centre line source study.

Figure 6-14 Azimuthal profiles for radial bin ("rad") 42 and different axial bins ("ax") of the measured (red) and calculated (blue) randoms sinograms of the 12 hour frame of the off-centre line source study.
Figure 6-15 Axial profiles for azimuthal bin ("az") 48 and different radial bins ("rad") of the measured (red) and calculated (blue) randoms sinograms of the 12 hour frame of the off-centre line source study.

Figure 6-16 shows the radial, azimuthal and axial summed count profiles. These show excellent agreement between the calculated and measured randoms distributions. Figure 6-17 shows the radial, azimuthal and axial summed percent difference profiles of the same data. These profiles would be flat with a value of 0% if the measured and calculated randoms distributions were in perfect agreement. Again, the data show good agreement, with the largest difference seen in the summed radial profile. Here the calculation overestimates the randoms by up to 2-3% in the region that corresponds to the dip in the measured sinograms. The azimuthal and axial profiles are nearly flat, with only a minor bias in the axial direction. Comparing Figure 6-17 with Figure 6-16 shows that, in the axial distribution, the counts decrease significantly near the axial edges. Thus, a larger percent difference near the axial edge actually represents a very small difference in the distributions.

Figure 6-16 The radial (left), azimuthal (centre) and axial (right) summed count profiles of the calculated (blue) and measured (red) randoms distributions from the 12 hour frame of the off-centre line source.
Figure 6-17 The radial (left), azimuthal (centre) and axial (right) summed percent difference (data1 = calculated, data2 = measured) profiles for the 12 hour frame of the randoms distributions of the off-centre line source.

A sample of the 3D data set was also examined. The radial count profiles of the randoms sinograms for different segments are shown in Figure 6-18 to Figure 6-20. Again, the calculated profiles appear to agree well with the measured ones. At large oblique angles (segments ±7), there is a small anti-symmetric discrepancy in the scaling, as seen when the radial profiles are summed over the other axes, as shown in Figure 6-21. These are only a small contribution to the overall randoms distributions.

Figure 6-18 Radial profiles of the direct planes (segment 0) of the measured (red) and calculated (blue) radial profiles for azimuthal bin ("az") 48 and different axial slices ("ax"). Some profiles are missing because those oblique sinograms are not sampled in this segment.

Figure 6-19 Radial profiles of the oblique planes (segment -7) of the measured (red) and calculated (blue) radial profiles for azimuthal bin ("az") 48 and different axial slices ("ax").
Some profiles are missing because those oblique sinograms are not sampled in this segment.

Figure 6-20 Radial profiles of the oblique planes (segment +7) of the measured (red) and calculated (blue) radial profiles for azimuthal bin ("az") 48 and different axial slices ("ax"). Some profiles are missing because those oblique sinograms are not sampled in this segment.

Figure 6-21 The radial summed count profiles of the measured (red) and calculated (blue) randoms distributions for segments -7 (left), 0 (centre) and +7 (right).

Quantitative figures of merit were calculated for the different time frames in this study, and the results are listed in Table 6-9. The main difference between the frames is the image noise. The first frames have a high number of counts and, therefore, less image noise. For these frames, χ²/N > 1, which indicates that the differences in the pixel values are non-Poisson and, therefore, due mainly to biases. Thus, the other FoMs indicate the magnitude of the biases; higher count frames give better estimates. Therefore, the low values of the NMSE and the very high value of the 2D correlation coefficient in the 12 hour frame suggest that the overall bias is very small. The 2D correlation coefficient is very high in the four highest count frames and does not change significantly between these frames.
As the total counts decrease, the statistical fluctuations begin to dominate and the correlation decreases.

Table 6-9 FoMs comparing the measured and calculated randoms distributions from the high radioactivity line source at one hour time intervals over the 12 hour scan. The values are for the central slice sinograms. In brackets are the FoMs over all the pixels in the data set.

    Study/Start Time      Total Delays (cts)   NMSE (%)        s.d.(%diff) (%)   χ²/N          r_2D (%)
    12 hour frame         459.4 M              0.36 (1.09)     3.7               5.85 (4.50)   98.0
    frame 0, t=0 min      266.5 M              0.45 (1.55)     4.2               4.55 (3.39)   97.6
    frame 1, t=60 min     127.0 M              0.56 (2.29)     4.6               2.54 (2.14)   96.9
    frame 2, t=120 min    60.5 M               0.81 (3.79)     5.6               1.76 (1.55)   95.5
    frame 3, t=180 min    28.8 M               1.31 (6.90)     7.1               1.36 (1.29)   92.6
    frame 4, t=240 min    13.8 M               2.38 (13.3)     9.3               1.18 (1.15)   87.5
    frame 5, t=300 min    6.66 M               4.71 (26.1)     13.4              1.13 (1.09)   78.3
    frame 6, t=360 min    3.25 M               13.3 (50.8)     18.1              1.05 (1.05)   67.1
    frame 7, t=420 min    1.62 M               17.2 (99.5)     25.4              1.02 (1.03)   53.5
    frame 8, t=480 min    821 k                33.4 (188.7)    35.4              1.00 (1.02)   41.0
    frame 9, t=540 min    429 k                63.6 (341.4)    48.7              1.01 (1.00)   29.4
    frame 10, t=600 min   234 k                112.8 (586.7)   65.4              0.99 (1.00)   21.5
    frame 11, t=660 min   135 k                192.3 (953.9)   73.0              0.98 (0.99)   14.9

However, the FoMs still have a contribution from noise. The s.d.(%diff) reflects the noise in the measured sinograms. In the measured central slice randoms sinogram of the 12 hour frame (Figure 6-11), the counts in most pixels range between 1500-3500, except in the areas reduced by block structure. The lower count regions contribute more to the total image noise. From this perspective, the dominant pixel value was 1500 counts, which corresponds to ~2.6% noise. This is close to the value of s.d.(%diff) = 3.7% calculated from the differences in the sinogram pixel values. For those frames where χ²/N ≈ 1 (frames ≥ 6), the difference between the calculated and measured data is dominated by the Poisson noise.
Any biases are smaller than the statistical noise and are, thus, not detectable by the FoMs. This is illustrated in Figure 6-22, which shows the measured and calculated randoms for the central slice sinogram in time frame 6. The measured sinogram is very noisy with little visible structure, but the calculated sinogram is still smooth. The percent difference sinogram shows only noise, with no evidence of a bias. The noisy measured sinogram results in the high values of percent difference.

Figure 6-22 The measured (left), calculated (centre) and % diff (right) central slice randoms sinograms for frame 6. The reduced counts in the measured sinogram greatly increase the noise.

The FoM values from comparing the pixels in all the slices (in brackets) show the same systematic changes as those for the central slice. However, the values are slightly higher, due mainly to the increased noise from the other slices where the counting sensitivity is lower. This is confirmed by the reduced χ²/N when comparing the pixels in all the slices.
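Evaluating a FoM slice by slice, as is done below for the 12 hour frame, is a simple loop over the axial axis; a sketch for the χ²/N of equation (6.4), assuming a (radial, azimuthal, axial) array layout (not the code used in this work):

```python
import numpy as np

def chi2_per_pixel_by_slice(measured, calculated):
    """Chi-squared per pixel, equation (6.4), evaluated separately for
    each axial slice of (radial, azimuthal, axial) sinogram stacks."""
    n_slices = measured.shape[2]
    out = np.empty(n_slices)
    for k in range(n_slices):
        M = measured[:, :, k].ravel()
        C = calculated[:, :, k].ravel()
        out[k] = np.sum((M - C) ** 2 / M) / M.size
    return out
```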
Figure 6-23 The NMSE (left) and sd(%diff) (right) per slice for the 12 hour frame. The values for NMSE and sd(%diff) are different than those in Table 6-9 because, in the profiles, the calculated data is used in the denominator of (6.3) and (6.5) to avoid "divide-by-zero" errors.

Figure 6-24 shows the values of the χ²/N and the 2D correlation coefficient per slice. Again, χ²/N > 1 signifies that the differences between the measured and calculated data are not purely Poisson. The χ²/N peaks in the slice containing the most counts, where the Poisson noise is least significant. The 2D correlation coefficient shows an excellent correlation for the majority of the slices, dropping off only at the edges where the noise begins to dominate.

Figure 6-24 The χ²/N (left) and 2D correlation coefficients (right) per slice for the 12 hour frame. The values for χ²/N are different than those in Table 6-9 because, in the profiles, the calculated data is used in the denominator of (6.4) to avoid "divide-by-zero" errors.

The effect of count rate on the randoms distribution was examined by comparing the measured randoms distributions from two different time frames. Figure 6-25 shows the radial and axial percent difference profiles comparing the randoms distributions measured at high count rates (frame 0) with those measured at low count rates (frame 11). The source was not moved between these time frames; thus, only the acquisition count rates and the total collected counts are different. The total counts in the low count rate frame were scaled to those in the high count rate frame for comparison. There are obvious large biases in the radial and axial distributions, seen as large slopes, caused by the differences in the count rates during acquisition.
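The frame-to-frame comparison can be sketched as follows; `scaled_percent_diff` is a hypothetical helper that scales the low-count frame to the total counts of the high-count frame before forming the percent-difference map, as described above:

```python
import numpy as np

def scaled_percent_diff(frame_hi, frame_lo):
    """Percent-difference map between two frames after scaling the
    second frame's total counts to match the first."""
    hi = np.asarray(frame_hi, dtype=float)
    lo = np.asarray(frame_lo, dtype=float)
    lo = lo * (hi.sum() / lo.sum())  # match total counts to frame_hi
    return 100.0 * (hi - lo) / np.where(lo != 0.0, lo, 1.0)
```

After the scaling, any remaining structure in the map reflects a genuine change in the distribution (such as the count-rate dependent bias seen here), not the difference in total counts.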
Figure 6-25 The radial (left) and axial (right) randoms summed percent difference profiles where data1 = measured frame 0 and data2 = measured frame 11. The counts in frame 11 were scaled to those in frame 0.

The most likely cause of this bias is the LSO background radioactivity (Section 4.7.4, [103]). At low count rates, the randoms caused by singles events inside the detectors make a significant contribution to the measured randoms. The randoms calculation was then tested to determine if it could reproduce the randoms distributions at both high and low count rates. Figure 6-26 shows the radial and axial summed percent difference profiles between the measured and calculated distributions for both time frames 0 and 11. The biases seen in these profiles are much smaller than the differences between measured time frames 0 and 11 seen in Figure 6-25, especially in the radial direction. Thus, the randoms calculation is able to reproduce the effects of different acquisition count rates, probably because it takes the background radioactivity into account.

Figure 6-26 The radial (left) and axial (right) randoms summed percent difference profiles where data1 = calculated and data2 = measured randoms. The top row shows the profiles for frame 0 (high rates) and the bottom row shows the profiles for frame 11 (low rates).

6.4.2 Point Sources in Air- and Water-filled Cylinders

The results from the studies of point sources at various positions in air-filled and water-filled cylinders are listed in Table 6-10.

Table 6-10 Measured and calculated count rates from a high radioactivity point source in air-filled and water-filled cylinders at various positions.
Point position     Total Delays (Mcts)   Singles (kcps)   Trues (kcps)   Delays (kcps)   Delays % diff   Randoms Fraction (%)
1 (water)          87.8                  6200 (6200)      487 (542)      48.8 (48.6)     -0.4            9.1
1 (air)            51.0                  4713 (4713)      458 (473)      28.4 (28.5)     -0.4            5.8
2 (water)          26.4                  2504 (2504)      199 (209)      7.34 (7.24)     1.4             3.6
2 (air)            4.62                  1023 (1023)      97.0 (96.6)    1.28 (1.27)     0.8             1.3
5 (water)          53.1                  4793 (4793)      232 (236)      29.5 (29.3)     0.7             11.3
5 (air)            34.2                  2704 (2704)      153 (150)      9.51 (9.61)     -0.3            5.9
6 (water)          7.20                  1284 (1284)      64.3 (64.4)    2.00 (2.00)     0.0             3.0
6 (air)            1.88                  646 (646)        36.4 (36.1)    0.52 (0.52)     0.0             1.4
7 (water)          17.8                  2744 (2744)      49.5 (33.9)    9.87 (9.81)     0.6             16.6
7 (air)            11.1                  2166 (2166)      39.0 (33.8)    6.16 (6.14)     0.3             13.6
8 (water)          5.77                  1144 (1144)      22.6 (17.3)    1.64 (1.64)     0.0             6.8
8 (air)            2.41                  721 (721)        13.1 (11.5)    0.67 (0.67)     0.0             4.9
complex point 1    1.99                  936 (936)        90.8 (91.9)    1.11 (1.11)     0.0             1.2
complex point 2    4.32                  1379 (1379)      116 (124)      2.40 (2.40)     0.0             2.0

Values are measured (calculated).

The calculation reproduces the measured delays rates to within 1.4% for all point source positions, for both air- and water-filled cylinders. The accuracy of the calculated randoms rates is independent of the point source location. The calculated trues rates do not accurately reproduce the measured rates, with errors ranging from <1% to 32%; the largest errors are for points in water near the axial edge. Again, this has no bearing on the accuracy of the randoms. The measured and calculated central slice randoms sinograms for the point source at position 1 in the water-filled and air-filled cylinders are compared, respectively, in Figure 6-27 and Figure 6-28. Both show good visual agreement with the block structure and low frequency variations in the randoms, but the calculated distributions are smooth.
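For context, the delays (randoms) rate on each line of response follows from the detector singles rates through the standard relation R_ij = 2τ·s_i·s_j. The sketch below uses an illustrative coincidence window τ, not the actual MicroPET R4 setting:

```python
import numpy as np

def randoms_rates(singles_cps, tau_s=6e-9):
    """Randoms rate (counts/s) for every detector pair, computed from
    the per-detector singles rates via R_ij = 2*tau*s_i*s_j.
    tau_s = 6 ns is an assumed window width, for illustration only."""
    s = np.asarray(singles_cps, dtype=float)
    return 2.0 * tau_s * np.outer(s, s)
```

Because every R_ij is proportional to the product of two singles rates, an error in the global singles normalization propagates quadratically into the randoms, which is why the calculated singles are scaled to the measured global rate.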
The percent difference sinograms again show the diagonal lines due to random variations in crystal efficiencies and the artifact from the dip in the measured randoms, most clearly seen in the measured sinogram of the point source in air.

Figure 6-27 Measured (left), calculated (centre) and percent difference (right) randoms sinograms for the central slice of the scan of the point source in water at position 1.

Figure 6-28 Measured (left), calculated (centre) and percent difference (right) randoms sinograms for the central slice of the scan of the point source in air at position 1.

Figure 6-29 and Figure 6-30 show the radial, azimuthal and axial summed percent difference profiles. The results generally show excellent agreement. In the radial summed count profile, the calculation overestimates the randoms by up to 5-6% in the region that corresponds to the dip in the measured sinograms. The azimuthal and axial profiles are nearly flat (if the scale of the y-axis is noted).

Figure 6-29 The radial (left), azimuthal (centre) and axial (right) summed percent difference (data1 = calculated, data2 = measured) profiles for the randoms distributions of the point source at position 1 in water.

Figure 6-30 The radial (left), azimuthal (centre) and axial (right) summed percent difference (data1 = calculated, data2 = measured) profiles for the randoms distributions of the point source at position 1 in air.

The randoms sinograms for the other point source positions vary considerably in appearance. However, the calculated distributions again show similarly good agreement with the measured ones.
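The summed profiles used throughout these comparisons can be formed as below. Following the note in the Figure 6-23 caption, the calculated counts (data1) are used in the denominator to avoid "divide-by-zero" errors; the helper name and exact convention are illustrative:

```python
import numpy as np

def summed_percent_diff(data1, data2, axis):
    """Summed percent difference profile along one binning direction
    (radial, azimuthal or axial). Both 3-D sinogram arrays are summed
    over the other directions first; data1 (taken to be the calculated
    counts) is used in the denominator."""
    other = tuple(a for a in range(np.ndim(data1)) if a != axis)
    p1 = np.asarray(data1, dtype=float).sum(axis=other)
    p2 = np.asarray(data2, dtype=float).sum(axis=other)
    return 100.0 * (p1 - p2) / np.where(p1 != 0.0, p1, 1.0)
```

Summing over the other directions reduces the noise enough for small biases (a few percent) to become visible, at the cost of possibly masking biases that cancel within a profile bin.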
Likewise, the summed percent difference profiles for the calculated and measured randoms for all the studies show similar results to those in Figure 6-29 and Figure 6-30. The radial profiles all show a minor systematic bias causing a slight slope. The point-sources-in-air measurements also show the artifact from the dip in the measured randoms. The axial percent difference profiles all show excellent agreement. As well, the results from the studies of point sources in complex attenuation phantoms are comparable.

Table 6-11 shows the quantitative figures of merit found when comparing the measured and calculated randoms distributions for the different point source configurations. A variety of source configurations and total acquired counts are represented in this group of studies. For those studies where χ²/N ≈ 1 (lower counts), the difference between the calculated and measured data is dominated by the Poisson noise and, therefore, the bias is not detectable by the other FoMs. For those studies with χ²/N > 1 (higher counts), the statistical noise no longer dominates and the FoMs become more sensitive to bias. In these studies, the low values of NMSE and sd(%diff) and the high values of the 2D correlation coefficient indicate that any biases are small.

Table 6-11 FoMs comparing the measured and calculated randoms distributions from the different point source configurations. The values are for the central slice sinograms. In brackets are the FoMs over all the pixels in the data set.
Study position/configuration   Total Delays (Mcts)   NMSE (%)         s.d.(%diff) (%)   χ²/N          r2D (%)
1 (water)                      87.8                  0.69 (2.93)      5.1               2.25 (1.85)   90.5
1 (air)                        51.0                  1.09 (4.34)      6.5               2.11 (1.59)   84.3
2 (water)                      26.4                  1.68 (8.55)      8.1               1.52 (1.37)   93.8
2 (air)                        4.62                  6.90 (39.51)     16.2              1.13 (1.11)   72.5
5 (water)                      53.1                  0.85 (4.38)      5.8               1.62 (1.53)   88.4
5 (air)                        34.2                  1.15 (5.9)       6.8               1.42 (1.35)   85.0
6 (water)                      7.20                  4.36 (26.42)     12.9              1.10 (1.10)   85.1
6 (air)                        1.88                  15.45 (87.59)    24.0              1.04 (1.03)   53.8
7 (water)                      17.8                  2.01 (11.35)     8.8               1.27 (1.22)   67.9
7 (air)                        11.1                  2.92 (16.79)     10.5              1.17 (1.14)   60.2
8 (water)                      5.77                  5.39 (32.35)     14.3              1.09 (1.07)   74.0
8 (air)                        2.41                  12.15 (69.65)    21.3              1.05 (1.03)   50.2

6.4.3 Uniform Cylindrical and Contrast Phantoms

The results from the cylindrical and contrast phantom studies are listed in Table 6-12. The calculation was able to reproduce the randoms rate to within 1.2%, irrespective of the high activities (and large dead-times) used in some studies. Even with the high activities, the highest randoms fraction achieved was only 15.5%. Increased radioactivity outside the axial FOV also did not affect the accuracy of the calculated randoms rates. However, this is expected because the randoms are scaled to the measured singles rate; thus, the activity outside the AFOV results in a bias in the distribution rather than an error in the rates. The trues rates, however, are not generally in perfect agreement, and there are large errors with activity outside the AFOV.

Table 6-12 Measured and calculated count rates for cylindrical phantom studies.
Study                    Total Delays (Mcts)   Singles (kcps)   Trues (kcps)   Delays (kcps)   Delays % diff   Randoms Fraction (%)   Activity outside AFOV
lg cyl                   30.8                  2567 (2567)      138 (149)      8.56 (8.56)     0.0             5.8                    -
sm cyl z = 0.0 cm        420                   9626 (9626)      638 (663)      117 (118)       -1.2            15.5                   0%
sm cyl z = 1.0 cm        82.3                  5974 (5974)      419 (435)      45.7 (45.8)     -0.2            9.8                    5%
sm cyl z = 2.0 cm        51.9                  4727 (4727)      306 (335)      28.8 (28.8)     0.0             8.6                    20%
sm cyl z = 3.0 cm        47.3                  3167 (3167)      165 (197)      13.2 (13.2)     0.0             7.4                    30%
contr phant 1            75.3                  4011 (4011)      244 (258)      20.9 (21.0)     -0.5            7.9                    -
contr phant 2 study 1    27.5                  8651 (8651)      512 (539)      93.4 (93.3)     0.1             15.4                   -
contr phant 2 study 2    316                   5769 (5769)      337 (359)      43.8 (44.3)     -1.1            11.5                   -
contr phant 2a           29.8                  2524 (2524)      143 (163)      8.28 (8.26)     0.2             5.5                    -

Values are measured (calculated).

Figure 6-31 shows the radial, azimuthal and axial summed count profiles for the large uniform cylinder. These show excellent agreement between the calculated and measured randoms distributions. The same data is shown again in the radial, azimuthal and axial summed percent difference profiles in Figure 6-32. Here the radial and azimuthal profiles are nearly flat, but the axial profile shows a distinct slope caused by an underestimation of the randoms at the end of the phantom where there is activity outside the AFOV. Although this leads to a relatively large percent difference, the axial count profile (Figure 6-31, right) shows that the absolute number of counts near the axial ends is small. Although these are scaled by the sensitivity following reconstruction, the noise due to the low counts would be much greater than the biases, so there is only a minor impact on the distributions.
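The tabulated randoms fractions are consistent with the ratio of delays to prompts, i.e. delays/(trues + delays); for example, 48.8/(487 + 48.8) ≈ 9.1% for point 1 in water (Table 6-10). A minimal check, assuming that form:

```python
def randoms_fraction(trues_kcps, delays_kcps):
    """Randoms fraction (%) as delays / (trues + delays); this form
    reproduces the tabulated values and is assumed here."""
    return 100.0 * delays_kcps / (trues_kcps + delays_kcps)
```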
Figure 6-31 The radial (left), azimuthal (centre) and axial (right) summed count profiles of the measured (blue) and calculated (red) randoms distributions from the large uniform cylinder study.

Figure 6-32 The radial (left), azimuthal (centre) and axial (right) summed percent difference (data1 = calculated, data2 = measured) profiles for the large cylindrical phantom.

The effects of activity outside the axial FOV were examined with the small cylinder. Figure 6-33 shows the measured and calculated axial summed count profiles for the small cylinder in different axial positions. For the cylinder fully in the FOV (0.0 cm displacement), the axial distributions are nearly symmetric and there is good agreement. When the cylinder is displaced by 3.0 cm, 30% of the radioactivity is outside the AFOV. Although this changes the distribution, there is still excellent agreement between the calculated and measured axial distributions.

Figure 6-33 The axial summed count profiles of the measured (blue) and calculated (red) randoms distributions for the small uniform cylinder at 0.0 cm (left) and 3.0 cm (right) displacements.

The axial summed percent difference profiles of the same data are shown in Figure 6-34. They show the increased bias with greater activity outside the FOV. For the 3.0 cm displacement the error is ~8% at each of the axial ends. However, this percent difference is almost undetectable in the count profile (Figure 6-33) because the absolute number of counts is small at the axial ends.
Figure 6-34 The axial summed percent difference profiles of the small uniform cylinder at 0.0 cm (left) and 3.0 cm (right) displacements.

The results for the contrast phantoms showed similarly good agreement between calculated and measured randoms distributions. Figure 6-35 shows the radial, azimuthal and axial summed percent difference profiles for contrast phantom 2, study 1. These again show good agreement. The slope seen in some of the axial profiles is indicative of activity outside the FOV. The results from the other contrast phantom studies are similarly good, but also show the bias in the axial profiles due to activity outside the axial FOV.

Figure 6-35 The radial (left), azimuthal (centre) and axial (right) summed percent difference (data1 = calculated, data2 = measured) profiles for contrast phantom 2, study 1. The profiles are summed over the other binning directions.

The contrast phantoms were also reconstructed using both the measured and calculated randoms corrections. The data from contrast phantom 2, study 1, had the highest randoms fraction (15%). To examine the effects of the randoms correction on image noise, this data was histogrammed into a 60 second frame to reduce the counts and increase the noise. This data was then randoms corrected using both the measured and calculated distributions and then reconstructed. Visually, there were no apparent differences between the images that were randoms corrected by the different methods. Two count profiles were examined: one through a typical transverse slice, and the other along the long axis of the phantom. The profiles from both randoms correction methods were overlaid. Figure 6-36 shows the profile through the transverse slice and Figure 6-37 shows the profile through the coronal slice.
In both cases, there are no significant differences between the profiles, indicating that the calculated correction introduces no detectable biases.

Figure 6-36 Profiles through a transverse slice of the reconstructed image of contrast phantom 2 with measured (pink squares) and calculated (blue diamonds) randoms correction.

Figure 6-37 Profiles through a coronal slice (long axis) of the reconstructed image of contrast phantom 2 with measured (pink squares) and calculated (blue diamonds) randoms correction.

The low randoms fractions in the contrast and cylindrical phantom experiments made them unsuitable for detecting the changes in noise due to the different methods of randoms correction. Therefore, data from an earlier scan of a uniform cylinder, with a significantly higher activity, were analyzed. This study used a uniform cylinder filled with ¹¹C which was scanned for 5 minutes. The resulting average singles rate was 35.8 Mcps, with a trues rate of 2.6 Mcps and a randoms rate of 1.2 Mcps (all values were dead-time corrected). The randoms fraction was 45%, and a total of 350 million delays were counted. Images were reconstructed from the prompts data and after correction by both measured and calculated randoms. At these count rates, much higher than a realistic scanning situation, the dead-time corrections did not give accurate results. For this particular study the calculated randoms were scaled to the measured randoms. The noise in these image data sets was measured to determine the impact of the randoms correction method on noise. The prompts image should have the lowest noise. The image from a noiseless randoms correction should have noise identical to that in the prompts image. The subtraction of a noisy randoms distribution increases the image noise.
The image noise was measured using 16 ROIs of 2 x 2 voxels placed at equal radial distances in one slice, as shown in Figure 6-38. For the randoms subtracted images, the noise was calculated as the ratio of the standard deviation in the ROIs of the corrected image to the mean of the ROIs in the prompts image, to account for the effect of subtracting counts on the mean value.

Figure 6-38 The placement of the ROIs on the hot uniform cylindrical phantom image. Note the image artifacts in the axial direction.

The resulting measured noise values in an off-centre slice (slice 24) are listed in Table 6-13. For this slice, the noise in the calculation corrected image is essentially identical to that in the prompts, while the image corrected by the measured delays had greater noise.

Table 6-13 The measured noise in slice 24 of the hot cylinder when the images are randoms corrected by different methods.

Randoms Correction Method        Noise
prompts only (no subtraction)    4.8%
prompts - measured delays        5.7%
prompts - calculated randoms     4.7%

While this would appear to show the noise benefits of the calculated randoms correction, the results for all the slices were not the same. Figure 6-39 shows the noise in all the slices of the hot cylinder for the different randoms corrections, measured using the same ROIs. In most of the slices, the delayed-corrected data shows the largest measured noise. However, for some slices the delayed-corrected noise is inexplicably lower than that from the prompts and calculation subtracted data. The cause of this discrepancy is likely related to the dead-time induced image artifacts, which are apparent in Figure 6-38. These visual artifacts were not seen in a later study which was identical, except that the activity had decayed and the count rates were significantly lower.
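The ROI noise measure described above can be sketched as follows. Pooling the voxel values from all ROIs before taking the standard deviation is an assumption about the exact procedure; the 2 x 2 voxel ROIs are as described in the text:

```python
import numpy as np

def roi_noise(corrected_slice, prompts_slice, roi_corners):
    """Percent noise: s.d. of the ROI voxel values in the corrected
    image divided by the mean of the same ROIs in the prompts image,
    accounting for the counts removed by the randoms subtraction.
    roi_corners lists the (row, col) corners of 2x2-voxel ROIs."""
    corr = np.asarray(corrected_slice, dtype=float)
    prom = np.asarray(prompts_slice, dtype=float)
    corr_vals = np.concatenate([corr[r:r + 2, c:c + 2].ravel()
                                for r, c in roi_corners])
    prom_vals = np.concatenate([prom[r:r + 2, c:c + 2].ravel()
                                for r, c in roi_corners])
    return 100.0 * np.std(corr_vals) / np.mean(prom_vals)
```

Normalizing by the prompts mean rather than the corrected mean keeps the three corrections in Table 6-13 on a common scale, since subtraction lowers the mean of the corrected images.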
As well, although the randoms fraction in this data is high, the noise is low due to the large number of measured delays, so the variations in the measured noise values may be simply random. However, in most slices the noise in the calculation subtracted images is usually close to that in the prompts, irrespective of the noise in the delays subtracted images.

Figure 6-39 The measured noise in the slices of the hot phantom for different randoms corrections.

6.4.4 Animal Study

The results from the mouse study are listed in Table 6-14. Again, the calculation reproduces the measured randoms rates to within 0.4%. The trues rates agree to within 5%.

Table 6-14 Measured and calculated average count rates from the two mouse study. Values are measured (calculated).

Total Delays (Mcts)   Singles (kcps)   Trues (kcps)   Delays (kcps)     Delays % diff   Randoms Fraction (%)
5.9                   618 (618)        37.1 (39.0)    0.539 (0.541)     -0.4            1.4

The measured and calculated central slice and summed slice sinograms are shown in Figure 6-40 and Figure 6-41. Again, the general features of the measured randoms sinograms are well reproduced. The percent difference sinogram for the summed slice sinogram shows only minor differences, mainly from the effects of random detector efficiency variations.

Figure 6-40 Measured (left), calculated (centre) and percent difference (right) randoms sinograms for the central slice of the two mouse study. Note that the measured and calculated sinograms do not have the same scale.
Figure 6-41 Measured (left), calculated (centre) and percent difference (right) randoms sinograms for the summed slice of the two mouse study. Note that the measured and calculated sinograms do not have the same scale.

The radial, azimuthal and axial summed count profiles are shown in Figure 6-42 and the summed percent difference profiles are shown in Figure 6-43. Again, the overall fit is very good. The small slope seen in the percent difference axial profiles is indicative of a small amount of radioactivity outside the FOV (see Figure 6-8).

Figure 6-42 The radial (left), azimuthal (centre) and axial (right) summed count profiles of the measured (blue) and calculated (red) randoms distributions from the two mouse study.

Figure 6-43 The radial (left), azimuthal (centre) and axial (right) summed percent difference (data1 = calculated, data2 = measured) profiles for the mice study.

The calculated FoMs for the sinograms in this study are given in Table 6-15. In the central slice sinogram, χ²/N ≈ 1, so the FoMs are dominated by Poisson noise. For the summed slice sinogram, χ²/N > 1, so the low values of the NMSE and sd(%diff) and the high value of the 2D correlation coefficient indicate the biases are very small.

Table 6-15 The values of the FoMs between measured and calculated randoms distributions in the two mouse study.

slice     NMSE (%)   sd(%diff) (%)   χ²/N   r2D (%)
central   4.8        13.5            1.06   67.5
summed    0.37       3.8             2.55   95.9

The mice study data was then corrected by both the measured and calculated randoms, and images were reconstructed. Profiles through both images are shown in Figure 6-44 and Figure 6-45.
These profiles show good agreement, indicating that the calculated randoms correction does not introduce a significant bias into the reconstructed images.

Figure 6-44 Profiles through a transverse slice of the reconstructed mice image using measured (pink squares) and calculated (blue diamonds) randoms correction.

Figure 6-45 Profiles through a coronal slice (long axis) of the reconstructed mice image using measured (pink squares) and calculated (blue diamonds) randoms correction.

The measured and calculated randoms sinograms from the mouse study were also reconstructed into images of the randoms distributions using FORE and 2D-FBP. Representative transverse and coronal slices of the measured, calculated and subtracted images are shown, respectively, in Figure 6-46 and Figure 6-47. They show an excellent agreement in the overall shape of the randoms image. However, the randoms image reconstructed from the calculated sinograms is far less noisy and has higher contrast than the image from the measured randoms. When the images are subtracted, the resulting image shows no structure, only noise, for this slice. This indicates that there is no detectable bias between the methods in this example. The coronal images likewise agree, but show some possible reconstruction artifacts.

Figure 6-46 Reconstructed randoms central transverse slice images from measured (left) and calculated (centre) randoms sinograms. Also shown is the subtraction image (right).

Figure 6-47 Reconstructed randoms coronal images from measured (left) and calculated (centre) randoms sinograms. Also shown is the subtraction image (right).
Chapter 7 Discussion and Summary

7.1 Discussion of Results

7.1.1 Chapter Overview

Chapter Six described the studies performed to test the accuracy of the model-based randoms correction against measured randoms distributions from phantom and animal scans. The results were analysed by comparing measured and calculated randoms rates and randoms distributions. As well, images reconstructed from data corrected using measured and calculated randoms were compared. Chapter Seven provides a discussion of the results of the validation tests and a summary of this work. The chapter begins with a discussion of the results of the validation testing. Then, possible improvements to the model-based calculation method are described. Next, suggestions for future work using this method are given, including a discussion on implementation for other tomographs. The chapter ends with a summary of the accomplishments of this work.

7.1.2 Accuracy of Calculated Global Rates

The randoms calculation reproduced the measured global randoms rates to within 1.4% for essentially all the phantom and animal studies. The accuracy of the calculated randoms rates is almost entirely attributable to scaling by the measured global singles rates. This parameter provides an empirical factor that negates the absolute errors in the calculated solid angle, detector efficiency, photon survival and other parts of the singles calculation by forcing the global singles rates to be correct. Nonetheless, the accuracy of the calculated randoms rates also depends on the computation of the randoms from the singles, and on the dead-time and decay corrections. The corrections used in the randoms calculation were found to give accurate results for studies with singles rates < 9.6 Mcps and dead-times < 63%; higher singles rates and dead-times were not studied. These rates are already much higher than those in the representative animal study, which had an average singles rate of only 618 kcps over a three hour scan. However, the decay correction was only accurate when the LSO background singles rate was taken into account.
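The decay-correction point above can be illustrated with a toy helper: a constant LSO intrinsic background must be removed before decay-correcting the measured singles rate, otherwise the correction is biased when the source activity is low. The function and its arguments are assumptions for illustration; the MicroPET's internal correction is not reproduced here:

```python
import math

def decay_corrected_singles(measured_cps, lso_background_cps, t_s, half_life_s):
    """Source singles rate referred back to t=0: subtract the constant
    LSO background first, then divide by the radioactive decay factor."""
    decay = math.exp(-math.log(2.0) * t_s / half_life_s)
    return (measured_cps - lso_background_cps) / decay
```

Without the subtraction, the constant background is wrongly amplified by 1/decay at late times, mimicking the low-activity bias described above.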
However, the decay correction was only accurate when the L S O background singles rate was taken into account. 194 The only large discrepancy in randoms rates was for the 12 hour frame of the line source study (4.7%). The likely cause of this disagreement is an error in the dead-time correction factor calculated by the MicroPET for scans of such long duration. The dead-time correction factor value is derived from the average singles rate over the entire frame duration. The dead-time correction calculated by the MicroPET was earlier found to have problems when the duration of the time frame is much longer than the half-life of the radionuclide [109]. Since animal studies generally do not have long frame durations, this is not problematic. One possible factor could be the background singles from L S O background, which has the largest effect when the source activity is low [103]. It is unclear whether the MicroPET calculation takes this into account. A newer version of the MicroPET dead-time correction may improve this aspect of its performance. The calculated trues rates were compared to the measured rates to assess the accuracy of the model-based calculation in reproducing the magnitude of the trues. The differences between measured and calculated trues rates ranged from <1% to 32%. The error in the trues rates for the animal study was only 5%. Not surprisingly, the largest errors were for point sources near the axial ends of the tomograph. This was compounded by the small values of trues rates near axial edge; small absolute differences in rates translate into large relative differences. Stadies with point sources near axial edge in water were especially inaccurate because photons have the longest path in attenuating material and the effects of attenuation and scatter are the greatest. As well, scatter has a larger effect on coincidence events than singles events. Activity outside the A F O V also causes significant error in the calculated trues rate. 
Since the present photon detection model is not optimized for calculating trues, the accuracy is not expected to be as good as that for the randoms. Specifically, the calculation does not take the path of the annihilation partner photons into detailed account, especially near the axial edge of the tomograph. As well, the exact detector(s) that the partner photons hit are not calculated; thus, the crystal efficiency variations cannot be included in the trues calculation. Furthermore, any inaccuracy in the photon detection model is compounded by the required detection of two photons, and the scaling to the measured singles rates enhances this error for the trues. For example, an error in the estimation of the detection efficiency is squared for the detection efficiency of two photons. Taking these issues into account, it was expected that the accuracy of the calculated trues would be ~10-20%, and worst near the edge of the axial FOV. Future improvements in the photon detection model (Section 7.2) could potentially lead to a more accurate calculation of the trues.

7.1.3 Accuracy of Calculated Randoms Distributions

The accuracies of the spatially variant calculated randoms distributions were also measured for many different source configurations. The studies showed excellent visual agreement between the measured and calculated randoms sinograms; however, the calculated sinograms were completely smooth, while the measured ones were generally very noisy. The calculated sinograms were able to replicate all the features of the measured sinograms: the high frequency "block structure" due to systematic variations in crystal efficiency within a detector block, and the low frequency variations due to the non-uniform singles distribution. The point sources at different positions in the FOV showed a variety of randoms distributions, which the calculation was able to reproduce. The calculation also worked well for the more complex activity and attenuation phantoms.
The most noticeable difference between the measured and calculated sinograms was seen as a few randomly positioned diagonal lines in the percent difference sinograms. These produce the largest discrepancies in the percent difference sinograms. The lines were caused by random variations in the singles efficiencies of individual crystals. The singles calculation models only systematic efficiency variations within the detector blocks. Random differences between individual crystal efficiencies are neither measured nor included in the model. These differences only affect a few pixels; thus, they do not have a major impact on the overall randoms contribution. The low number of counts in the affected pixels, caused by the lower efficiency of the edge crystals, enhances the percent difference seen in the sinograms.

The other noticeable difference between the measured and calculated sinograms is the curved line (or band) of reduced counts ("dip") in the measured randoms at the LORs that correspond to the LORs with the highest count rates in the trues sinograms. This feature is most noticeable in the analysis of the line and point sources in air, where the trues are concentrated in a few LORs. In the radial summed percent difference profiles, this artifact has a magnitude of up to 5-6%. For sources in water, the scatter broadens the trues sinograms and the magnitude of the artifact is reduced. This feature may also be seen in the randoms sinograms of extended sources, such as uniform cylinders, but it is much more diffuse. Nevertheless, this count reduction is not seen in slices of the randoms distribution that contain few counts in the trues distribution (Figure 6-12). This artifact is not reproduced by the randoms calculation, and its cause is not clear. Since the randoms distributions are based on a smooth singles distribution, such a sharp feature is not likely due to the singles distribution.
The possibility of the artifact being caused by dead-time was investigated. A high count study using a low activity 68Ge point source was done at low count rates; the measured randoms sinograms from this study showed the same artifact, ruling out dead-time. However, a detailed Monte Carlo simulation of an identical point source configuration in the MicroPET, done using GATE [108], did not reproduce the artifact [109]. The possibility of this artifact being caused by the tomograph coincidence processor or the histogramming software requires investigation. If this is a problem with the delayed coincidence processing, the calculated correction may be more accurate than the measured correction in studies where this artifact is pronounced.

The comparison of the count and percent difference profiles also showed remarkably good agreement between measurement and calculation for all studies. In the high count 12 hour line source study, the individual radial, azimuthal and axial profiles were directly compared in both the 3D data sets and the rebinned data. In the other studies, the comparison of summed profiles was required to reduce the noise so that biases could be detected. For most studies, the biases in the summed radial profiles were within about ±2-3% across the entire radial FOV. The point source studies had the worst bias in the radial profiles because of the dip artifact (for example, Figure 6-29 and Figure 6-30). The results were also within ±2-3% for the azimuthal and axial profiles, except when activity outside the axial FOV increased the bias in the axial profiles. However, it should be noted that summed profiles could potentially mask some systematic biases.

The calculated figures of merit were of some use in detecting and quantifying the bias. The value of χ²/N identified the studies for which the differences were not purely Poisson, and were therefore due to bias.
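Two such figures of merit can be sketched as follows; the definitions here are standard assumptions and may differ in detail from those defined earlier in the thesis. A reduced χ²-type statistic near 1 indicates differences consistent with Poisson noise, while the NMSE quantifies the overall normalized discrepancy.

```python
import numpy as np

# Sketch of figures of merit for comparing a measured randoms sinogram to a
# calculated one. These are assumed, standard forms, not necessarily the
# exact FoMs used in the thesis.
def chi2_per_bin(measured, calculated):
    """Pearson chi-square per bin; ~1 when differences are purely Poisson."""
    m = np.asarray(measured, dtype=float)
    c = np.asarray(calculated, dtype=float)
    mask = c > 0                        # skip empty bins
    return np.sum((m[mask] - c[mask]) ** 2 / c[mask]) / mask.sum()

def nmse(measured, calculated):
    """Normalized mean square error between the two sinograms."""
    m = np.asarray(measured, dtype=float)
    c = np.asarray(calculated, dtype=float)
    return np.sum((m - c) ** 2) / np.sum(m ** 2)
```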
The magnitude of the bias for these studies was then estimated from the NMSE and other FoMs. For all these studies, the bias measured by the FoMs appeared to be small. The good agreement of the randoms calculation appeared to be independent of object size for the phantoms that were studied. This demonstrates that the scaling to the measured singles rates takes the global effects of object size into account.

7.1.4 Effects of Count Rates and Radioactive Background

Changes in the singles and randoms distributions due to high count rates were not detected in the validation studies. However, changes in the randoms distributions at low count rates were found. These were due to randoms caused by the background activity in the LSO. For low count rate studies, the accuracy of the randoms calculation depended on an accurate measurement of the detector singles rates in the absence of activity in the tomograph FOV. When this effect was included, the calculation produced a randoms distribution in agreement with the measurements.

7.1.5 Effects of Radioactivity outside the AFOV

The effects of radioactivity outside the axial FOV were assessed using uniform cylindrical phantoms. For the large phantom, the activity outside the FOV caused a slope in the axial percent difference profile, which amounted to a bias of ±5% at the edges of the axial FOV (Figure 6-32). The scaling to the singles ensures that the percent difference at the centre slice is ~0%, and the errors are largest at the axial ends. For the small cylinder, the largest axial bias is ±8% at the edges, when 30% of the activity is outside the AFOV. While these are relatively large differences, the total effect on the image is small. The total true and random counts in the end planes are the lowest of any planes because of the triangular shape of the axial counting sensitivity (for example, Figure 6-31, right).
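The triangular axial sensitivity referred to here arises because the number of detector-ring pairs whose average axial position falls on a given image plane decreases linearly toward the end planes. A hypothetical sketch for a small ring count (assuming fully 3D acquisition with no maximum ring difference):

```python
import numpy as np

# Hypothetical sketch of the triangular axial counting-sensitivity profile
# of a fully 3D tomograph: the number of ring pairs contributing to a given
# image plane falls off linearly from the centre plane to the end planes.
# Ring count is illustrative, not the MicroPET geometry.
def axial_sensitivity(n_rings):
    n_planes = 2 * n_rings - 1            # direct + cross planes
    planes = np.zeros(n_planes, dtype=int)
    for r1 in range(n_rings):
        for r2 in range(n_rings):
            planes[r1 + r2] += 1          # plane index = sum of ring indices
    return planes

profile = axial_sensitivity(4)            # -> [1, 2, 3, 4, 3, 2, 1]
```

The end planes are sampled by only a single ring pair, which is why they collect the fewest true and random counts.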
During reconstruction, these planes are scaled by the inverse of the counting sensitivity, so the bias might be enhanced in the images. However, because the resulting image noise in these planes is very high, biases in the randoms correction of this magnitude are likely to be insignificant compared to the noise.

7.1.6 Effects of the Calculated Randoms Correction on Reconstructed Images

For both contrast phantoms and the mouse study, the use of the calculated randoms correction did not introduce any appreciable bias into the reconstructed images, as seen in the profiles. However, there was also no apparent difference in the image noise. The calculated randoms sinograms clearly show much lower noise than the measured randoms, and the reconstructed image of the calculated randoms distribution from the mouse study was also significantly less noisy than the image from the measured randoms (Figure 6-46). Therefore, it was expected that the smoothness of the calculated sinograms should translate into reduced image noise and improved contrast-to-noise levels. Thus, the high activity cylindrical phantom data was used to determine the impact of the calculated randoms correction on image data. This data had a large randoms fraction (45%), so that the randoms contributed more to the image data and the effects of the randoms correction would be more apparent. The results, though showing some improvements, were not easy to interpret because of image artifacts, most likely caused by the high dead-time. Some measurements of image noise were also done on the lower, but still high, count rate scans of the contrast phantoms, with a randoms fraction of 15%. These images showed no dead-time related artifacts, but the measurements showed no discernible change in image noise. The count rates and randoms fractions in these studies are much higher than those attained in typical animal scans.
The short coincidence timing window and high sensitivity of this tomograph result in very low randoms fractions. Therefore, it may be concluded that, for realistic studies performed on this tomograph, the calculated randoms correction would have no discernible impact on the noise. This is not because of inadequacies in the method, but simply due to the small contribution of the randoms to the image data. However, the low noise properties of this method might have a greater impact if used on larger tomographs for whole-body human studies, where the randoms fractions are much higher.

7.2 Possible Improvements to the Single Photon Detection Calculation

7.2.1 Introduction

A number of improvements to the randoms calculation have been suggested by the validation tests. Each improvement could have an impact on the accuracy, but also on the computational efficiency. The importance of each is dependent on the tomograph being modeled and the acquisition parameters used.

7.2.2 Detector Binning

In tomographs with large numbers of crystals, the burden of calculating the solid angle and, especially, the photon survival to each crystal could cause unreasonably large calculation times. Since the singles distribution changes slowly, a coarser sampling of the tomograph could be used, where a single detector bin would represent a few crystals. The calculated singles distribution could be interpolated to the true number of crystals before the block structure is applied and the randoms distributions are calculated.

7.2.3 Detector Solid Angle Calculation

In the model-based calculation, the solid angle is calculated as if all the photons interact at a fixed surface of the detectors. A more accurate calculation of the solid angle could be done by integrating the intersection of a rectangular photon beam with the crystal from the front face to the back of the scintillator block.
This calculation would include the complex effects of varying beam intensity due to attenuation in the crystal. The contributions to multiple crystals and the effects of the block cuts would be included in the calculation. This change would especially improve the accuracy of the calculation near the axial edges of the tomograph, where the photon beam may intersect only a few partial crystals. However, this method of solid angle calculation would greatly increase the computational burden; such an improvement would be added only if the accuracy of the present calculation is found to be inadequate.

An improvement in the solid angle calculation could also be made for the collinear annihilation partner photons used in the calculation of the true coincidences. Presently, the beam spread of the partner photons is not computed, nor is the detection by specific crystals. This leads to errors in the calculated trues rates and distributions. If the individual crystals that detect the partner photons were identified, the crystal efficiency variations could be used in the trues calculation to improve the accuracy.

7.2.4 Broad-beam and Window Fraction Correction Factors

As described in Section 4.6.1, the present photon survival calculation uses a single, average value for the broad-beam correction factor. In general, build-up factors depend on the product μx and should vary for every photon path. Since the photon survival calculation explicitly determines μx, a path-dependent broad-beam correction factor could be easily implemented if necessary. The path-dependent value of this factor may be estimated from (4.24) and (4.25) using the values of μx along the object path found by the photon survival calculation (4.8). This modification is not expected to increase the computational burden. Such a correction would probably be of greatest benefit in the photon detection model for whole-body tomographs, where the photon paths are longer and the build-up factors vary more.
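A minimal sketch of what such a path-dependent correction could look like, assuming a generic linear build-up model B(μx) = 1 + a·μx applied to the narrow-beam survival exp(−μx); the actual estimate in this work would come from equations (4.24) and (4.25), and the coefficient a here is purely illustrative.

```python
import math

# Hypothetical path-dependent broad-beam survival: narrow-beam attenuation
# exp(-mu*x) multiplied by a build-up factor B(mu*x) = 1 + a*mu*x. The
# linear form and the coefficient `a` are illustrative stand-ins, not the
# correction derived in Section 4.6.1.
def broad_beam_survival(mu_x, a=0.5):
    buildup = 1.0 + a * mu_x
    return buildup * math.exp(-mu_x)

# The broad-beam survival always exceeds the narrow-beam value, and the
# difference grows with path length, which is why a single average factor
# becomes less adequate for the longer paths in whole-body tomographs.
```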
A path-dependent detection efficiency correction factor might also be investigated. Since the detector efficiency calculation explicitly determines μx for the photon path in each crystal, the detection efficiency correction factor could be made to vary with μx, mainly to account for changes in the Compton-absorption probability. Again, this would likely not significantly increase the computational burden, but could marginally increase the accuracy of the detection model for all tomographs. This could be implemented in conjunction with changes in the solid angle calculation (Section 7.2.3).

7.2.5 Single Photon Scatter Distribution

The present model-based calculation assumes that the amount of single photon scatter and its effects on the singles distribution are small. In larger tomographs this is not true, and a more detailed correction for the effects of single photon scatter on the singles distribution may be necessary. One possible method would be to calculate the distribution of object-scattered single photons about their unscattered path (Section 4.6.3). While this might improve the accuracy of the calculated singles distribution, there would probably be an enormous penalty in the required computation times because of the complexity of this calculation.

7.2.6 Crystal Efficiency Variations

The present single photon detection model uses a simple model of detector efficiencies that takes into account only systematic variations, but not individual detector efficiencies. A more accurate method would use measured individual crystal singles efficiencies. These values may be extracted from the component-based normalization data or from measuring the energy spectra of individual crystal block elements. Using individual singles efficiencies in the calculation would reduce the effects of the random efficiency variations on the calculated randoms distributions. Implementing this change in the algorithm would be trivial, and the impact on computation times would be negligible.
However, measuring the individual singles efficiencies could be problematic and may require extensive measurements and analysis. As well, the detector efficiency model would require regular updating.

7.2.7 Radioactivity Outside of the Axial Field of View

The randoms calculation assumes that all of the radioactivity distribution that contributes to the measured singles rates is present in the reconstructed radioactivity image. The contribution from singles originating in radioactivity that is not in the image is not calculated. In practice, this occurs when significant radioactivity is present outside the axial field of view. The effect of activity outside the AFOV is greater for tomographs with larger ring diameters. The calculated randoms correction could easily account for this if multiple bed positions are scanned: the reconstructed image data from the multiple views would be knitted together, and activity sample points would be generated in this extended volume. The singles distribution would then be calculated, and the randoms distribution computed, for each bed position separately. If scans are done using only a single bed position, other strategies would be required.

7.3 Future Directions for this Work

7.3.1 Introduction

This work demonstrates that the model-based randoms calculation is a valuable and viable method for performing accurate and noiseless randoms correction. Further testing might be done to determine the performance of the method under a greater range of conditions. As well, the present method may be adapted for use on other tomographs.

7.3.2 Other Measurements

While the previous testing was able to establish the basic accuracy of the model-based calculation, further tests may be helpful. Given the short coincidence timing window and high sensitivity of the MicroPET, it was difficult to achieve high randoms fractions. Higher randoms fractions could be reached by using higher activity sources; however, detector dead-time may become an issue.
Higher randoms fractions could also be achieved simply by using a longer coincidence timing window. For example, studies acquired using a 12 ns coincidence timing window would have double the random coincidences compared to acquisition with the standard 6 ns window. This would allow testing of the method in the presence of higher randoms fractions and would better show the effects of the calculated correction on noise. However, this would be somewhat artificial; the calculated randoms correction would be better tested by adapting it for use on a whole body scanner and testing it on such a system.

The validation tests only involved measuring static radionuclide distributions, but the method is also applicable to evolving tracer distributions. The application of this randoms correction should be tested for dynamic studies by calculating the randoms corrections for individual time frames.

7.3.3 Extension to Other Tomographs

The testing and validation of this method has only been performed on one type of small animal scanner. In theory, this method is applicable to any coincidence imaging system. A natural extension of this work would be to adapt the method and test it on other tomographs. The geometric aspects of the model-based randoms calculation should scale with the tomograph size. The noiseless character of this randoms correction would be of the greatest benefit when used for whole body imaging of human subjects. Therefore, this method should be applied to whole body PET imaging. A number of issues would need to be addressed when adapting this method for larger scanners and subjects. First, the singles and randoms sensitivities are strongly dependent on the ring diameter and axial FOV [109]. Activity outside the axial FOV also causes more singles in whole body scanners, which may be problematic for this method. Also, scatter and attenuation become more significant when imaging larger subjects.
As well, the increased ring diameter increases the displacement of the scattered single photons, which may reduce the validity of using a simplistic broad-beam model for single photon scatter. These, and other, differences may require modifications to the single photon detection model. Potential solutions to some of these issues are described in Section 7.2.

7.3.4 Other Uses for this Work

Although this work has focussed on randoms correction, a number of derivative uses for the methods and tools developed during the research have been suggested. These include expanded uses for the model-based simulation and for the analysis tools that were developed.

This randoms correction method uses a newly developed model-based simulation. In the present application, the simulation is data driven; that is, it is based on, and scaled to, measured data. However, this model-based simulation algorithm could potentially be used for other simulation research, in place of slower Monte Carlo simulations for some applications. Much more work would be required to improve the accuracy of the trues calculation and to produce accurate results when scaling to absolute values of the activity object rather than to measured count rates.

The CODEX suite of analysis tools (Section 5.1.3) was developed as part of the testing and validation work for the randoms correction calculation. These tools could potentially be used for visualization and analysis with any image data set, measured or simulated. Since they employ a graphical user interface, they are simple to use. These tools were developed and coded in MATLAB, so they could also be easily ported to multiple computer platforms. There has already been an expression of interest in incorporating the CODEX tools with another set of software used in imaging research.

7.4 Summary and Conclusion

A new method of randoms correction for PET has been presented.
This is a singles-based method that uses the calculated singles distribution from a model-based simulation involving the preliminary reconstructed radioactivity and attenuation data. The calculated singles are scaled to the measured global singles rate. The spatially variant randoms distributions are calculated from the singles, decay corrected, and subtracted from the measured prompts sinograms. This method is a logical extension of the model-based calculations presently used in scatter corrections and normalization.

The calculated randoms correction has a number of benefits over the measured delayed-coincidence subtraction method. First, the model-based correction requires no measurement of the delayed coincidence channel. This reduces the load on the coincidence processor during acquisition and potentially decreases the coincidence dead-time. As well, since no delayed coincidences are collected, the size of the data set is reduced by a factor of two if delayed coincidence data is stored separately in histogram mode. Since this new method makes no use of delayed coincidence events, these benefits are fully realized. In addition, other singles-based randoms corrections require the measurement, storage and processing of detector singles events; the model-based method requires only the measured global average singles rate for scaling. This again reduces the measurement and storage requirements compared to some other singles-based methods. As well, since the singles rates are calculated for every crystal, the model-based method avoids the biases in singles-based methods that use the measured singles from large groups of detectors.

Another anticipated benefit is that the calculated randoms distributions should be smooth. This property leads to less noise in the randoms corrected data and, consequently, less noise in the reconstructed images. Inspection of the calculated randoms sinograms clearly shows that they are smooth.
The images reconstructed from the calculated randoms sinograms are much less noisy than images from the corresponding measured randoms sinograms. However, measurements of the noise in reconstructed images did not show any significant reduction when the calculated randoms correction was used, because the randoms fraction of the data was too low.

The accuracy and reproducibility of the calculated randoms distribution is also an important issue. Comparison between the measured and calculated randoms distributions showed only minimal bias. In general, the bias was much smaller than the noise in the measured distributions. The accuracy of the calculated randoms was also found to be nearly independent of the values of the empirically measured calculation parameters. However, it was also found that the inclusion of a correction for the LSO background was necessary for accurate results, especially for low activity studies. The reproducibility of the calculated randoms distributions depends on the number of sample points used. A large set of sample points also minimizes any biases caused by the noise in the preliminary activity image.

Thus, the model-based randoms correction was shown to work for a small animal scanner. This method could also bring its advantages to human, whole body PET imaging. The smooth randoms distributions would reduce the noise in the corrected image data. As well, the reduced data acquisition and storage demands would lessen the load on the acquisition and processing hardware. In general, this randoms correction method could potentially benefit all PET imaging.

BIBLIOGRAPHY

[1] Webb S, ed. The Physics of Medical Imaging. Institute of Physics Publishing, Bristol and Philadelphia, 1988.
[2] Cho Z-H, Jones JP, Singh M. Foundations of Medical Imaging. Wiley, New York, 1993.
[3] Hounsfield GN. A method of, and apparatus for, examination of a body by radiation such as X-ray or gamma radiation. British Patent No. 1283915, London, 1972.
[4] Ter-Pogossian MM, Phelps M, Hoffman EJ, et al. A positron emission transaxial tomograph for nuclear imaging (PETT). Radiology. 114:89-98, 1975.
[5] Pais A. Niels Bohr's Times. Oxford University Press, p. 27, 1991.
[6] Pointon BW. SPECT: Physics and Instrumentation for Technologists. Canadian Association of Medical Radiation Technologists (CAMRT), Ottawa, 1997.
[7] Ter-Pogossian MM. The origins of positron emission tomography. Semin Nucl Med. 12:140-149, 1992.
[8] Ruth TJ. Private communication.
[9] Friedlander G, Kennedy JW, Macias ES, Miller JM. Nuclear and Radiochemistry, 3rd Ed. Wiley, New York, 1981. pp. 78, 467-469.
[10] Charlton M, Humberson JW. Positron Physics. Cambridge University Press, Cambridge, UK, 2001. pp. 3-11, 264-274.
[11] Levin CS, Hoffman EJ. Calculation of positron range and its effect on the fundamental limit of positron emission tomography system spatial resolution. Phys Med Biol. 44:781-799, 1999.
[12] Duxbury DM, Ott RJ, Flower MA, Erlandsson K, Reader AJ. Preliminary results from the new large-area PETRRA positron camera. IEEE Trans Nucl Sci. 46:1050-1054, 1999.
[13] Bailey DL, Young H, Bloomfield PM, Meikle SR, Glass D, Myers MJ, Spinks TJ, Watson CC, Luk P, Peters AM, Jones T. ECAT ART - a continuously rotating PET camera: performance characteristics, initial clinical studies, and installation considerations in a nuclear medicine department. Eur J Nucl Med. 24:6-15, 1997.
[14] Patton JA, Turkington TG. Coincidence imaging with a dual-head scintillation camera. J Nucl Med. 40:432-441, 1999.
[15] Zanzonico P. Positron emission tomography: a review of basic principles, scanner design and performance, and current systems. Semin Nucl Med. 34:87-111, 2004.
[16] Humm JL, Rosenfeld A, Del Guerra A. From PET detectors to PET scanners. Eur J Nucl Med Mol Imaging. 30:1574-1597, 2003.
[17] Wienhard K, et al.
The ECAT HRRT: performance and first clinical application of the new high resolution research tomograph. IEEE Trans Nucl Sci. 49:104-110, 2002.
[18] Laforest R, Longford D, Siegel S, Newport D, Yap J. Performance evaluation of the MicroPET-Focus 120. IEEE Nucl Sci Symp Conf Rec. 5:2965-2969, 2004.
[19] Melcher CL. Scintillation crystals for PET. J Nucl Med. 41:1051-1055, 2000.
[20] Adapted from refs 9, 10 and from Cherry SR, Sorenson JA, Phelps ME. Physics in Nuclear Medicine, 3rd Edition. Saunders, Philadelphia, PA, 2003. p. 340.
[21] Melcher CL, Schweitzer JS. Cerium-doped lutetium oxyorthosilicate: a fast and efficient new scintillator. IEEE Trans Nucl Sci. NS-39:502-505, 1992.
[22] Watson CC, Casey ME, Eriksson L, Mulnix T, Adams D, Bendriem B. NEMA NU 2 performance tests for scanners with intrinsic radioactivity. J Nucl Med. 45:822-826, 2004.
[23] Yamamoto S, Horii H, Hurutani M, Matsumoto K, Senda M. Investigation of single, random and true counts from natural radioactivity in LSO-based clinical PET. Ann Nucl Med. 19:109-114, 2005.
[24] Casey ME, Nutt R. A multi-slice two-dimensional BGO detector system for PET. IEEE Trans Nucl Sci. NS-33:760-763, 1986.
[25] National Electrical Manufacturers Association. NEMA Standards Publication NU 2-2001: Performance Measurements of Positron Emission Tomographs. Rosslyn, VA: National Electrical Manufacturers Association, 2001.
[26] Daube-Witherspoon ME, Karp JS, Casey ME, et al. PET performance measurements using the NEMA NU 2-2001 standard. J Nucl Med. 43:1398-1409, 2002.
[27] Moses WW, Derenzo SE. Empirical observation of resolution degradation in positron emission tomographs using block detectors [abstract]. J Nucl Med. 34(suppl):101P, 1993.
[28] Cherry SR, Sorenson JA, Phelps ME. Physics in Nuclear Medicine, 3rd Edition. W.B. Saunders Company, Philadelphia, PA, 2003. pp. 325-359.
[29] Strother SC, Casey ME, Hoffman EJ.
Measuring PET scanner sensitivity: relating countrates to image signal-to-noise ratios using noise equivalent counts. IEEE Trans Nucl Sci. 37:783-788, 1990.
[30] Rahmim A, Lenox M, Reader AJ, Michel C, Burbar Z, Ruth TJ, Sossi V. Statistical list-mode image reconstruction for the High Resolution Research Tomograph. Phys Med Biol. 49:4239-4258, 2004.
[31] Fahey F. Data acquisition in PET imaging. J Nucl Med Technol. 30:39-49, 2002.
[32] Defrise M, Kinahan P. Data acquisition and image reconstruction for 3D PET. Chapter 2 in Bendriem B, Townsend DW, eds. The Theory and Practice of 3D PET. Kluwer Academic Publishers, Boston, 1998. pp. 11-53.
[33] Bruyant PP. Analytic and iterative reconstruction algorithms in SPECT. J Nucl Med. 43:1343-1358, 2002.
[34] Defrise M, Kinahan PE, Michel C. Image reconstruction algorithms in PET. Chapter 4 in Valk PE, Bailey DL, Townsend DW, Maisey MN, eds. Positron Emission Tomography: Basic Science and Clinical Practice. Springer, London, 2003.
[35] Shepp LA, Vardi Y. Maximum likelihood reconstruction for emission tomography. IEEE Trans Med Imaging. MI-1:113-122, 1982.
[36] Lange K, Carson R. EM reconstruction algorithms for emission and transmission tomography. J Comput Assist Tomogr. 8:306-316, 1984.
[37] Hudson HM, Larkin RS. Accelerated image reconstruction using ordered subsets of projection data. IEEE Trans Med Imag. 13:601-609, 1994.
[38] Politte DG, Snyder DL. Corrections for accidental coincidences and attenuation in maximum-likelihood image reconstruction for positron-emission tomography. IEEE Trans Med Imaging. 10:82-89, 1991.
[39] Hogg D, Thielemans K, Mustafovic S, Spinks TJ. A study of bias for various iterative reconstruction methods in PET. 2002 IEEE Nuclear Science Symposium Conference Record. 3:1519-1523, 2002.
[40] Qi J, Leahy R. Resolution and noise properties of MAP reconstruction for fully 3-D PET. IEEE Trans Med Imag. 19:493-506, 2000.
[41] Kinahan PE, Rogers JG. Analytic 3D image reconstruction using all detected events. IEEE Trans Nucl Sci. 36:964-968, 1990.
[42] Defrise M, Kinahan PE, Townsend DW, Michel C, Sibomana M, Newport D. Exact and approximate rebinning algorithms for 3D PET data. IEEE Trans Med Imag. 16:145-158, 1997.
[43] Daube-Witherspoon ME, Muehllehner G. Treatment of axial data in three-dimensional positron emission tomography. J Nucl Med. 28:1717-1724, 1987.
[44] Zaidi H, Hasegawa B. Determination of the attenuation map in emission tomography. J Nucl Med. 44:291-315, 2003.
[45] Jones W, Vaigneur K, Young J, et al. The architectural impact of single photon transmission measurements on full-ring 3D positron tomography. IEEE Nucl Sci Symp Med Imaging Conf Rec. 2:1026-1030, 1995.
[46] Watson CC, Jones WF, Brun T, et al. Design and performance of a single photon transmission measurement for the ECAT ART. IEEE Nucl Sci Symp Med Imaging Conf Rec. 2:1366-1370, 1997.
[47] Kinahan PE, Townsend DW, Beyer T, et al. Attenuation correction for a combined 3D PET/CT scanner. Med Phys. 25:2046-2053, 1998.
[48] Xu EZ, Mullani NA, Gould KL, et al. A segmented attenuation correction for PET. J Nucl Med. 36:1680-1688, 1991.
[49] Meikle SR, Dahlbom M, Cherry SR. Attenuation correction using count-limited transmission data in positron emission tomography. J Nucl Med. 34:143-150, 1993.
[50] Bailey D, Gilardi MC, Grootoonk S, et al. Quantitative procedures in 3D PET. In Bendriem B, Townsend DW, eds. The Theory and Practice of 3D PET. Kluwer Academic Publishers, Boston, 1998. pp. 55-109.
[51] Hoffman EJ, Guerrero TM, Germano G, Digby WM, Dahlbom M. PET system calibrations and corrections for quantitative and spatially accurate images. IEEE Trans Nucl Sci. 36:1108-1112, 1989.
[52] Bai B, Li Q, Holdsworth CH, et al. Model-based normalization for iterative 3D PET image reconstruction. Phys Med Biol. 47:2773-2784, 2002.
[53] Badawi R, Lodge MA, Marsden P. Algorithms for calculating detector efficiency normalization coefficients for true coincidences in 3D PET. Phys Med Biol. 43:189-205, 1998.
[54] Casey ME, Gadagkar H, Newport D. A component based method for normalization in volume PET. In Grangeat P, Amans J-L, eds. Proceedings of the 3rd International Conference on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, Aix-les-Bains, 1995. pp. 67-71.
[55] Zaidi H. Comparative evaluation of scatter correction techniques in 3D positron emission tomography. Eur J Nucl Med. 27:1813-1826, 2000.
[56] Bergstrom M, Eriksson L, Bohm C, Blomqvist G, Litton J. Correction for scattered radiation in a ring detector positron camera by integral transformation of the projections. J Comput Assist Tomogr. 7:42-50, 1983.
[57] Shao L, Karp JS. Cross-plane scatter correction - point source deconvolution in PET. IEEE Trans Med Imaging. 10:234-239, 1991.
[58] Bailey D, Meikle SR. A convolution-subtraction scatter correction method for 3D PET. Phys Med Biol. 39:411-424, 1994.
[59] Ollinger JM. Model-based scatter correction for fully 3-D PET. Phys Med Biol. 41:153-176, 1996.
[60] Watson CC, Newport D, Casey ME. A single scatter simulation technique for scatter correction in 3D PET. In Three-Dimensional Image Reconstruction in Radiation and Nuclear Medicine. Kluwer Academic Publishers, Netherlands, 1996. pp. 255-268.
[61] Watson CC. New, faster, image-based scatter correction for 3D PET. IEEE Trans Nucl Sci. 47:1587-1594, 2000.
[62] Daube-Witherspoon ME, Carson RE. Unified deadtime correction model for PET. IEEE Trans Med Imaging. 10:267-275, 1991.
[63] Hoffman EJ, Huang SC, Phelps ME. Quantitation in positron emission tomography: 1. Effect of object size. J Comput Assist Tomogr. 3:299-308, 1979.
[64] Hoffman EJ, Huang SC, Plummer D, Phelps ME. Quantitation in positron emission tomography: 6. Effect of nonuniform resolution.
J Comput Assist Tomogr. 5:987-999, 1982.
[65] Rousset OG, Ma Y, Evans AC. Correction for partial volume effects in PET: principle and validation. J Nucl Med. 39:904-911, 1998.
[66] Pajevic S, Daube-Witherspoon ME, Bacharach SL, Carson RE. Noise characteristics of 3-D and 2-D PET images. IEEE Trans Med Imag. 17:9-23, 1998.
[67] Hoffman EJ, Huang SC, Phelps ME, Kuhl DE. Quantitation in positron emission computed tomography: 4. Effect of accidental coincidences. J Comput Assist Tomogr. 5:391-400, 1981.
[68] Badawi R, Marsden PK, Cronin BF, Sutcliffe JL, Maisey MN. Optimization of noise-equivalent count-rates in 3D PET. Phys Med Biol. 41:1755-1776, 1996.
[69] Karp JS, Kinahan PE, Muehllehner G. Effect of increased axial field of view on the performance of a volume PET scanner. IEEE Trans Med Imag. 12:299-306, 1993.
[70] Reader AJ, Zhao S, Julyan P, Hastings DL, Zweit J. Adaptive correction of scatter and random events for 3D backprojected PET data. IEEE Trans Nucl Sci. 48:1350-1356, 2001.
[71] Sashin D, Spinks TJ, Grootoonk S, Jones T. Smoothing of random data to reduce noise in 3D PET. Conference Record of the 1992 IEEE Nuclear Science Symposium and Medical Imaging Conference. 3:973-975, 1992.
[72] Hoffman E, Huang S-C, Phelps M, Kuhl D. Quantitation in positron emission computed tomography: 4. Effect of accidental coincidences. J Comput Assist Tomogr. 5:391-400, 1981.
[73] Casey ME, Hoffman EJ. Quantitation in positron emission computed tomography: 7. A technique to reduce noise in accidental coincidence measurements and coincidence efficiency calibration. J Comput Assist Tomogr. 10:845-850, 1986.
[74] Badawi R, Miller M, Bailey D, Marsden P. Randoms variance reduction in 3D PET. Phys Med Biol. 44:941-954, 1999.
[75] Mumcuoglu EU, Leahy RM, Cherry SR, Zhou Z. Fast gradient-based methods for Bayesian reconstruction of transmission and emission PET images. IEEE Trans Med Imaging. 13:681-701, 1994.
[76] Mumcuoglu EU, Leahy RM, Cherry SR. Bayesian reconstruction of PET images: methodology and performance analysis. Phys Med Biol. 41:1777-1807, 1996.
[77] Brasse D, Comtat C, Trebossen R, Tararine M, Nguyen QT, Bendriem B. Correction for the random coincidences in dual-head gamma camera imaging. IEEE Trans Nucl Sci. 48:864-871, 2001.
[78] Vandenberghe S, Asseler YD, Koole M, Bouwens L, Van de Walle R, Lemahieu I, Dierckx RA. Randoms correction for gamma camera based PET list-mode reconstruction. Conference Record IEEE Medical Imaging Conference 2001. San Diego, CA; 4:2031-2035, 2001.
[79] Smith RJ, Karp JS. A practical method for randoms subtraction in volume imaging PET from detector singles countrate measurements. IEEE Trans Nucl Sci. 43:1981-1987, 1996.
[80] Stearns CW, McDaniel DL, Kohlmyer SG, Ami PR, Geiser BP, Shanmugam V. Random coincidence estimation from single events rates on the Discovery ST PET/CT scanner. Proceedings of the 2003 IEEE Nuclear Science Symposium and Medical Imaging Conference. Portland, OR. IEEE 5:3067-3069, 2003.
[81] Rokitta O, Casey ME, Wienhard K, Pietrzyk U. Random correction for positron emission tomography using singles counts rates. Proceedings of the IEEE Nuclear Science Symposium and Medical Imaging Conference. Lyon, France: IEEE 3:17/37-17/40, 2000.
[82] Brasse D, Kinahan PE, Lartizien C, Comtat C, Casey M, Michel C. Correction methods for random coincidences in fully 3D whole-body PET: impact on data and image quality. J Nucl Med. 46:859-867, 2005.
[83] Comtat C, Kinahan PE, Defrise M, Michel C, Townsend DW. Simulating whole-body PET scanning with rapid analytic methods. Proceedings of the 1999 IEEE Nuclear Science Symposium and Medical Imaging Conference. Seattle, WA. IEEE 3:1260-1264, 1999.
[84] For example, Siegel JA, Wu RK, Maurer AH. The buildup factor: effect on absolute volume determination. J Nucl Med. 26:390-394, 1985, and references therein.
[85] Cherry SR, Dahlbom M, Hoffman EJ. Three dimensional positron emission tomography using a conventional multi-slice tomograph without septa. J Comput Assist Tomogr. 15:655-668, 1991.
[86] Jonsson C, Larsson SA. A spatially varying Compton scatter correction for SPECT utilizing the integral Klein-Nishina cross section. Phys Med Biol. 46:1767-1783, 2001.
[87] Thompson CJ. The problem of scatter correction in positron volume imaging. IEEE Trans Med Imaging. 12:124-132, 1993.
[88] Sossi V, Krzywinski MI, Cohen P, Mankoff DA, DeRosario J, Ruth TJ. Effect of count rate on contrast in the ADAC MCD camera. IEEE Trans Nucl Sci. 46:1907-1911, 1999.
[89] Sossi V, Pointon B, Boudoux C, et al. NEMA NU-2 2000+ performance measures on an ADAC MCD camera. IEEE Trans Nucl Sci. 48:1518-1523, 2001.
[90] Sossi V, Pointon B, Cohen P, et al. Effect of shielding the radioactivity outside the field of view on image quality. IEEE Trans Nucl Sci. 47:1561-1566, 2000.
[91] Pointon BW, Sossi V. Towards model-based scatter correction for 3D dual head coincidence imaging. J Nucl Med. 44(5):275P, 2003.
[92] Pointon B, Sossi V. Towards model-based randoms correction for 3-D dual head coincidence imaging. Conference Record IEEE Medical Imaging Conference 2003. 4:2799-2803, 2003.
[93] read_asipro.m, © 2003 Raymond F. Muzic, Jr.
[94] Sossi V, Ruth T. MicroPET imaging: in vivo biochemistry in small animals. J Neural Transm. 112:319-330, 2005.
[95] Cherry SR, Shao Y, Silverman RW, Meadors K, Siegel S, Chatziioannou A, Young JW, Jones W, Moyers JC, Newport D, Boutefnouchet A, Farquhar TH, Andreaco M, Paulus MJ, Binkley DM, Nutt R, Phelps ME. MicroPET: a high resolution PET scanner for imaging small animals. IEEE Trans Nucl Sci. 44:1161-1166, 1997.
[96] Tai YC, Chatziioannou A, Siegel S, et al. Performance evaluation of the MicroPET P4: a PET system dedicated to animal imaging. Phys Med Biol. 46:1845-1862, 2001.
[97] Concorde Microsystems Inc. MicroPET Manager, Version 2.2.4.0, 2006.
[98] Concorde Microsystems Inc. ASIPro Version 6.2.5.0, Acquisition, Sinogram and Image Processing™ Version 2.2.4.0, 2006.
[99] Knoess C, Siegel S, Smith A, et al. Performance evaluation of the MicroPET R4 PET scanner for rodents. Eur J Nucl Med Mol Imaging. 30:737-747, 2003.
[100] Mok S-P, Wang C-H, Chen J-C, Liu R-S. Performance evaluation of the high resolution small animal PET scanner. Biomed Eng Appl Basis Comm. 15:143-149, 2003.
[101] Weber S, Bauer A. Small animal PET: aspects of performance assessment. Eur J Nucl Med Mol Imaging. 31:1545-1555, 2004.
[102] Bailey DL, Jones T, Spinks TJ. A method for measuring the absolute sensitivity of positron emission tomographic scanners. Eur J Nucl Med. 18:374-379, 1991.
[103] Goertzen A, Suk JY, Thompson CJ. On the imaging of very weak sources in an LSO PET scanner. Conference Record IEEE Medical Imaging Conference 2006, to be published.
[104] Andrew Goertzen, private communication.
[105] Attenuation coefficients and relative scatter probabilities are taken from SimSET data. See Kaplan MS, Harrison RL, Vannoy SD. Coherent scatter implementation in SimSET. IEEE Trans Nucl Sci. 45:3064-3068, 1998.
[106] David Bailey, Siemens Pre-clinical Solutions, private communication, 2005.
[107] Harrison RL, Haynor DR, Gillispie SB, Vannoy SD, Kaplan MS, Lewellen TK. A public-domain simulation system for emission tomography: photon tracking through heterogeneous attenuation using importance sampling. J Nucl Med. 34:60P, 1993.
[108] Jan S, Santin G, Strul D, et al. GATE: a simulation toolkit for PET and SPECT. Phys Med Biol. 49:4543-4561, 2004.
[109] Eric Vandervoort, personal communication.
[110] Badawi RD, Kohlmyer SG, Harrison RL, Vannoy SD, Lewellen TK. The effect of camera geometry on singles flux, scatter fraction, and trues and randoms sensitivity for cylindrical 3D PET: a simulation study. IEEE Trans Nucl Sci. 47:1228-1232, 2000.
