MONTE-CARLO SIMULATIONS OF POSITRON EMISSION TOMOGRAPHY BASED ON LIQUID XENON DETECTORS

by

PHILIP FEI-TUNG LU

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES (Physics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

March 2008

© Philip Fei-Tung Lu, 2008

Abstract

The prospects for enhanced Positron Emission Tomography (PET) imaging using liquid xenon (LXe) gamma ray detectors have been examined. Monte-Carlo simulations using GEANT4 were performed, and the results were used to study the expected performance of a small animal PET scanner based on LXe in comparison with a simulated conventional small animal scanner (the LSO-based Focus 120). A NEMA-like cylinder phantom and an image contrast phantom were simulated with both scanners to compare performance characteristics. A Compton reconstruction algorithm was developed for the LXe scanner, and its performance and limitations were studied.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgments
Dedication
Chapter 1  Introduction
  1.1  Positron Emission Tomography
  1.2  Nature of PET
  1.3  Challenges of PET
Chapter 2  Motivation for Using Liquid Xenon Detectors in PET
  2.1  Properties of Liquid Xenon
  2.2  Compton Kinematics
Chapter 3  Monte-Carlo Simulation with GEANT4
  3.1  Detectors
  3.2  Phantoms
Chapter 4  Data Analysis
  4.1  System Parameters
  4.2  Count Rate Estimates
  4.3  Compton Reconstruction Algorithm
  4.4  True, Scatter, and Random Rates with NEMA 2001 Standard
  4.5  Image Reconstruction with MATLAB
Chapter 5  Simulation Discussion
  5.1  Event Topology
  5.2  Acceptance
  5.3  Double-Site Ambiguity
  5.4  Background Noise
  5.5  Noise Reduction Schemes: Filtering
  5.6  Noise Reduction Schemes: Weighting
  5.7  Noise Reduction Schemes: Remarks
Chapter 6  Contextualizing the Simulation
  6.1  Sensitivity
  6.2  Point Spread Function
  6.3  Scatter Fraction
  6.4  Noise Equivalent Count Rates
  6.5  Reconstructed Images
Conclusion
Citations

List of Tables

3.1.1  Simulation specifications of Focus 120 microPET scanner
3.1.2  Simulation specifications of LXe small animal scanner
4.1.1  System parameters for LXe small animal scanner
4.1.2  System parameters for Focus 120 microPET scanner
4.2.1  Probability that the detector detects zero or triggers on single, double-with-scatter, or double-without-scatter photons from an annihilation
4.2.2  Second stage triggering constants for different resolution parameters
5.1.1  Event topology for double-without-scatter events
5.1.2  Event topology for double-with-scatter events
5.1.3  Event topology for double-random events
5.5.1  Normalized event counts with different energy windows and energy resolutions
6.1.1  Simulated sensitivity for the Focus 120 and LXe detector
6.3.1  Simulated scattering fraction for the Focus 120 and LXe detector

List of Figures

1.2.1  PET ring detector array, coincidence detection, and image reconstruction
1.3.1  Interaction topologies for Random, Scatter, and True events
1.3.2  Limitations of scintillating crystals, multiple triggering and parallax error
2.2.1  Compton scattering schematics
3.1.1  Wire frame view of the F120 detector
3.2.1  A view of the scaled down NEMA test phantom
3.2.2  Close-up view of the micro-Derenzo phantom showing the rod arrangement
4.4.1  Obtaining event rates from LOR projections with NEMA 2001 standards
5.2.1  Acceptance as a function of χ² for different topologies
5.3.1  Geometric representation of the double site ambiguity topology
5.3.2  Identification fraction as a function of apparent scattering angle for the 1-2 topology
5.4.1  Normalized point spread function of the LXe scanner in different topologies
5.4.2  Point spread function showing relative height between different topologies
5.6.1  Point spread function for the 1-2 topology with different weighting schemes
6.2.1  Point spread function comparison between the LXe and the Focus 120 detectors
6.4.1  NECR curve for different energy windows for the LXe and the Focus 120 detectors
6.5.1  Reconstructed image comparison of the micro-Derenzo phantom for both detectors

Acknowledgments

I would like to acknowledge my supervisor, Dr. Douglas Bryman, for his patience and guidance over the long months leading to this thesis; his stimulating suggestions and comments challenged me to think outside the box. I would also like to thank Dr. Vesna Sossi for her helpful pointers, thoughtful suggestions, and for teaching me what medical imaging is all about. Furthermore, I am indebted in many ways to all of my present and past colleagues at TRIUMF, who at one time or another provided me with valuable encouragement, insight, and helpful advice. Additionally, I want to thank Dr. Frances Bates at the University of British Columbia for her invaluable assistance and tips; Dr. Janis McKenna for being the energetic and considerate Graduate Advisor when I first enrolled in the Masters Program; and Dr. Lee Gass of UBC Zoology for his wonderful humour and insights into what it means to be learning. Lastly, I would like to express my sincerest gratitude to my family and closest friends, for without them I would never have gotten this far.
Dedication

This thesis is dedicated to my parents, for all their love, encouragement, and tireless support over the years. I also want to dedicate this thesis to Briahlen, my best friend and sister, who inspired me and reminded me never to give up.

Chapter 1
Introduction

Medical imaging is used to identify and map anatomical features or biological concentrations of molecules, in order to diagnose illness, prescribe treatment, and visualize results. Various types of scanners are available to modern physicians, each suited to different applications. One imaging technique of growing importance is Positron Emission Tomography, or PET. PET is a functional scanning technique that excels at revealing biological processes and related pathologies. This thesis will outline the challenges facing present-day PET scanners, show how a liquid xenon (LXe) gamma ray detector might address the limitations of existing scanners, and compare the simulated performance of an LXe scanner to a conventional scintillating-crystal Focus 120 scanner.

1.1  Positron Emission Tomography

Positron Emission Tomography (PET) is a relatively new medical imaging technology, developed in the last century. While the basic principles were well understood in the 1950s, with the first coincidence-counting positron detectors built by Gordon Brownell and William H. Sweet in 1952, the technique only became viable as a research and diagnostic tool in the late 20th century with the introduction of sensitive detectors and the rapid increase in computing power.

The basic concept of PET revolves around radioisotopes injected into the patient that decay by emitting a positron, which annihilates with a nearby electron up to a few mm away to produce two orthogonally polarized 511 keV photons heading in opposite directions. The idea is to detect these oppositely traveling photons and trace the Lines of Response (LORs), the lines between detector pairs that registered the gamma rays in coincidence, which can then be used to infer where the original positron-electron annihilations occurred, and by extension the source locations of the radioisotope. By combining these LORs, it becomes possible to mathematically reconstruct a map of the annihilation locations, and thus the distribution of the radioisotope within the subject.

Unlike anatomical scanners such as Computed Tomography (CT) that reveal detailed physiological features, PET is a functional scanning method. By incorporating radioisotopes into biologically relevant molecules such as sugar or water, one creates radiotracers known as radiopharmaceuticals that can be tracked to reveal how the organism under study transports and utilizes these molecules, which in turn sheds light on the biological processes in question and reveals abnormalities. For example, tumour cells are inefficient glucose processors, so a high glucose intake is required to fuel tumour growth. By injecting the patient with 18F-fluorodeoxyglucose (FDG), a glucose analog in which a radioactive fluorine atom replaces a hydroxyl group, a PET scan can provide a concentration map of glucose metabolism that can then be used as a staging procedure to diagnose cancer spread.
1.2  Nature of PET

PET scans revolve around radioactive isotopes that decay via positron emission. These positrons, being the anti-particles of electrons, typically travel about 1 mm before annihilating with nearby electrons. The products of annihilation are two photons of 511 keV each (the rest mass energy of the electron) heading in opposite directions. Because the atomic electron is moving prior to annihilation, momentum conservation causes the two photons to deviate slightly from perfect collinearity (about 0.5° FWHM). This acollinearity produces a deviation between the established LOR of a photon pair and the actual annihilation point, degrading reconstructed PET images in the form of blurring. Larger detectors are in principle more susceptible to the effects of acollinearity, because the separation between detected coincident photons grows with the PET scanner's diameter.

A PET scanner is typically set up in a ring geometry (Figure 1.2.1). By arranging the detector array around the center of the field of view, one attempts to capture the pair of 511 keV photons from each annihilation in coincidence, and to establish the associated Line of Response (LOR). Collectively, these LORs are elements of the system's response to the radioactive source, known as sinograms. The sinograms are the "three-dimensional representation[s] of the [radioisotope] signal measured at a given angle in the imaging plane at varying distances r along the detector array" [1]. Finally, using image reconstruction algorithms, an image map of the source distribution within the patient is made from the sinogram data.

Figure 1.2.1. Principles of PET. Annihilations of positron-electron pairs produce 511 keV photons that are detected, in coincidence, by ring detector arrays. The associated Line of Response for each annihilation is mapped into sinogram space, and finally reconstructed as an image map detailing the radiopharmaceutical distribution within the patient [F. Retière's PowerPoint presentation slide].

1.3  Challenges of PET

While the fundamental concepts of PET are fairly straightforward, its practical implementation is complicated by various background noise contributions. Because nuclear decay is an inherently random process, every nucleus of a given radioisotope has the same probability of decaying, and producing a pair of 511 keV photons, in any given time period. The radioisotopes commonly used for PET imaging (e.g. Carbon-11, Nitrogen-13, Oxygen-15, and Fluorine-18) have half-lives ranging from minutes (e.g. 20.4 minutes for C-11) to hours (e.g. 1.83 hours for F-18), and even days (e.g. 275 days for Germanium-68) [2]. Photons from two separate annihilations can therefore be detected nearly simultaneously, as shown in Figure 1.3.1a, and the resulting LOR from such a Random coincidence is not representative of the actual source distribution. As the source activity increases, the detection rate of Randoms increases quadratically, introducing a major source of background noise at higher activities. To reduce the impact of Randoms, statistical data subtraction by delayed sampling is sometimes used [3]. An alternate subtraction method calculates the random rate between detector pairs from the single-event detection rate of each detector element [4].
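The singles-based estimate just mentioned has a simple closed form. Below is a minimal sketch (in Python; not code from the thesis) of the standard estimate R_ij = 2·τ·S_i·S_j for the random coincidence rate between two detector elements; the function name and example rates are illustrative only.

```python
def randoms_rate(singles_i, singles_j, coincidence_window_s):
    """Standard singles-based estimate of the random coincidence rate
    between two detector elements: R_ij = 2 * tau * S_i * S_j."""
    return 2.0 * coincidence_window_s * singles_i * singles_j

# Two elements each counting 5e4 singles/s inside a 6 ns window
# contribute about 30 random coincidences/s to that detector pair:
print(randoms_rate(5e4, 5e4, 6e-9))  # -> 30.0
```

Note the quadratic dependence on the singles rates: doubling the activity roughly quadruples the Randoms, as stated above.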
Randoms are not the only source of background noise. Before being photoelectrically absorbed in matter, 511 keV photons often undergo single or multiple Compton scatters, in which the photon loses some energy and changes trajectory through a collision with an atomic electron, the lost energy being carried off by the recoiling electron. Scatter events, in which a photon Compton scatters en route to the detector, are another source of image noise: the LOR formed from scattered photons does not reflect the actual annihilation position (see Figure 1.3.1b). The main method of reducing noise from scattered events is the use of energy thresholds. Based on the idea that scattered photons have lost energy before interacting in the detector, one can screen out Scatter events by requiring a minimum total energy to be deposited by each photon. Ideally, this lower threshold should be as close to 511 keV as possible, so that only full-energy photons are accepted; but due to instrumental limitations in energy resolution, the threshold must be set lower to ensure good detection efficiency. As a result, the energy resolution of the detector plays a key role in noise reduction.

Figure 1.3.1. Interaction topology for a) Random, b) Scatter, and c) True coincidences.

Even if a detected pair of photons is not of the Random or Scatter type, and is therefore a True event (i.e. a detected unscattered pair from a single annihilation, as shown in Figure 1.3.1c), the photons can still undergo multiple interactions in the detector medium. To obtain the best reconstructed images, one would ideally identify, among all the interaction sites, the first point where each photon interacted in the detector, in order to trace back the correct LOR. However, this requires the scanner to resolve photon interactions in the detector either temporally or spatially. Doing so temporally requires an instrumental timing resolution in the picosecond (ps) range, which is uncommon for PET, where timing resolutions are usually in the nanosecond (ns) range; see [5] for details on time-of-flight PET scanners. As for spatial separation, conventional PET scanners also cannot resolve separate interactions within a single detector block. Instead, multiple scatters within a single detector head produce scintillation light that is collectively mapped onto a single crystal position, introducing a discretization on the order of the crystal-to-crystal separation and thus limiting resolution (Figure 1.3.2). In short, with current-generation PET scanners, the resolution limitation posed by separate Compton interactions occurring in the detector cannot be remedied easily.

To complicate matters further, conventional scanners use scintillating crystals as their detection medium for the 511 keV photons. No Depth of Interaction (DOI) information is measured with scintillating crystals, i.e. no information about where within a crystal the photon deposited its energy. Instead, whenever a crystal is triggered, regardless of where in the crystal the energy deposition took place, the interaction is assigned a fixed position coordinate, typically near the center of the crystal in question. When multiple crystals are triggered, an average position may be used, or the event may be rejected due to the ambiguity in position.
Since the crystals are of finite size and length, this not only limits the position resolution of the system to the size of the crystals, but also introduces parallax error, where the reconstructed LOR deviates from the source annihilation point, leading to distortions away from the center of the field of view (Figure 1.3.2).

Figure 1.3.2. Illustration of the limitations of PET based on scintillating crystals. At the bottom left, a photon Compton scattered into a nearby crystal, so either an average position has to be used or the event has to be rejected, because it is not known which crystal was triggered first. On the right is an example of parallax error: the actual photon interaction points are unknown, so by using the fixed position associated with each crystal, the apparent LOR (red) lies far from the actual photon path (black). This effect is more pronounced further from the center of the field of view. The figure is not to scale; actual crystals are much shorter, and the scale is exaggerated to demonstrate the nature of parallax error.

To combat the lack of DOI measurement in traditional systems, various approaches have been proposed. One technique employs two detector rings, one outer and one inner, built from scintillating crystals with different decay time constants. From the interplay of the different decay times, one can in principle gauge whether a photon interacted in the inner or the outer crystal array, providing a basic DOI measurement [6]. Other techniques take advantage of computer simulations and seek to accurately model the system response in order to reduce parallax error during image reconstruction [7].

Chapter 2
Motivation for Using Liquid Xenon Detectors in PET

To alleviate and overcome some of the shortcomings of traditional PET, a new type of detector is proposed that would use liquefied xenon instead of solid crystals as the detection medium for gamma rays. This is a departure from conventional detectors, but it holds promise to provide sharper images with faster scanning and better detection efficiency, which translates to shorter scan times and/or lower doses for the patients. This chapter discusses the potential use of liquid xenon as a detector medium in PET.

2.1  Properties of Liquid Xenon

Liquid xenon (LXe) has been shown to have a high scintillation light output and a coincidence timing resolution comparable to the best scintillating crystals [8,9]; it has a relatively high density of 2.9 g/cm³ and a high atomic number of 54, ensuring good gamma ray interaction cross-sections. Furthermore, LXe lends itself naturally to the implementation of ionization charge drift chambers. With a large ionization yield of about 65,000 electrons/MeV (with no applied electric field) and a constant drift velocity of approximately 2 mm/μs for electric fields of 1 kV/cm or higher, LXe charge drift chambers can reconstruct the energies and positions of separate interactions in three dimensions with high precision [10]. These properties of LXe contribute positively to identifying coincidence events and rejecting background Random and Scatter events, and thus to improving image quality. At the time of writing of this thesis, a Columbia University group had reported an energy resolution of 1.7% (RMS), or 4.0% FWHM, using light and charge signals together for 662 keV gamma rays in liquid xenon, and they believe that less than 1% RMS is possible [11].
This is a significant improvement over the ~17% FWHM at 662 keV of the LSO crystals [12] commonly used in conventional PET scanners. The immediate implication of better energy resolution is the ability to set a tighter energy window threshold closer to 511 keV, which, as discussed in Section 1.3, is the main method of rejecting background scattered events.

2.2  Compton Kinematics

Another exciting possibility for LXe PET is the use of a charge drift chamber that records the ionization electrons liberated by photon interactions in the detector. This allows individual interaction sites to be spatially resolved and measured.

Figure 2.2.1. Compton scattering kinematics: a photon of incoming energy E collides with an electron and scatters off at an angle θ with less energy than before, while the recoiling electron scatters off at an angle φ [13].

When a photon collides with an atomic electron, it transfers some of its energy to the recoiling electron. By momentum and energy conservation, the photon's trajectory also changes as a result (Figure 2.2.1). The Compton scattering cross-section is well described by the Klein-Nishina formula, and the kinematics enables the use of Compton reconstruction algorithms to ascertain the true trajectory a photon followed when it scattered multiple times (see Section 4.3). Since most (> 70%) 511 keV photons undergo Compton scattering in LXe before being photoelectrically absorbed, analysis algorithms that use Compton kinematics to constrain the possible photon trajectories can be applied to filter through the possible LORs on an event-by-event basis and select the most probable one. This refinement means the best theoretical image resolution depends only on the positron range, because the drift chamber position resolution can be sub-mm in three dimensions [11]. Furthermore, since Scatter and Random events do not always satisfy Compton kinematic constraints [14], a Compton selection algorithm with a cut or weighting scheme can further suppress the background contributions to image quality (see Sections 5.5 and 5.6). This will, however, come at the expense of lowering overall efficiency; hence, the decision to apply this suppression will depend on the objectives of the scan (see Section 5.7).
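The kinematic relation underpinning all of this is the Compton formula connecting the photon's energy loss to its scattering angle. The following is a minimal sketch (in Python; not code from the thesis) of that relation; the function name and the numerical example are illustrative.

```python
from math import acos, degrees

ME_C2 = 511.0  # electron rest mass energy, keV

def compton_cos_theta(e_in_keV, e_dep_keV):
    """Cosine of the photon scattering angle from the Compton relation,
    given the incoming photon energy and the energy given to the electron."""
    e_out = e_in_keV - e_dep_keV
    if e_out <= 0.0:
        raise ValueError("deposit cannot exceed the photon energy")
    return 1.0 - ME_C2 * (1.0 / e_out - 1.0 / e_in_keV)

# A 511 keV photon that deposits 170 keV scatters by about 60 degrees:
print(degrees(acos(compton_cos_theta(511.0, 170.0))))  # ~59.9
```

Energy deposits that would require cos θ outside [−1, 1] are kinematically impossible for a single Compton scatter, which is exactly the handle used later to reject false scattering sequences.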
Chapter 3
Monte-Carlo Simulation with GEANT4

GEANT4 is a software package developed at the European Organization for Nuclear Research (CERN) whose aim is to simulate the passage of particles through matter. "Its areas of application include high energy, nuclear and accelerator physics, as well as studies in medical and space science" [15]. For the purposes of this thesis, Monte-Carlo simulations of small animal PET scanners with two phantoms and different source distributions were carried out using the GEANT4 package, and data analysis was performed using the ROOT [16] and MATLAB [17] analysis software.

3.1  Detectors

Two small animal PET scanners were simulated: one was a liquid xenon scanner, and the other a MicroPET Focus 120 (F120) by Siemens [18]. In contrast to full-sized PET scanners used primarily for human studies, with an inner diameter of 1 m, axial length of 25-40 cm, and crystal size of 6.75 x 6.75 x 30 mm (e.g. a conventional BGO scanner), small animal PET scanners are much smaller systems designed for animal studies; for example, the Focus 120 has an inner diameter of only 14 cm, an axial length of 8 cm, and a crystal size of 1.5 x 1.5 x 10 mm (Table 3.1.1). The reason for simulating the F120 was to compare simulated results with real measurements [19,20,21], in order to estimate how well the simulated version of the LXe small animal scanner might extrapolate to real prototype performance.

A GEANT4 simulation of the F120 geometry was constructed using the specifications listed in Table 3.1.1. Steel of 1 mm thickness was used for the detector casing, and the epoxy between the LSO crystals [22] was approximated with water. Figure 3.1.1 shows a wire frame representation of the F120.

Figure 3.1.1. Wire frame representation of the Focus 120 MicroPET detector.

Similarly, a simulation of the proposed LXe detector was made with the specifications listed in Table 3.1.2. Steel of 1 mm thickness was again used for the detector casing, and a 12 cm radial thickness of LXe was simulated to ensure good detection efficiency (roughly 3 attenuation lengths). Active volume segmentation was not simulated for the LXe detector; instead, its effects on solid angle and thus sensitivity were estimated (see Section 6.1). The LXe small animal PET scanner was simulated with the same inner diameter and axial length as the F120 so that the comparisons would be on an equal footing.

First, radionuclide β+ decay sites were randomly generated in the GEANT4 "world volume" based on predefined source distributions, such as a cylindrical rod embedded within a cylindrical phantom; then the positron-electron annihilation sites were computed by applying an isotropic positron drift with a mean range of 0.5 mm (RMS). The value of 0.5 mm was chosen based on the mean positron range of the Fluorine-18 isotope [23], a commonly used isotope in PET imaging. At the post-drift annihilation location, two back-to-back orthogonally polarized 511 keV photons were simulated, with non-collinearity effects omitted (a simplified sampling sketch is given after the tables below). For every photon interaction step within the detector volume, in the case of the F120 the IDs of the triggered crystals were recorded as position information, and in the case of the LXe detector the actual 3D positions where the photon interacted were recorded. In both cases, the deposited energy was recorded for each interaction step in the detector. No instrumental resolution effects were applied at the simulation stage; instead, they were parameterized and applied at the analysis stage (see Section 4.1).

Table 3.1.1  Specifications of the Focus 120

  Detector diameter (cm)        14
  Axial length (cm)             8
  Crystal material              LSO
  Crystal size (mm)             1.5 x 1.5 x 10
  Crystal pitch (mm)            1.6
  No. of rings                  4
  No. of blocks / ring          24
  No. of crystals / block       12 x 12
  Total no. of crystals         13,824

Table 3.1.2  Specifications of the liquid xenon small animal PET scanner

  Inner detector diameter (cm)  14
  Outer detector diameter (cm)  38
  Axial length (cm)             8
  Scintillation material        LXe
  Total liquid volume (L)       7.84
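The event generation just described can be mimicked outside GEANT4. Below is a minimal, self-contained sketch (Python with NumPy; not the thesis code) of sampling decay points in the offset source rod of the NEMA-like phantom, applying an isotropic positron displacement, and emitting back-to-back photon directions. The exponential distribution of the displacement magnitude is an assumption for illustration only; the thesis specifies only an isotropic drift with a 0.5 mm mean range.

```python
import numpy as np

rng = np.random.default_rng(1)

def annihilation_events(n, rod_radius=1.6, rod_length=140.0,
                        radial_offset=12.0, mean_range=0.5):
    """Sample n annihilations (mm units): decay points uniform in the source
    rod, plus an isotropic positron displacement, plus back-to-back photons."""
    # Uniform points inside a cylinder aligned with the z (axial) direction.
    r = rod_radius * np.sqrt(rng.random(n))      # sqrt for uniform areal density
    phi = 2.0 * np.pi * rng.random(n)
    decay = np.stack([radial_offset + r * np.cos(phi),
                      r * np.sin(phi),
                      rod_length * (rng.random(n) - 0.5)], axis=1)
    # Isotropic unit vectors for the positron displacement and photon emission.
    def iso_dirs(m):
        v = rng.normal(size=(m, 3))
        return v / np.linalg.norm(v, axis=1, keepdims=True)
    annih = decay + iso_dirs(n) * rng.exponential(mean_range, size=(n, 1))
    d = iso_dirs(n)
    return annih, d, -d  # annihilation point, photon 1 and photon 2 directions
```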
3.2  Phantoms

Two phantoms were simulated in GEANT4. The first was a polyethylene (C2H4, density 0.960 g/cm³) cylindrical phantom measuring 5 cm in diameter and 15 cm in length, with a 3.2 mm diameter, 14 cm long water rod embedded at a radial offset of 12 mm, in which the radioactive source resided (Figure 3.2.1). This was a scaled-down version of the test phantom specified in the NEMA 2001 standards [24], for use with small animal scanners.

The second phantom was an acrylic (C5H8O2, density 1.19 g/cm³) micro-Derenzo phantom [25] measuring 4 cm in diameter and 5 cm in length, with cylindrical water rods of 1.0, 1.2, 1.4, 1.6, 1.8, and 2.0 mm diameter. Rod-to-rod separations were twice the rod diameters, center to center, and the whole micro-Derenzo phantom was immersed in a water cylinder of 6 cm diameter and 9.6 cm length, approximately the size of a rat, to simulate the scattering effect of a rat's body. Note that conventionally the imaging of a micro-Derenzo phantom would be carried out without surrounding water (Figure 3.2.2).

Figure 3.2.1. NEMA-like rat-sized polyethylene phantom, measuring 5 cm in diameter and 15 cm in length, with a source rod of 3.2 mm diameter and 14 cm length. The cylindrical source (blue) was embedded at a 12 mm radial offset.

Figure 3.2.2. Axial view of the micro-Derenzo image contrast acrylic phantom, measuring 4 cm in diameter and 5 cm in length, with source rods of various sizes. Center-to-center separations were twice the rod diameters.

Chapter 4
Data Analysis

The GEANT4 simulation provided output data in ROOT [16] Tree format on a step-by-step basis for each photon that interacted in the detector. No instrumental resolution effects were added at the simulation level; instead, these were parameterized in the subsequent data analysis with the ROOT analysis software, in order to examine the effects of varying the system settings. Energy resolution, triggering threshold, two-hit separation distance, and detection threshold were varied to test the performance. This chapter also outlines the methods used to estimate real system detection rates from time-independent simulation events and selected instrument parameters, and describes the Compton reconstruction algorithm employed. An outline is then given of the NEMA 2001 standard for estimating True, Scatter, and Random rates from LOR data. Finally, an overview is given of a simple 2D image reconstruction using the filtered back-projection technique in MATLAB [17].

4.1  System Parameters

Research into using LXe for scintillation detection is a relatively new field, and improvements are continually being made to perfect such instruments. In order to provide a concise overview of the potential capability of an LXe scanner, the various system responses were parameterized and tested using values ranging from conservative estimates to more ambitious scenarios.

An LXe charge drift chamber would measure both the scintillation light and the ionization charge generated by gamma ray interactions in the LXe, providing essentially two ways to measure the deposited energy. First, the prompt scintillation light (LXe's scintillation decay time constants are 3 ns and 27 ns) from all the interactions of a given gamma ray within the LXe would be measured together, as the interactions occur effectively simultaneously; it would not be possible to separately measure the light output from individual interaction sites. This would provide a measurement of the total energy deposited by the incoming photon. By measuring in coincidence with a second detector on the opposite side of the scanner ring, the energy measurements could then be compared to the triggering energy threshold, to quickly reject possible Scatter events or photons that were not fully absorbed in the LXe.
Next, the localized charges liberated at each interaction site would drift slowly towards the anode in the applied electric field. The slow drift, coupled with finely pitched charge-sensing wires/strips, would allow the measurement not only of 3D position information (two dimensions from an orthogonal wire/strip arrangement [26], and the third dimension, along the drift direction, from the drift time and the drift velocity at the applied field strength), but also of the energy deposited at each interaction site. As long as the interaction sites were separated by more than the two-hit separation parameter in at least one axis, the ionization charge measurement would be able to resolve the interaction sites individually. Low-energy interactions, however, may not always be distinguishable from the electronic noise fluctuations inherent in the detection system; the simulation parameter that modelled this effect was the minimum charge energy threshold.

Table 4.1.1 lists the various system parameters used for the LXe scanner, while Table 4.1.2 lists the settings used for the MicroPET F120. The energy resolution using light alone was estimated to be 28% FWHM, as previous experiments have reported similar or better results [11,27]. The light triggering energy threshold was chosen to be 220 keV when the 250 keV combined light-charge energy window was in use, and 320 keV when the 350 or 450 keV light-charge energy windows were in use, taking into account that the energy resolution from the scintillation light measurement alone was poorer than that from combined light and charge measurements. The energy resolution obtained from light and charge combined was modelled conservatively as 9.4% and 18.8% at 511 keV. The minimum charge energy detection threshold was chosen to be 50 keV at 3-sigma confidence [private communication with F. Retière], where the expected inherent electronic noise level of a possible LXe charge drift chamber was approximately 15 keV. The position resolution of photon interaction points was chosen to be 0.7 mm FWHM, corresponding to the nominal position resolution with 1 mm wire spacing for a time projection chamber [28,29]. Charge sharing between induction wires [30] could improve the resolution further. The two-hit separation thresholds were chosen to be 1 mm and 2 mm, corresponding to one or two sensing-wire spacings between charge clouds, or 0.5 and 1.0 μs in the drift direction given a 2 mm/μs drift velocity. Lastly, the coincidence window and instrument dead time were chosen to be comparable to real existing detectors. The Focus energy resolution was based on published results [19].
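To make the two readout conventions concrete, here is a minimal sketch (Python; illustrative, not the thesis code) of how the drift coordinate follows from the drift time, and of the two-hit separation criterion described above.

```python
DRIFT_VELOCITY = 2.0  # mm/us, for fields of ~1 kV/cm or higher

def z_from_drift_time(t_us):
    """Drift-direction coordinate reconstructed from the measured drift time."""
    return DRIFT_VELOCITY * t_us

def resolvable(site_a, site_b, two_hit_sep_mm=1.0):
    """Two interaction sites are individually resolved if they are separated
    by more than the two-hit distance in at least one readout axis."""
    return any(abs(a - b) > two_hit_sep_mm for a, b in zip(site_a, site_b))

# Sites only 0.4 mm apart in x and y but 1.5 mm apart along the drift axis
# are still resolved, thanks to the slow 2 mm/us drift:
print(resolvable((0.0, 0.0, 0.0), (0.4, 0.4, 1.5)))  # True
```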
Table 4.1.1  System response parameters for the simulation of the liquid xenon scanner

  Energy resolution - light (FWHM)           28% @ 511 keV
  Triggering thresholds - light              220 keV, 320 keV
  Energy resolutions - light-charge (FWHM)   9.4%, 18.8% @ 511 keV
  Minimum charge energy threshold            50 keV for individual site detection
  Energy windows (light-charge)              250 keV, 350 keV, 450 keV
  Position resolution (FWHM)                 0.7 mm in all 3 axes
  Two-hit separation distances               1 mm, 2 mm
  Dead time                                  0.5 μs
  Coincidence window                         6 ns

Table 4.1.2  System responses of the Focus 120 microPET

  Energy resolution (FWHM)                   18% @ 511 keV [19]
  Energy windows                             250 keV, 350 keV, 450 keV
  Dead time                                  0.5 μs
  Coincidence window                         6 ns

4.2  Count Rate Estimates

The GEANT4 simulation produced data that did not depend on the source activity; it was a simulation of sequential photon-pair production. In order to simulate the measured event rate of a scanner at a given source activity, a Poisson statistical model was adopted.

The triggering system for the LXe detector was a two-stage system. The first stage used only the prompt scintillation light information, with an expected energy resolution of 28% FWHM using light alone, since the ionization charge can take up to 60 μs to be measured. The second triggering stage employed the combined light-charge information, with its improved energy resolution. In the simulation, a light-only threshold of 220 keV was used in conjunction with a combined light-charge energy window of 250 keV, and similarly a light-only threshold of 320 keV (roughly 3 sigma below 511 keV given the 28% FWHM light-only resolution) was used in conjunction with the combined light-charge energy windows of 350 keV and 450 keV.

For each annihilation event, two photons were simulated. Both could escape undetected (zero), one could interact and be detected while the other was not (single), or both could be detected by the PET scanner (double). Additionally, one or both photons could Compton scatter off non-active materials prior to being detected by the scanner (scatter). For the purpose of modelling the count rates, the important quantities were the probabilities that the detector detects none (zero), triggers on one (single), or triggers on both of the photons (double-with-scatter and double-without-scatter) from a single annihilation event, after taking into account the energy resolution using light alone and the scintillation-light energy window. These probabilities were obtained by applying a Gaussian blurring, with sigma set by the scintillation-light energy resolution, to the total energy deposited by each photon in the simulation, and comparing the result against the selected scintillation-light energy window threshold. The last step was to divide the number of remaining events with sufficient total energy by the total number originally simulated.
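A minimal sketch of that blur-and-threshold procedure follows (Python with NumPy; illustrative, not the thesis code). It assumes the Gaussian sigma is fixed at its 511 keV value, and takes an (N, 2) array of the total light energy deposited by the two photons of each pair (0 for a photon that missed the detector).

```python
import numpy as np

def first_stage_probabilities(e_dep_pairs_keV, threshold_keV,
                              fwhm_frac=0.28, seed=0):
    """Return (P0, P1, P2-like): the fractions of annihilations for which
    zero, one, or both photons pass the blurred light-energy threshold."""
    rng = np.random.default_rng(seed)
    sigma = fwhm_frac * 511.0 / 2.355       # FWHM -> Gaussian sigma at 511 keV
    e = np.asarray(e_dep_pairs_keV, dtype=float)
    blurred = e + rng.normal(0.0, sigma, e.shape)
    n_pass = np.sum((e > 0.0) & (blurred > threshold_keV), axis=1)
    return tuple(np.mean(n_pass == k) for k in (0, 1, 2))
```

Splitting the two-photon class into the with-scatter and without-scatter categories of Table 4.2.1 additionally requires the simulation truth about whether either photon scattered before reaching the detector.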
Table 4.2.1 shows the first-stage triggering probabilities of detecting zero (P0), triggering on a single (P1), triggering on a double-with-scatter (P2s), or triggering on a double-without-scatter (P2) from each annihilation, for the NEMA-like phantom. Given these probabilities, the detection rates for a given activity A and coincidence window Δt could be computed as

  C2,0  = A · P2                 (4.2.1)
  C2s,0 = A · P2s                (4.2.2)
  C2r,0 = 2 Δt (A · P1)²         (4.2.3)

where C2,0 is the triggering rate of double-without-scatter events (i.e. detecting both photons from an annihilation, neither of which Compton scattered prior to being detected), C2s,0 is the triggering rate of double-with-scatter events, and C2r,0 is the triggering rate of two independent singles within the coincidence window.

Table 4.2.1  Probability that the detector detects zero, or triggers on single, double-with-scatter, or double-without-scatter photons from an annihilation event, with 28% FWHM energy resolution using light measurement alone and different energy window thresholds, for the NEMA-like phantom.

  Scenario                             Etotal > 220 keV   Etotal > 320 keV
  [P0]  Zero (%)                       56.0               63.3
  [P1]  Single (%)                     36.8               31.2
  [P2s] Double with scatter (%)        3.0                1.7
  [P2]  Double without scatter (%)     4.2                3.8

For LXe, the timing resolution is expected to be on the order of 1 ns or better, but a 6 ns coincidence window was used conservatively. Next, combining (4.2.1)~(4.2.3), we obtained the total first-stage trigger rate, taking into account the instrumental dead time τ:

  Ctot,0 = C2,0 + C2s,0 + C2r,0          total count rate prior to dead time  (4.2.4)

  Ctot = Ctot,0 / (1 + Ctot,0 · τ)   [31]   total count rate after dead time  (4.2.5)

Lastly, taking into account the second-stage triggering probabilities, i.e. the probabilities (denoted ε with different subscripts) that an event of a given type deposits sufficient energy as measured with the combined light-charge resolution and the associated energy threshold, we arrive at the final detection rates:

  C2  = ε2  · C2,0  · (Ctot / Ctot,0)    final rate of doubles without scatter  (4.2.6)
  C2s = ε2s · C2s,0 · (Ctot / Ctot,0)    final rate of doubles with scatter     (4.2.7)
  C2r = ε2r · C2r,0 · (Ctot / Ctot,0)    final rate of two coincident singles   (4.2.8)

The values of ε2, ε2s, and ε2r depended on the combined energy resolution and energy window threshold, as well as on the reconstruction strategy used. A Line-of-Response (LOR) rejection technique was also applied, requiring the reconstructed LOR to pass through the phantom. Table 4.2.2 lists the constants obtained with the NEMA-like phantom at various resolution parameters, where the energy resolution refers to the energy measurement using the light and charge combination [11]. To obtain these values, one takes a sample of the appropriate data set passing the first-stage energy threshold (e.g. a data set of only double-with-scatter events, or single events combined in pairs to produce a double-random set), runs it through the reconstruction algorithm, and computes the probability constant as the number of accepted events divided by the number of events started. In this study, I employed the Compton reconstruction algorithm described in the next section.

Table 4.2.2  Second-stage triggering constants for different resolution parameters. For each energy threshold, three columns show the effects of the 2-hit separation and the energy resolution on the triggering constants.

  Energy window            250 keV            350 keV            450 keV
  2-hit separation (mm)    1.0   2.0   3.0    1.0   2.0   3.0    1.0   2.0   3.0
  Energy resolution (%)    9.4   9.4   18.8   9.4   9.4   18.8   9.4   9.4   18.8

  ε2 (%)                   88.3  90.4  88.3   88.8  91.0  89.2   83.5  85.6  70.9
  ε2s (%)                  51.4  52.3  50.9   56.0  57.1  55.0   22.3  22.6  20.3
  ε2r (%)                  14.8  15.1  14.7   14.8  15.1  14.8   11.3  11.6  9.7

Once the three final detection rates (4.2.6)~(4.2.8) were computed, a data file simulating the scanner's detection rate could be prepared. This was done by scaling the simulated double-with-scatter, double-without-scatter, and double-random events (pair-wise combinations of single events) to obtain the proper mixing ratio C2:C2s:C2r and the total detection rate C2 + C2s + C2r. The data set could then be used to gauge the performance of the detector, e.g. by reconstructing an image from the data or by evaluating the detector sensitivity (see Chapter 6). This scaling and mixing approach allowed a single large set of simulation data to be used to compute the behaviour and performance of the detector at various energy and position resolution settings and source activities, without redoing the simulation for each new set of detector parameters.
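Tying the pieces of this section together, here is a compact sketch of the full rate model as reconstructed in equations (4.2.1)~(4.2.8) (Python; illustrative, and dependent on the reconstructed forms above being the intended ones). The default ε values are taken from the 450 keV, 1 mm column of Table 4.2.2.

```python
def detection_rates(activity_bq, p1, p2, p2s,
                    dt=6e-9, tau=0.5e-6, eps=(0.835, 0.223, 0.113)):
    """Final double-without-scatter, double-with-scatter, and random rates
    (per second) at a given activity, per equations (4.2.1)~(4.2.8)."""
    c2_0 = activity_bq * p2                      # (4.2.1)
    c2s_0 = activity_bq * p2s                    # (4.2.2)
    c2r_0 = 2.0 * dt * (activity_bq * p1) ** 2   # (4.2.3)
    ctot_0 = c2_0 + c2s_0 + c2r_0                # (4.2.4)
    live = 1.0 / (1.0 + ctot_0 * tau)            # (4.2.5), non-paralyzable dead time
    e2, e2s, e2r = eps
    return e2 * c2_0 * live, e2s * c2s_0 * live, e2r * c2r_0 * live

# Example: 1 mCi (3.7e7 Bq) with the 320 keV column of Table 4.2.1:
print(detection_rates(3.7e7, p1=0.312, p2=0.038, p2s=0.017))
```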
4.3  Compton Reconstruction Algorithm

The basic framework of the Compton reconstruction method used here is the same as that described by Boggs and Jean [32] and by Oberlack et al. [14]. The key difference is that in the previous work the reconstructions were used for astronomical measurements, where the photons coming from the sky could be assumed to pass through the detector surface perpendicularly, a valid assumption given the distance of the source. For a PET scanner, such an assumption is no longer valid. Fortunately, by the nature of PET systems, each event requires the detection of two photons rather than one, so for every possible trajectory combination the supposed incoming photon directions can be ascertained from the interaction locations of both coincident photons. This in turn provides good discriminating power, as will be demonstrated in later chapters.

When a photon interacts within the detector, it can Compton scatter multiple times before being photo-absorbed (or exiting the detector). A 511 keV photon is roughly three times more likely to Compton scatter than to be photo-absorbed when it first interacts in LXe. The simplest interaction configuration is the 1-1 case, i.e. the detector registers only one resolved interaction point for each of the two photons. Recall that, in order to resolve multiple interaction points in the detector, each point must deposit more energy than the minimum detection threshold and must also be spatially separated in at least one axis by more than the 2-hit separation parameter. In practice, however, multi-hit scenarios such as 1-2, 1-3, 2-2, etc. are more common, and they must be taken into account, as they contribute to the blurring of the Line of Response (LOR) through the ambiguity in the first interaction points. The goal of the Compton reconstruction algorithm is to sort through all the possible scattering sequences, determine the most probable path, and in turn locate the most likely first interaction points and trace the associated LOR.

For each coincident event with I photon interaction points in the LXe (I ≥ 3), there are I − 2 scattering sites. The Compton relation (4.3.1), whose cross-section is described by the Klein-Nishina formula, outlines the basis of the kinematics: it computes the expected scattering angle θE,i from Ei, the energy deposited at the i-th interaction step. With Eγ,i the photon energy entering the i-th interaction (Eγ,1 = 511 keV and Eγ,i+1 = Eγ,i − Ei),

  cos θE,i = 1 − me c² [ 1/(Eγ,i − Ei) − 1/Eγ,i ]   (4.3.1)

The Compton relation thus gives a measure of the expected scattering angle given the energy deposited. Ideally, if the sequence in question is the correct one, the apparent scattering angle θG,i (4.3.2), computed from the interaction position information, will have the same value as θE,i. This provides a quantitative method to discriminate against false sequences:

  cos θG,i = ûi−1 · ûi   (4.3.2)

where

  ûi = (ri+1 − ri) / |ri+1 − ri|   (4.3.3)

with ri the measured position of the i-th interaction site, and r0 taken as the first interaction point of the coincident photon on the opposite side of the detector. The ability to identify the correct sequence, however, depends on the position and energy resolutions of the system.
Hence, a weighted χ² statistic based on the difference between the expected and apparent scattering angles was used (4.3.4), with the instrumental resolution limits entering as error terms in the denominator [14]:

  χ² = Σ_{i=1}^{I−2} (cos θG,i − cos θE,i)² / (σ²cosθG,i + σ²cosθE,i)   (4.3.4)

where the error terms are obtained by propagating the instrumental resolutions into the two angle estimates:

  σ²cosθG,i = Σ_k |∂cos θG,i / ∂r_k|² σr²   (4.3.5)

  σ²cosθE,i = Σ_j |∂cos θE,i / ∂E_j|² σE²   (4.3.6)

Finally, the viable sequence with the lowest test-statistic score was chosen by the reconstruction algorithm, and the associated LOR was identified and recorded. If no suitable interaction sequence was found, the event was discarded. The numerator of equation (4.3.4) is on average larger for Scatter or Random events than for True events; hence a maximum χ² threshold can be set to reject them, or alternatively a weighting scheme can be used (see Sections 5.5 and 5.6). For the purposes of this thesis, however, no effective χ² cut was used, except in the study of the effect of χ² on data acceptance, in order to obtain the maximum count rates and efficiency.
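To make the sequence search concrete, here is a minimal sketch (Python with NumPy; illustrative only) of scoring every candidate ordering of one photon's interaction sites, holding the coincident photon's first site fixed. A single constant variance stands in for the propagated error terms of (4.3.5)~(4.3.6), and the full algorithm would iterate over both photons' orderings jointly.

```python
import itertools
import numpy as np

ME_C2 = 511.0  # keV

def chi2_for_sequence(other_first, hits, sigma2=0.01):
    """hits: [(xyz position, deposited energy keV), ...] in a candidate order.
    other_first: first interaction point of the coincident photon, which
    fixes the incoming direction at this photon's first site."""
    pts = [np.asarray(other_first)] + [np.asarray(p) for p, _ in hits]
    e_gamma, chi2 = 511.0, 0.0
    for i in range(len(hits) - 1):           # every site except the absorption
        e_dep = hits[i][1]
        if e_dep >= e_gamma:
            return np.inf                    # kinematically impossible ordering
        cos_e = 1.0 - ME_C2 * (1.0 / (e_gamma - e_dep) - 1.0 / e_gamma)
        u = pts[i + 1] - pts[i]
        v = pts[i + 2] - pts[i + 1]
        cos_g = float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
        chi2 += (cos_g - cos_e) ** 2 / sigma2
        e_gamma -= e_dep
    return chi2

def best_sequence(other_first, hits):
    """Try all orderings; the first site of the winner anchors the LOR."""
    return min(itertools.permutations(hits),
               key=lambda seq: chi2_for_sequence(other_first, list(seq)))
```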
4.4  True, Scatter, and Random Rates with the NEMA 2001 Standard

Thus far, the discussion in this chapter has been about simulating detection rates from time-independent simulation data. It is important to draw the distinction between rates such as double-without-scatter and True events. While conceptually the same, the former refers to an intrinsic process that is not directly distinguishable in actual measurements. The definitions of True, Scatter, and Random events must instead be consistent and applicable to actual experimental measurements, in which it is not possible to tell intrinsically whether an event is True, Scatter, or Random. To address this, the National Electrical Manufacturers Association published a set of standards for rate measurements on full-sized PET machines [24], which I have adapted and modified to suit the smaller animal imaging systems.

With the NEMA-like rat-sized phantom, the reconstructed LORs were first mapped onto different angular projections, with oblique projections resorted into direct projections using the single-slice rebinning technique [33]. Each projection was then shifted and centered on its source peak, and combined with all other projections. A wide band of 34.2 mm was centered on the source peak and all counts outside this band were set to 0, establishing the total count rate. True count rates were then estimated as all counts within a narrow band of 14.4 mm, centered on the source peak, that were above a flat background level. The background events, i.e. Scatter and Random events, were assumed to be constant over the 14.4 mm narrow band, and were interpolated as the width of the band multiplied by the average event count at its edges. The remaining counts within the central narrow band, above this background, were the true coincidence counts (Figure 4.4.1).

Figure 4.4.1. Example showing how to obtain True and Background rates from combined LOR projections using the NEMA 2001 standards.

The intrinsic Scatter Fraction (SF) of the system is defined as the ratio of the total Scatter background to the total count rate, measured at an activity low enough that the Random rate is negligible (i.e. Scatter is assumed to be all of the background at low activity). For the simulation, this translated to mixing only the double-with-scatter and double-without-scatter events, as the double-random event rate would be negligible at low activities. With a properly mixed data set, the SF could then be obtained by computing the ratio of the background, assumed to be Scatter only, to the total event rate. For higher activities, the True coincidence counts were determined as before by assuming a constant Scatter and Random background over the central narrow band. The Scatter and Random counts were then computed as follows, where SF is the scattering fraction obtained earlier, T the True count from the band analysis, and Ctotal the total count rate:

  S = T · SF / (1 − SF)   [20,21]   (4.4.1)

  R = Ctotal − T − S   [20,21]   (4.4.2)

4.5  Image Reconstruction with MATLAB

For medical imaging purposes, a reconstructed image helps physicians with diagnosis. Numerous methods exist for PET image reconstruction, such as Ordered Subset Expectation Maximization (OSEM) [34], Maximum Likelihood Expectation Maximization (ML-EM) [35], and Algebraic Reconstruction Techniques (ART) [36]. For simplicity, the simple and fast filtered back-projection technique [37,38] was adopted for image reconstruction.

The analysed data were first mapped onto sinogram space, after instrumental blurring was taken into account and, in the case of the LXe scanner, after the Compton reconstruction algorithm was applied, using a bin size of 0.3 mm for the LXe scanner (which had a 0.3 mm RMS nominal position resolution) and 0.8 mm for the F120 (which had a crystal pitch of 1.6 mm). Note that for the F120 the position information was discretized, as each crystal embodied one position coordinate; the position resolution was thus limited by the crystal pitch. Oblique projections were sorted into direct projections with the single-slice rebinning technique [33]. When applicable, the central projection slice was used for image reconstruction in this thesis, to take advantage of the maximal data counts.

For simplicity and practicality of analysis, a custom sinogram data format based on the ROOT data structure was used instead of the standard industry format. To perform image reconstruction, a quick but effective route via MATLAB was therefore chosen: a simple macro extracts the pertinent projection information from the ROOT file into an ASCII text file, in the form of projection row vectors that MATLAB can read and process. The specific implementation of the MATLAB code was adapted directly from Akram [39], with a modification to read the projection matrix from text files. The main code structure, including the modified RAMP filter used, remained unchanged. For an overview of filtered back-projection and the associated filters, see [37,38].
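For readers without the original MATLAB code, the essence of filtered back-projection is short enough to sketch in full. The following is a minimal, self-contained version (Python with NumPy rather than MATLAB; an idealized ramp filter stands in for the thesis's modified RAMP filter):

```python
import numpy as np

def filtered_back_projection(sinogram, angles_deg, image_size):
    """sinogram: (n_angles, n_bins) array of parallel projections.
    Returns an (image_size, image_size) reconstruction."""
    n_angles, n_bins = sinogram.shape
    # Ramp-filter each projection in Fourier space (zero-padded).
    pad = int(2 ** np.ceil(np.log2(2 * n_bins)))
    ramp = np.abs(np.fft.fftfreq(pad))
    filtered = np.real(np.fft.ifft(
        np.fft.fft(sinogram, n=pad, axis=1) * ramp, axis=1))[:, :n_bins]
    # Back-project: smear each filtered projection across the image plane.
    img = np.zeros((image_size, image_size))
    c = image_size // 2
    y, x = np.mgrid[:image_size, :image_size] - c
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        r = x * np.cos(theta) + y * np.sin(theta) + n_bins // 2
        img += proj[np.clip(np.round(r).astype(int), 0, n_bins - 1)]
    return img * np.pi / n_angles

# Usage: reconstruct a 128x128 slice from 180 projections, one per degree:
# image = filtered_back_projection(sino, np.arange(180), 128)
```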
Chapter 5
Simulation Discussion

While it is ultimately the comparative performance of the LXe scanner against the Focus 120 that is of most interest in ascertaining the potential capability of this liquid-medium scanner, it is nevertheless insightful to look in detail at how the analysis algorithm performed: in particular, the behaviour and shortcomings of the Compton reconstruction algorithm, the effects of better energy resolution and more stringent energy window requirements on the background noise, and the overall event topologies and what they may entail for the prospects of next-generation high-resolution PET scanners. This chapter points out the shortcomings of the Compton reconstruction algorithm, possible remedies, and the general effect on background noise. Chapter 6 will then focus on comparisons between the two scanners.

5.1  Event Topology

When photons interacted within the LXe detector, they could Compton scatter multiple times before being fully absorbed. Furthermore, not every scattering site was necessarily registered by the instrument. When two sites were close in proximity, they would register as a single point instead, effectively merging position and energy; in such cases an energy-weighted position would be assigned. In addition, if an interaction point did not deposit sufficient energy to meet the minimum charge energy detection threshold, it would be treated as electronic noise and not recorded. Both factors had the effect of making the final detected event topology consist of fewer resolved sites than were present in reality. Tables 5.1.1~5.1.3 demonstrate this for the case of a 450 keV energy window with the associated 9.4% energy resolution using light and charge information, simulated with the NEMA-like phantom. The responses were nearly identical for scenarios with different energy resolutions and energy windows. For each event type, counts were normalized to the total accepted counts. In the topology column, 1-1 means both photons had only one resolved site each, 1-2 means one photon had one resolvable site and the other had two, and so on.

Table 5.1.1  Event topology fractions for double-without-scatter events: the intrinsic case (i.e. before any instrumental resolution limit was applied), and the scenarios where one or both resolution limits were active. Worth noting is how the topology became simpler as more resolution effects were applied.

  Topology   Intrinsic   2-hit sep. 1 mm only   Minimum 50 keV detection only   Both
  1-1        7.6%        9.7%                   9.7%                            12.3%
  1-2        20.8%       24.4%                  28.0%                           31.6%
  1-3        12.6%       13.1%                  12.5%                           12.0%
  1-4        4.9%        4.4%                   2.2%                            1.8%
  2-2        14.1%       15.0%                  20.2%                           20.3%
  2-3        17.3%       16.3%                  18.1%                           15.5%
  2-4        6.7%        5.5%                   3.2%                            2.3%
  3-3        5.3%        4.4%                   4.0%                            2.9%
  3-4        4.0%        2.9%                   1.4%                            0.9%
  4-4        0.8%        0.5%                   0.1%                            0.1%

An important consequence of the merged points and of the points discarded for insufficient energy, as shown in these tables, was that the majority of events (> 90%) were contained in only 5 scenarios, namely 1-1, 1-2, 1-3, 2-2, and 2-3. As a result, analysis efforts were focused on these 5 scenarios for the sake of simplicity, without loss of generality. This simplification also helped reduce the computational complexity of the algorithm, as a 2-3 topology contains potentially 12 possible trajectories, a 3-3 topology 36, and so on (see the sketch below).
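The trajectory counts scale factorially with the number of resolved sites, which is why restricting attention to the five dominant topologies pays off computationally. A one-liner makes the counting explicit (Python; illustrative):

```python
from math import factorial

def n_candidate_sequences(n_a, n_b):
    """Number of site orderings to test for an nA-nB topology:
    every ordering of photon A's sites times every ordering of photon B's."""
    return factorial(n_a) * factorial(n_b)

# 1-2 -> 2, 1-3 -> 6, 2-2 -> 4, 2-3 -> 12, 3-3 -> 36,
# matching the trajectory counts quoted above.
print([n_candidate_sequences(a, b) for a, b in [(1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]])
```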
Table 5.1.2  Event topology fractions for double-with-scatter events: the intrinsic case (i.e. before any instrumental resolution limit was applied) and the scenarios where one or both resolution limits were active.  Note how the topology distribution became simpler as more resolution effects were applied.

Topology   Intrinsic   1 mm 2-hit sep. only   50 keV threshold only   Both
1-1         7.6%        9.6%                  10.2%                  12.7%
1-2        21.5%       25.0%                  29.4%                  33.1%
1-3        12.8%       13.2%                  12.1%                  11.6%
1-4         4.3%        3.8%                   1.9%                   1.6%
2-2        15.0%       16.1%                  21.2%                  21.1%
2-3        17.4%       16.2%                  17.3%                  14.6%
2-4         6.4%        5.3%                   2.7%                   2.0%
3-3         5.2%        4.2%                   3.5%                   2.4%
3-4         3.7%        2.6%                   1.2%                   0.6%
4-4         0.7%        0.4%                   0.1%                   0.0%

Table 5.1.3  Event topology fractions for double-random events: the intrinsic case (i.e. before any instrumental resolution limit was applied) and the scenarios where one or both resolution limits were active.  Note how the topology distribution became simpler as more resolution effects were applied.

Topology   Intrinsic   1 mm 2-hit sep. only   50 keV threshold only   Both
1-1         7.7%        9.9%                   9.8%                  12.5%
1-2        21.1%       24.7%                  28.3%                  32.0%
1-3        12.7%       13.1%                  12.4%                  11.9%
1-4         4.8%        4.3%                   2.2%                   1.8%
2-2        14.3%       15.2%                  20.3%                  20.4%
2-3        17.3%       16.2%                  18.0%                  15.3%
2-4         6.5%        5.3%                   3.1%                   2.2%
3-3         5.3%        4.3%                   3.9%                   2.8%
3-4         3.9%        2.8%                   1.4%                   0.8%
4-4         0.7%        0.5%                   0.1%                   0.1%

5.2  Acceptance

Based on Compton kinematics, we could in principle suppress the Scatter and Random contributions by setting a χ2 threshold on acceptable events.  However, this came at a penalty to the True count rate as well, which was not always desirable depending on the objective of the PET scan.  To quantify this, we looked at the fraction of accepted True, Scatter, and Random counts as a function of the χ2 threshold.  Here acceptance was defined, for each event type at 1 mCi activity, as the ratio of the total number of counts below the χ2 threshold to the total number of detected counts with no threshold in place.  The results for the different event topologies are plotted in Figure 5.2.1 for 9.4% energy resolution, a 450 keV energy window threshold, and a 1 mm 2-hit separation threshold.  The general trend held for other energy resolutions and energy window thresholds.

Figure 5.2.1.  Acceptance of True, Scatter, and Random coincidences as a function of the χ2 threshold, for a) 1-2, b) 1-3, c) 2-2, d) 2-3 event topologies, with the simulated NEMA phantom.  Data shown are for 9.4% energy resolution, 1 mm 2-hit separation, a 450 keV energy window, and a source activity of 1 mCi.  The 1-1 topology is not shown because it has only 2 interaction sites in total, while the Compton reconstruction algorithm requires at least 3.

From Figure 5.2.1 it was apparent that the different event topologies had different True-to-Background ratios at different χ2 thresholds, where the background comprised Scatter and Random events combined.  In particular, topologies involving doublets, i.e. at least one photon with exactly 2 resolvable interaction sites, as in the 1-2 and 2-2 topologies, typically had worse True-to-Background ratios than the triplets, in which at least one photon had 3 resolvable sites, as in 1-3 and 2-3.  It was also evident that one could reduce the Random fraction while retaining most of the True events by setting a χ2 threshold.  For example, in the 1-3 case, a threshold of 10 meant only about 55% of Random events would be accepted, in comparison to 75% of Scatter and nearly 95% of True events.
Note that this corresponds to a 100% - 55% = 45% reduction in the Random events and a 100% - 75% = 25% reduction in the Scatter events that had originally been accepted by the energy window rejection and had passed the LOR check.  The χ2 threshold thus provided a way to further reduce the backgrounds.  The fact that the triplet topologies had better True-to-Background ratios could be understood from the triplets being, by virtue of their additional interaction sites, more kinematically constrained than the doublet topologies.  In addition, there existed an inherent limitation in the doublet topology that hindered the reconstruction algorithm's ability to identify the correct sequence.  This property was termed the Double-Site Ambiguity (DSA) and is discussed below.

5.3  Double-Site Ambiguity

The Double-Site Ambiguity (DSA) was an inherent problem of doublet topologies (e.g. 1-2 and 2-2) that prevented the correct identification of the actual interaction sequence in some circumstances.  As a reminder, doublets here meant the same double-without-scatter events of the simulation (Section 4.2) that ended up with only 2 resolvable interaction sites for one or both photons in the pair.  It made little sense to include double-with-scatter and double-random events in this discussion, as no identified LOR could be considered "correct" for those events.

The DSA applied to the photon that had only 2 resolvable interaction sites: when both interaction steps of this photon had comparable energy deposits, as seen under the influence of limited system resolution, they had comparable kinematically expected scattering angles (4.3.3).  Furthermore, comparable energy deposits in a doublet also meant the actual scattering angle was close to 90°.  Since the 2 interaction points were separated by a short distance (typically up to a few mm) compared with the baseline distance to the 2nd photon on the opposite side of the detector (hundreds of mm in a small animal scanner), geometrically this formed a nearly isosceles triangle, which meant the apparent geometric scattering angles associated with the two possible trajectories (4.3.3) were also comparable (Figure 5.3.1).  Therefore, the numerators of the Compton statistics (4.3.4) for both trajectories were of the same magnitude, and in such cases the reconstruction technique failed to favour the correct trajectory, having roughly equal chance of selecting either.  Figure 5.3.2 illustrates this effect, where the two complementary dips in identification fraction (the fraction of times the algorithm selected the correct trajectory), and their associated widths, mark the region of scattering angles where DSA prevented the efficient identification of the true trajectory.

Figure 5.3.1.  Geometric representation of the Double-Site Ambiguity topology, where comparable energy deposits and apparent scattering angles prevented efficient identification of the true trajectory.  The figure is not to scale.

Figure 5.3.2.  Identification fraction vs. scattering angle for the 1-2 topology, demonstrating the complementary nature of the Double-Site Ambiguity, where efficient identification of the true trajectory was hampered around a 90° scattering angle.
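The ambiguity can be made concrete with a minimal sketch of the trajectory test for a 1-2 topology.  This is not the thesis code: the variable names, the example numbers, and the simple squared-residual statistic (standing in for (4.3.4)) are all assumptions for illustration.

    % Hedged sketch of trajectory selection for a 1-2 topology (illustrative).
    % a: site of the single-site photon; b1, b2: doublet sites (mm);
    % e1, e2: energy deposits at b1, b2 (keV).  Example numbers only.
    a  = [120 0 0];  b1 = [-118 2 0];  b2 = [-121 5 1];
    e1 = 260;  e2 = 251;                      % comparable deposits -> ~90 deg scatter
    mec2 = 511;                               % electron rest energy (keV)
    cosgeo = @(p,q,r) dot(q-p, r-q) / (norm(q-p) * norm(r-q));  % apparent angle at q
    coskin = @(edep) 1 - mec2 * (1/(511-edep) - 1/511);         % Compton kinematics
    sigma2 = 0.05^2;                          % assumed angular variance (placeholder)
    res = [cosgeo(a,b1,b2) - coskin(e1);      % candidate trajectory a -> b1 -> b2
           cosgeo(a,b2,b1) - coskin(e2)];     % candidate trajectory a -> b2 -> b1
    chi2 = res.^2 / sigma2;                   % stand-in for the statistic (4.3.4)
    [~, best] = min(chi2);                    % with e1 ~ e2, both chi2 are comparable

With comparable deposits (e1 ≈ e2 ≈ 255 keV, i.e. a scattering angle near 90°), the two residuals are of the same magnitude, so the minimum-χ2 choice is essentially a coin flip: precisely the DSA.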
5.4  Background Noise

Due to the DSA discussed above, it was evident that doublet topologies were "noisier" than triplets.  This explained why the acceptance-versus-χ2 curves of Figure 5.2.1 showed poorer True-to-Background ratios for the doublets than for the triplets.  The intrinsic difference in noise level between the doublet and triplet topologies could be demonstrated by comparing the point spread function, i.e. the sinogram profile of the system's response to a point source in the center of the field of view.  The point spread function was obtained by simulating, processing, and mapping the LORs onto sinogram space, as described previously, for the LXe detector with a central point source at 1 mCi activity, 0.5 μs deadtime, a 6 ns coincidence window, a 450 keV energy window, 9.4% energy resolution, a 2-hit separation of 1 mm, and no χ2 threshold limit.  While no specific imaging time span was set a priori, the statistics gathered from the simulation were equivalent to 20 min of imaging time.  The results are shown in Figures 5.4.1 (normalized to total counts) and 5.4.2 (un-normalized), demonstrating the relative noise content and relative signal strength of each topology.

Figure 5.4.1.  Point spread functions of the different event topologies for a point source simulated in the center of the field of view at 1 mCi activity, 0.5 μs deadtime, 6 ns coincidence window, 9% energy resolution, a 2-hit separation of 1 mm, and a 450 keV energy window with the LXe detector.  The bin width was set at 0.3 mm, equal to the intrinsic position resolution of the simulated LXe detector.  Data were normalized within each topology to better demonstrate the noisy tails on either side of the central peak, in particular how noisy the doublet topologies were.

Figure 5.4.2.  Point spread functions of the different event topologies for a point source simulated in the center of the field of view at 1 mCi activity, 0.5 μs deadtime, 6 ns coincidence window, 9% energy resolution, a 2-hit separation of 1 mm, and a 450 keV energy window with the LXe detector.  The bin width was set at 0.3 mm, equal to the intrinsic position resolution of the simulated LXe detector.  Data here were un-normalized, to demonstrate that the noisiest topology (i.e. 1-2) also accounted for the most statistics.

5.5  Noise Reduction Schemes: Filtering

While acknowledging the inherent trade-off between a good signal-to-noise ratio and a larger total data volume, it was nevertheless insightful to discuss and test the various noise reduction schemes available, to see how they behaved and to gain some insight as to when each scheme might be useful.

In general, two categories of techniques were at our disposal for noise reduction.  The first was the filtering method, in which physical limits were set on the accepted data, such as the χ2 threshold cut.  The effect of the χ2 threshold on event acceptance was described in Section 5.2, where the figures showed the acceptance of True, Scatter, and Random events as functions of the χ2 threshold for the different event topologies.  As always, any cut reduces the True counts and the total data rate to some extent.  However, since the relative abundance of Random to True events grows with source activity, a χ2 threshold cut aimed at Random reduction could be quite effective at higher activities, provided that obtaining the maximal data rate was not the main imaging objective.

While a non-zero energy window threshold is almost always applied in PET imaging, the energy window qualified in essence as another filtering method, and one which played a large role in noise control.
In principle, the energy window threshold helped to reject Scatter events, which had lost some unaccounted-for energy.  The energy window also helped to reject some Random events, in particular those resulting from single-scattered photons or from incomplete interactions and escaped photons.  However, due to real-life instrumental energy resolution limits, some otherwise True events might be rejected as well through random fluctuations.  Table 5.5.1 tabulates the effect of changing the energy window on the fraction of accepted events, normalized to the case of 9% energy resolution and a 250 keV energy window, with the simulated NEMA phantom.  As can be seen, good energy resolution was paramount for maintaining most of the True event rate while suppressing the background.  For example, at an energy resolution of 19% FWHM, 31% of the True events were lost when using a 450 keV energy window, but only 14% were lost at an energy resolution of 9%.  At the same time, Scatter counts were proportionally higher at the worse energy resolution, as expected.  Superior energy resolution was therefore required to make good use of the more aggressive energy window settings.

Table 5.5.1  Fractions of True, Scatter, and Random events accepted at different energy resolutions (FWHM) for different energy windows.  Counts are normalized to the counts at a 250 keV window with 9% FWHM energy resolution.  A 2-hit separation of 1 mm and a dead time of 0.5 μs were used, with a simulated source activity of 1 mCi for the NEMA phantom.

                 True events      Scatter events    Random events
Energy window    9%      19%      9%      19%       9%      19%
250 keV          100%    95%      100%    108%      100%    100%
350 keV          91%     87%      75%     83%       74%     74%
450 keV          86%     69%      44%     45%       57%     48%

A filtering method that came at nearly no cost to the True data rate was the LOR check, which required every coincident pair of photons to have at least one viable LOR, among all its possible trajectories, passing through the phantom.  Naturally, this method was particularly useful for Random reduction, since Random coincidences originate from two independent annihilations.  In the simulation we found that 84% of the Random events accepted by the energy window filter were rejected by the LOR check, compared with 12% of Scatter and 8% of True events, for the case of a 450 keV energy window with 9% energy resolution.  The general trend held for lower energy windows and worse energy resolutions.  A minimal sketch of the energy-window filtering step follows.
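As an illustration of the filtering just described, the following sketch smears ideal summed photon energies with a Gaussian of the stated resolution and applies a lower energy threshold.  The event energies and the absence of an upper window edge are assumptions for illustration, not the thesis code.

    % Hedged sketch of the energy-window filter (illustrative only).
    E0     = 511;                         % photopeak energy (keV)
    sigmaE = 0.094 * E0 / 2.355;          % 9.4% FWHM -> Gaussian sigma (keV)
    % Hypothetical summed energies of detected photons: full-energy events
    % at 511 keV and a few down-scattered ones that lost energy in the phantom.
    E_true = [511 511 340 511 430 511];
    E_meas = E_true + sigmaE .* randn(size(E_true));  % instrumental smearing
    accepted = E_meas >= 450;             % 450 keV energy window (lower threshold)
    fprintf('accepted %d of %d photons\n', sum(accepted), numel(accepted));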
5.6  Noise Reduction Schemes: Weighting

The second category of noise reduction schemes was the weighting method.  In contrast to the filtering method, weighting used all available data rather than rejecting some, by assigning different degrees of significance to different events.  In a way, the filtering method could be thought of as a special case of the weighting scheme in which the weights were either 1 or 0.  There are many possible weighting schemes and scaling functions, more than can be listed here; hence only methods which arise naturally out of the different background-to-signal ratios of the different topologies are described.

First, recall the DSA problem associated with the doublet topologies.  Figure 5.3.2 showed that the efficiency of determining the correct trajectory depended on the scattering angle.  In principle, then, a weighting factor could be assigned to each doublet event based on its apparent scattering angle and the associated identification fraction.  For instance, a weight of 1 may be assigned to scattering angles with an identification fraction of 1 and a weight of 0 for an identification fraction of 0.5, following a linear scale in between, or another scaling may be used (e.g. quadratic weighting to suppress mainly the scattering angles with the lowest efficiencies); a sketch of both scales is given after the figure discussion below.  Figure 5.6.1 demonstrates the effects of these two weighting scales, one linear and one quadratic, on the 1-2 doublet topology, using the sinogram profiles of a central point source.

Figure 5.6.1.  Normalized point spread function for the 1-2 topology, showing the effects of a) no weighting, b) weighting with a linear function, and c) weighting with a quadratic function on the tail background noise contribution, as well as the reduction in event counts relative to the un-weighted case.  Data were simulated with the point source in the center of the field of view at 1 mCi activity, 0.5 μs deadtime, 6 ns coincidence window, 9% energy resolution, a 2-hit separation of 1 mm, and a 450 keV energy window with the LXe detector.

With the weighting approach, the background counts relative to the central source peak were reduced, which appears in the figures as a relatively smaller background.  This reduction, however, came at the cost of a reduction in the total event count, as noted in the figure labels.  This weighting-by-identification-fraction method could similarly be extended to other topologies, and further improvements in the signal-to-background ratio might be obtained, but always with the associated event count cost.
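The two scales used above can be written compactly.  The following is a minimal sketch under the stated convention (weight 1 at identification fraction 1, weight 0 at 0.5); the function names and the exact quadratic form are illustrative assumptions.

    % Hedged sketch of identification-fraction weighting for doublets.
    % f is the identification fraction at the event's apparent scattering
    % angle, read off a curve like Figure 5.3.2; f = 0.5 means a pure guess.
    w_lin  = @(f) max(0, 2*f - 1);       % linear: f = 1 -> w = 1, f = 0.5 -> w = 0
    w_quad = @(f) max(0, 2*f - 1).^2;    % quadratic: suppress mainly the worst angles
    w_lin(0.95)    % event at a well-identified angle keeps most of its weight
    w_quad(0.55)   % event near the 90-degree ambiguity is strongly suppressed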
5.7  Noise Reduction Schemes: Remarks

The previous two sections briefly discussed some of the noise reduction schemes tested with the LXe simulation results.  As mentioned earlier, other schemes were possible but were not tested, owing to the time that would have been required.  Nonetheless, one idea of interest is worth mentioning.  The two photons from an electron-positron annihilation are mutually orthogonally polarized, so scattering information may be used to select True events against the background [Private communication with D. Bryman].  Because Compton scattering angles depend preferentially on the polarization, the two photons from an annihilation would have correlated scattering angles.  Hence, in principle, a polarized scattering-correlation weighting scheme could be constructed to evaluate the likelihood of a trajectory being the correct one and, by extension, how likely the event is to be a True event.  In theory this would help reduce backgrounds when the photons are either not orthogonally polarized, as in Random events, or when the polarization information has been lost to prior scatterings, as in Scatter events.  The applicability of this method would be limited, however, as it requires both photons to scatter at least once each, i.e. 2-2, 2-3, and more complicated topologies.  Section 5.1 showed that the 2-2 and higher complexity topologies accounted for only 20% of the total event topology distribution, limiting the range of application of this concept.

The main theme of noise reduction, as stressed repeatedly, was trade-offs.  Because instrumental resolution limited our ability to filter out backgrounds at little cost to desirable signals, any further noise suppression eventually had to come with associated signal penalties.  Sometimes the better noise suppression may be worth the signal cost, and sometimes not.  In the end, one must keep the imaging objectives in mind when deciding which noise reduction scheme, if any, will best achieve the goals.

Chapter 6  Contextualizing the Simulation

In Chapter 4, we detailed how to compute, using simulated data, the counting rates of the detector via a Poisson model.  In Chapter 5, we discussed the simulation analysis of the liquid xenon detector and the successes and shortfalls of the Compton reconstruction algorithm.  We also discussed possible methods for further noise suppression, at the cost of reduced good signal and overall statistics.  It was concluded that there was no single solution that worked in every case; instead, for the best results, the noise suppression methods had to be tailored to the intended imaging objectives.

In this chapter, the simulated results for the liquid xenon detector are compared with the simulated results for the Focus F120 MicroPET detector.  I will present the sensitivity and the point spread function of both simulated detectors, using a central point source, followed by the simulated scatter fraction and noise-equivalent count rates with the NEMA-like rat-size phantom.  Then the reconstructed images from both detectors imaging the micro-Derenzo contrast phantom will be presented.  Finally, I will discuss what the simulated results imply for the potential capability of the liquid xenon technology in practical measurements.

6.1  Sensitivity

The sensitivity of a PET detector is defined as the ratio of the attenuation-less True coincidence rate to the total source activity for a point source at the center of the field of view, at an activity low enough that the Random contribution is negligible.  In real measurements, the source must be enclosed by attenuators to provide containment and annihilation sites for the positrons.  By measuring the True coincidence rates with attenuators of different thicknesses, one extrapolates to the attenuation-less coincidence rate at zero attenuator thickness, and uses it to quantify the sensitivity of the detector.

From the simulation standpoint, we could bypass the attenuator and simulate a point source directly.  This was done for both the Focus 120 and the LXe detector, under identical conditions where applicable, and the results are listed in Table 6.1.1 (see Section 4.4 for details on obtaining the True coincidence rate).  While the Focus results were consistent with experimental data [19,20,21], note that the LXe PET results represent the ideal case, in which the entire liquid volume was active and covered the full detection ring without segmentation.  In practice, a LXe detector would be segmented into sectors for reasons of modularity and engineering constraints, introducing gaps between sectors and consequently lowering the sensitivity.  Conservatively, we determined that this geometric constraint could reduce the detection efficiency to approximately 9/10.  Additionally, within each sector a small fraction of the volume would be inefficient for light detection, owing to the close proximity of the photo-detector arrays; we estimated that this effect could reduce the detection efficiency to 5/6.  Combining the two factors gives a final detection efficiency of (9/10) × (5/6) = 3/4, or 75%, which was applied to the simulated results to extrapolate the sensitivity of a real detector, as reported in Table 6.1.1.
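In code form the correction is a single product; a sketch (the raw 450 keV value is inferred from Table 6.1.1 by dividing out the 75% factor, and is an illustration rather than a quoted simulation output):

    % Geometric-efficiency correction applied to the simulated sensitivity.
    gap_eff   = 9/10;                 % inter-sector gaps (conservative estimate)
    light_eff = 5/6;                  % volume inefficient for light detection
    overall   = gap_eff * light_eff;  % = 3/4, i.e. 75%
    raw_450   = 0.116;                % ideal-volume sensitivity at 450 keV (inferred)
    sens_450  = overall * raw_450;    % ~= 0.087, the 8.7% reported in Table 6.1.1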
Even with the geometric efficiency reduction factored in, the simulation suggested that the LXe detector had approximately 3 times or better the sensitivity of the Focus 120 under the same operating conditions.  This was understood as a direct consequence of the LXe scanner having more active detection volume than the F120, which contained considerable dead space between its crystals.

Table 6.1.1  Simulated sensitivity for the Focus 120 and the LXe detector, with the 75% geometric reduction taken into account, for a 6 ns coincidence window and 0.5 μs dead time.

Energy window   LXe PET*   Focus 120
250 keV         10.2%      3.5%
350 keV          9.3%      3.1%
450 keV          8.7%      2.6%
*Simulated with a 1 mm 2-hit separation and an energy resolution of 9% FWHM at 511 keV.

6.2  Point Spread Function

As mentioned before, the sinogram profile for a point source located in the center of the field of view is commonly referred to as the Point Spread Function, or PSF for short.  The PSF characterizes a detector's response to a localized point source and can give a measure of the detector's position resolution.  Note, however, that in the simulations performed here the position resolution of the LXe detector was a parameterized input, so the PSF cannot be used to gauge it.  For the LXe detector, a 0.3 mm bin width was used, equal to the position resolution chosen for the simulated detector based on a wire pitch of 1 mm.  For the F120, a 0.8 mm bin width was used, equal to half the crystal pitch.  The results are shown in Figure 6.2.1.

Figure 6.2.1.  Point spread functions for the a) LXe and b) Focus 120 detectors, with the point source in the center of the field of view at 1 mCi activity, 0.5 μs instrumental deadtime, and a 6 ns coincidence window, at different energy windows.  9% energy resolution and a 2-hit separation of 1 mm were used for the LXe detector, while an energy resolution of 18% was used for the Focus 120.  A χ2 threshold of 1 was set for the LXe detector for noise reduction; no weighting scheme was used.

The important feature in Figure 6.2.1 was the noise level, characterized by the broad tail in both cases.  From the PSF it was evident that both detectors had similar noise backgrounds (see the next section for a quantitative discussion), which may seem counterintuitive given the LXe's better energy resolution.  In the F120, however, the noise was mainly due to Scatter events, whereas in the LXe it was mostly due to the impact of DSA.  The RMS values of the central peaks were not of interest, because the scanner position resolutions were input parameters of the simulations.

6.3  Scatter Fraction

One of the main advantages of using LXe was that, with the enhanced position and energy resolution, background reduction could be improved.  Indirect evidence to this effect was presented in Chapter 5, where the True-to-Background ratios of the acceptance vs. χ2 threshold curves (Figure 5.2.1) suggested such a capability.  To quantify the system's performance, simulations of the intrinsic scatter fraction with the NEMA-like rat-sized phantom were performed, with no weighting scheme applied; the χ2 threshold used is noted with Table 6.3.1.
As defined in Section 4.4, the scatter fraction was the ratio of the background to the total event rate, measured at an activity low enough that the Random contribution was negligible; see Section 4.4 for further details.

At first glance, Table 6.3.1 shows that the scatter fraction was not improved as significantly as one might expect from the enhanced energy resolution of the LXe.  In reality, the apparent "scattered" events in the LXe case at the high energy window were largely misidentified doublets resulting from the DSA (see Section 5.3), rather than Scatter events in the conventional sense.  Although this made no real difference for image reconstruction, it is important to point out that better energy resolution would not significantly reduce this part of the background: these events had not lost energy prior to detection, and thus were not subject to the energy threshold filtering the way actual Scatter events would be.  The simulated results for the 250 keV and 350 keV energy windows with the F120 scanner were consistent with experimental measurements [19,20,21]; this supported the simulation methodology and, by extension, gave credence to the claim that the LXe could perform on par with existing detectors.  Furthermore, the LXe detector's improved sensitivity meant that the overall statistics, and hence image quality, would improve further.  To examine the full effect, we need to look at the noise-equivalent count rates of both systems, discussed in the next section.

Table 6.3.1  Simulated scatter fraction for the Focus 120 and the LXe detector, with a 6 ns coincidence window and 0.5 μs dead time, with the NEMA phantom.

Energy window   LXe PET*   Focus 120
250 keV         31%        35%
350 keV         23%        24%
450 keV         20%        22%
*Simulated with a 2-hit separation of 1 mm and an energy resolution of 9% FWHM; a χ2 threshold of 1 was chosen, at which the acceptance of True coincidences was ~50%.

6.4  Noise Equivalent Count Rates

The Noise Equivalent Count Rate (NECR), defined as the square of the True rate divided by the total detected rate, under the assumption that the Random subtraction method is noiseless, is a commonly used performance indicator for PET (a one-line form is sketched after Figure 6.4.1).  With the heightened sensitivity of the LXe detector, the associated NECR was expected to be higher than that of the Focus 120 at any given activity; and this was what was found, as shown in Figure 6.4.1.  The simulation showed a dramatic 4-fold improvement in the NECR for the LXe system, with a maximal NECR of 340 kcps at 80 MBq activity for the LXe, in contrast with a maximal NECR of 80 kcps at 80 MBq for the Focus detector.  At a 250 keV energy threshold, the improvement in NECR was a factor of 3, roughly equal to the increase in sensitivity.  At a 450 keV energy threshold, the LXe detector outperformed the F120 by approximately a factor of 4; this was understood as the result of further Scatter suppression by the more stringent 450 keV energy window in place of the 250 keV one.

Figure 6.4.1.  Noise equivalent count rate as a function of source activity at different energy windows for the LXe and Focus 120 detectors, simulated with the NEMA phantom.  An instrumental dead time of 0.5 μs was used, along with a 6 ns coincidence window.  9% energy resolution at 511 keV and a 2-hit separation of 1 mm were used for the LXe detector, while an energy resolution of 18% at 511 keV was used for the Focus 120.  No χ2 threshold and no weighting scheme were used for the LXe detector, in order to obtain the maximal rate.
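The definitions used in this and the previous section reduce to one-liners; a sketch with made-up rates (the T, S, R values below are hypothetical, not simulation outputs):

    % Scatter fraction (low activity, Randoms negligible) and NECR, as
    % defined in the text.  Rates in counts per second; numbers invented.
    sf   = @(S, T)    S ./ (S + T);            % scatter fraction
    necr = @(T, S, R) T.^2 ./ (T + S + R);     % noiseless Random subtraction assumed
    sf(2.0e4, 8.0e4)           % = 0.20, e.g. a 20% scatter fraction
    necr(3.0e5, 7.5e4, 2.5e5)  % example NECR at one activity point (cps)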
While no χ2 threshold was set in producing Figure 6.4.1, so as to obtain the maximal data rate, we concluded this was a fairly reasonable approximation of the conditions under which a real LXe detector might operate.  We believed the LXe's 3-fold improvement in sensitivity, and therefore in statistics, should offset the need for further background reduction in most situations.  In scenarios where further background reduction is desired, the NECR peak would be reduced as a result, but the reduction was not likely to exceed 25%, so the LXe detector should still deliver a considerably higher NECR than existing systems.

6.5  Reconstructed Images

One of the goals of PET scanners is to produce images for diagnosis.  For this purpose, the micro-Derenzo image contrast phantom was used in the simulation, and a MATLAB filtered back projection with a modified RAMP filter cut off at the Nyquist frequency was employed (see Section 4.5 for details).  Projection bin widths were 0.8 mm for the Focus and 0.3 mm for the LXe, as discussed previously.  Positron range was simulated with an RMS range of 0.5 mm, corresponding to the range of Fluorine-18, a commonly used isotope in PET.  The acollinearity effect was not simulated.  The resulting image comparison is shown in Figure 6.5.1.  The ring patterns for the Focus 120 were a normal artifact of the filtered back projection algorithm coupled with the discretized coordinate system of the scintillating crystals.  Furthermore, while the Focus detector had trouble resolving rods smaller than 1.6 mm in diameter, the LXe detector was capable of resolving rods close to 1.0 mm in diameter.  Since the typical positron range is comparable to 1 mm or more for the isotopes used in medical imaging, a resolution better than 1 mm would not improve the image further.

Figure 6.5.1.  Reconstructed images of the micro-Derenzo phantom immersed in water, for a 450 keV energy window, with the Focus 120 and the LXe detectors, using the central 2D slices.  Rod diameters were, counter-clockwise from the bottom, 2.0 mm, 1.8 mm, 1.6 mm, 1.4 mm, 1.2 mm, and 1.0 mm; rod-to-rod separation was twice the rod diameter.  Event statistics correspond roughly to 20 minutes of imaging at a source activity of 1 mCi, and no χ2 threshold was used.  A modified RAMP filter with the usual Nyquist cut-off was applied.  See Section 4.5 for details of the reconstruction algorithm.

Conclusion

In this thesis I discussed the potential capability of liquid xenon technology as it pertains to positron emission tomography.  Owing to liquid xenon's fast scintillation decay, improved timing and position resolution, and high charge and light yields, a liquid xenon time projection chamber promised to provide unprecedented imaging capabilities.  To explore the feasibility of using LXe technology for PET applications, simulation methodologies and a count rate model were developed to evaluate the potential of the LXe system.  In doing so, the performance of a Compton reconstruction algorithm was tested, and we discussed the inherent difficulties, arising from geometric considerations, in selecting the correct Lines of Response for some interactions.  We then simulated a conventional scintillating-crystal detector, the MicroPET Focus 120, and compared the simulated performance of the Focus and LXe scanners.
We found that in many respects the LXe performed better than the Focus detector, most dramatically in the improvements in sensitivity (3-fold) and noise-equivalent count rate (4-fold).  There was no significant improvement in the scatter fraction, but this was found to be the result of the double-site ambiguity rather than a failure of the LXe system to reject actual Scatter events.  This last point, however, meant that it would be difficult to improve the scatter fraction further without risking the loss of good data through weighting or filtering schemes.  The biggest advantages of the liquid xenon detector were its improvements in detection sensitivity, energy resolution, and position resolution, all leading to improved images.

In addition, it was possible to further reduce noise via different filtering and weighting techniques, though usually at some cost to the overall statistics.  There was always a trade-off between noise content and good statistics, one which we must be mindful of in order to optimize our strategies for the imaging objectives at hand.

In conclusion, this thesis sought to examine the potential of liquid xenon technology for medical imaging.  I found the liquid xenon approach to be a promising tool for PET applications, as demonstrated by its enhanced performance, in terms of commonly used image quality estimators, in comparison with commercially available scanners.

Citations

1. GE Healthcare.  Medcyclopedia Standard Edition.  http://www.medcyclopedia.com/
2. C. M. Lederer, J. M. Hollander, I. Perlman.  Table of Isotopes.  6th edition, New York: Wiley, 1967.
3. B. E. Cooke, A. C. Evans, E. O. Fanthome, R. Alarie, A. M. Sendyk.  "Performance figures and images from the Therascan 3128 positron emission tomograph".  IEEE Trans. Nucl. Sci. 31(1):640-644, 1984.
4. M. E. Casey, E. J. Hoffman.  "Quantitation in Positron Emission Computed Tomography: 7. A technique to reduce noise in accidental coincidence measurements and coincidence efficiency calibration".  J. Comput. Assist. Tomogr. 10:845-850, 1986.
5. W. H. Wong, N. A. Mullani, E. A. Philippe, R. Hartz, K. L. Gould.  "Image Improvement and Design Optimization of the Time-of-Flight PET".  J. Nucl. Med. 24(1):52-60.
6. V. Astakhov, P. Gumplinger, C. Moisan, T. J. Ruth, V. Sossi.  "Effect of depth of interaction decoding on resolution of PET: a simulation study".  IEEE Nuclear Science Symposium Conference Record, 16 Nov. 2002, Vol. 2, pp. 965-969.
7. T. Yamaya, N. Hagiwara, T. Obi, M. Yamaguchi, K. Kita, N. Ohyama, K. Kitamura, T. Hasegawa, H. Haneishi, H. Murayama.  "DOI-PET image reconstruction with accurate system model reducing redundancy of imaging system".  IEEE Nuclear Science Symposium Conference Record, 16 Nov. 2002, Vol. 2, pp. 1226-1230.
8. V. N. Solovov, A. Hitachi, V. Chepel, M. I. Lopes, R. Ferreira Marques, A. J. P. L. Policarpo.  "Detection of scintillation light of liquid xenon with a LAAPD".  Nucl. Instrum. Methods Phys. Res. A488:572-578, 2002.
9. E. Conti.  "Liquid Xenon for Detection of Low and Intermediate Energy Gamma-Rays".  In "Lisbon 1999, Calorimetry in high energy physics", pp. 97-104.
10. M. I. Lopes, V. Chepel, V. Solovov, R. Ferreira Marques, A. J. P. L. Policarpo.  "Positron Emission Tomography Instrumentation: Development of a Detector Based on Liquid Xenon".  LIP-Coimbra, Department of Physics, University of Coimbra, Portugal, pp. 675-680.
11. E. Aprile, K. L. Giboni, P. Majewski, K. Ni, M. Yamashita.  "Observation of Anti-correlation between Scintillation and Ionization for MeV Gamma-Rays in Liquid Xenon".  arXiv:0704.1118, 2007.
12. M. Balcerzyk, M. Moszynski, M. Kapusta, D. Wolski, J. Pawelk, C. L. Melcher.  "YSO, LSO, GSO and LGSO. A Study of Energy Resolution and Nonproportionality".  IEEE Trans. Nucl. Sci. 47(4):1319-1323, 2000.
13. J. Humble.  Physics Simulation with Java.  KTH, Kurskod 5A1418, May 29, 1999.  http://www.student.nada.kth.se/~f93-jhu/phys_sim/compton/Compton.htm
14. U. G. Oberlack, E. Aprile, A. Curioni, V. Egorov, K. L. Giboni.  "Compton scattering sequence reconstruction algorithm for the liquid xenon gamma-ray imaging telescope (LXeGRIT)".  arXiv:astro-ph/0012296, 13 Dec. 2000.
15. GEANT4 website.  http://cern.ch/geant4/
16. ROOT website.  http://root.cern.ch/
17. MATLAB website.  http://www.mathworks.com/
18. Siemens Medical Solutions website.  http://www.medical.siemens.com/
19. Y.-C. Tai et al.  "Performance Evaluation of the microPET Focus: A Third-Generation microPET Scanner Dedicated to Animal Imaging".  J. Nucl. Med. 46:455-463, 2005.
20. J. S. Kim et al.  "Performance Measurement of the microPET Focus 120 Scanner".  J. Nucl. Med. 48:1527-1535, 2007.
21. R. Laforest, D. Longford, S. Siegel, D. F. Newport, J. Yap.  "Performance evaluation of the microPET-Focus-F120".  IEEE Nuclear Science Symposium Conference Record, 16-22 Oct. 2004, Vol. 5, pp. 2965-2969.
22. S. Siegel, M. Eriksson, L. Eriksson, M. Casey, R. Nutt.  "An alternative to polishing the surface of scintillation detectors".  IEEE Nuclear Science Symposium Conference Record, Vol. 3, pp. 1212-1214, 1999.
23. B. Bai, A. Ruangma, R. Laforest, Y.-C. Tai, R. M. Leahy.  "Positron range modeling for statistical PET image reconstruction".  IEEE Nuclear Science Symposium Conference Record, 19-25 Oct. 2003, Vol. 4, pp. 2501-2505.
24. National Electrical Manufacturers Association.  "NEMA Standards Publication NU2-2001: Performance Measurements of Positron Emission Tomographs".  Rosslyn, VA: NEMA, 2001.
25. T. F. Budinger, S. E. Derenzo, R. H. Jagust, W. J. Valk.  "High-resolution PET [Positron Emission Tomography] for Medical Science Studies".  Lawrence Berkeley Laboratory (LBL), September 1989.
26. F. Sauli.  Instrumentation in High Energy Physics.  World Scientific, June 1992.  ISBN 978-9810214739.
27. D. Bryman et al.  "Investigation of liquid xenon detectors for PET: simultaneous reconstruction of light and charge signals from 511 keV photons".  IEEE Nuclear Science Symposium and Medical Imaging Conference Record 2007, submitted for publication.
28. F. Xu.  "Development of a LXe-TPC Compton telescope for gamma-ray astronomy".  Dissertation, Columbia University.
29. E. Aprile et al.  "3D Position Sensitive XeTPC for Dark Matter Search".  7th UCLA Symposium on Sources and Detection of Dark Matter and Dark Energy in the Universe, 2006.  http://arxiv.org/abs/astro-ph/0609714
30. C. Grupen.  Particle Detectors.  Cambridge University Press, 1996.
31. L. Le Meunier, F. Mathy, P. D. Fagret.  "Validation of a PET Monte-Carlo simulator with random events and dead time modeling".  IEEE Trans. Nucl. Sci. 50(5):1462-1468, Oct. 2003.
32. S. E. Boggs, P. Jean.  "Event reconstruction in high resolution Compton telescopes".  arXiv:astro-ph/0005250, 11 May 2000.
33. M. E. Daube-Witherspoon, G. Muehllehner.  "Treatment of Axial Data in Three-Dimensional PET".  J. Nucl. Med. 28:1717-1724, 1987.
34. H. Chen, X. Lei, D. Yao.  "An improved ordered subsets expectation maximization positron emission computerized tomography reconstruction".  Computers in Biology and Medicine 37(12):1780-1785, 2007.
35. A. P. Dempster, N. M. Laird, D. B. Rubin.  "Maximum likelihood from incomplete data via the EM algorithm".  J. R. Stat. Soc., Ser. B, 39:1-38, 1977.
36. R. Gordon, R. Bender, G. Herman.  "Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and X-ray photography".  J. Theor. Biol. 29:471-481, 1970.
37. A. Kak, M. Slaney.  Principles of Computerized Tomographic Imaging.  IEEE Press, 1988.  ISBN 0-87942-198-3.
38. F. Natterer, F. Wubbeling.  Mathematical Methods in Image Reconstruction.  Society for Industrial and Applied Mathematics.  ISBN 0-89871-472-9.
39. W. Akram, S. Gee, C. Gamiz, C. Pan, J. Romberg.  Image Processing Using SPECT Analysis.  Rice University.  http://www.owlnet.rice.edu/~elec431/projects96/DSP/index.html
