UBC Theses and Dissertations
Radon Transform in three dimensional image reconstruction from projections. Stazyk, Michael Walter, 1990.

Full Text

THE RADON TRANSFORM IN THREE DIMENSIONAL IMAGE RECONSTRUCTION FROM PROJECTIONS

By Michael Walter Stazyk
B.Eng.Phys., McMaster University, 1982

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Applied Science in the Faculty of Graduate Studies, Department of Physics. We accept this thesis as conforming to the required standard.

The University of British Columbia
October 1990
© Michael Walter Stazyk, 1990

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission. Department of Physics, The University of British Columbia, Vancouver, Canada. Date: 12 October 1990.

Abstract

This thesis presents an algorithm for image reconstruction from projections intended for use in a new class of volume imaging PET scanners. The algorithm is based on the inversion of the three dimensional Radon Transform as it applies to the truncated cylindrical detector geometry and is derived from the X-ray Transform inversion given by the Orlov recovery operator. The algorithm is tested using Monte Carlo simulations of several phantom geometries and employs a single iterative step to include all detected events in the reconstruction. The reconstructed images are good representations of the original objects; however, the iterative step is a source of some significant artefacts in the images. Also discussed is the extension of the Radon Transform technique to a non-iterative method for three dimensional image reconstruction using all detected events.
Table of Contents

Abstract
List of Figures
Acknowledgements
1 Introduction to PET
1.1 Fundamentals of PET
1.2 Positron Volume Imaging
2 Image Reconstruction
2.1 Introduction
2.2 A Review of Two Dimensional Image Reconstruction
2.3 A Review of Three Dimensional Image Reconstruction
2.4 The Orlov Recovery Operator
3 The Reconstruction Algorithm
3.1 The Orlov Recovery Operator and the Radon Transform
3.2 The Explicit Radon Transform
3.3 The Orlov L Function
3.4 Reconstruction Using All Detected Events
4 Implementation
4.1 Histogramming the Radon Transform
4.1.1 Histogram Dimensions
4.1.2 Histogram Procedure
4.2 The Discrete Reconstruction Algorithm
4.2.1 Filtering
4.2.2 Aliasing, Sampling and Interpolation
4.2.3 Windows
4.2.4 Cost of Filtering
4.2.5 Backprojection
4.2.6 Projection
5 Results and Conclusions
5.1 Monte Carlo Simulations
5.1.1 Reconstruction of Small Objects
5.1.2 Reconstruction of Large Objects
5.1.3 The Iteration and Image Resolution
5.2 Future Work
5.2.1 Three Dimensional Filtering
5.2.2 Non-iterative Reconstruction Using All Detected Events
5.3 Conclusions
Bibliography
A Relation to Fourier Transform Methods
B Program Listings

List of Figures

2.1 The cylindrical detector in cross section.
3.2 The coordinates of the two dimensional Radon Transform on $\Sigma_\tau$.
3.3 The limits of $\alpha$ on $G$ for the plane normal to $\hat{n}$.
3.4 Determination of the Orlov L function from the detector geometry.
3.5 The dependence of the Orlov function $L^{-1}$ on $\xi$.
3.6 The unmeasured regions on the $\Sigma_\tau$ plane.
4.7 Fourier space filters for RT and XRT inversion.
4.8 Object space filters for RT and XRT inversion.
5.9 Central x-profile through the low-statistics sphere.
5.10 Central y-profile through the low-statistics sphere.
5.11 Central z-profile through the low-statistics sphere.
5.12 Superposition of the profiles of the low-statistics image of a sphere.
5.13 A central x profile through the iterated image of a sphere.
5.14 A central z profile through the iterated image of a sphere.
5.15 Superposition of the line profiles of the iterated image of a sphere.
5.16 Surface plot of the central slice of the low-statistics sphere.
5.17 Surface plot of the central slice of the iterated sphere.
5.18 Low-statistics reconstruction of an ellipsoidal phantom.
5.19 Iterated reconstruction of an ellipsoidal phantom.
5.20 Density plot of the low-statistics ellipsoid.
5.21 Density plot of the iterated ellipsoid.
5.22 Determination of $p_{max}$ for planes contained entirely within the detector.

Acknowledgements

I would like to thank the following people for their contribution to this thesis. Dr. Joel Rogers, my thesis supervisor, for his clear and patient answers to all my questions about PET, and for sharing his ideas, thereby making this work possible. Dr. R. Johnson, my faculty supervisor, for his helpful advice on many matters. Dr. R. Harrop and Christi Dykstra for valuable discussions on the use of the Radon Transform in PVI. The other members of the PET group at TRIUMF and at UBC Hospital and the people in the UBC Physics department for their support and encouragement.

Chapter 1
Introduction to PET

1.1 Fundamentals of PET

Positron Emission Tomography, PET, is a nuclear medicine imaging technique used to study metabolic processes in vivo. The technique at present is largely research oriented but is coming into increased clinical use.
The principal difference between PET and other commonly used non-invasive imaging modalities is that PET offers a physiological or functional view of the inside of the body. Techniques such as magnetic resonance imaging, MRI (formerly NMR zeugmatography), X-ray computed tomography, or ultrasonic imaging give anatomical or structural views of the body based on the variable density of tissues.

In PET a naturally occurring metabolite or metabolic analogue is tagged with a positron emitting isotope (e.g. O-15, F-18, Cu). This radioactive tracer is injected in solution into the bloodstream of the subject of the study. (It may also be inhaled in the form of a gaseous compound.) The organ/region of interest is surrounded by a ring of detectors. The tracer is selectively absorbed by the organ and decays by positron emission at the organ site; the distribution and relative amount of tracer uptake and its subsequent decay depend on the organ's differential uptake of the tracer. The positrons annihilate with electrons near the decay position. The result is the production of two gamma rays at the site of the positron annihilation. For thermal positron velocities, the resulting gamma rays each have an energy equivalent to that of the rest mass of the electron, 511 keV. Momentum conservation requires that the gamma rays travel in nearly anti-parallel directions. The line formed by the direction of motion of these two photons is called the event line. The ring of detectors that surrounds the subject will detect some of these events depending on the orientation and position of the line. Those detections that include both gamma rays from a single decay are called true coincidences. The true coincidences are determined by gating detection of two photons that fall within a narrow time window, the coincidence resolving time.

The position and orientation of each event line is digitized and stored in a computer. The parameters of the event line may be stored either sequentially, in list-mode, as detections occur, or they may be histogrammed in real time to reduce storage requirements and speed reconstruction of the final image.

Once stored in the computer, a reconstruction algorithm is used to generate the initial activity distribution. The PET technique has the added advantage that the reconstructed activity can be quantified by simultaneous measurement of tracer activity in the blood stream, by drawing off blood samples from the subject at intervals during the scan. A normalization curve can be produced which accounts for the decrease in serum tracer level as a function of time. This empirical curve is necessary because of biological elimination of the tracer compound in addition to the physical decay rate. This technique allows for the study of the kinetics of metabolic pathways for the tracer containing compounds.

Models of tracer uptake account for the known physiological processes that metabolize the tracer in question. Simplified models are described by linear first order differential equations whose rate constants model the uptake, release, and elimination of the tracer and its derivatives. These models are only approximations for what are nonlinear processes relating the model parameters. The values for the rate constants are measured by determining the activity in a given region of interest as a function of time in images reconstructed as a sequence in time.
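As an illustrative sketch of such a linear compartment description (not a model taken from this thesis; the rate constants and the plasma input function below are hypothetical), a single tissue compartment driven by the measured plasma activity can be integrated with a forward Euler step:

```python
import numpy as np

def one_compartment_model(t, plasma, k1, k2):
    """Integrate dCt/dt = k1*Cp(t) - k2*Ct(t) with a forward Euler step.

    t      : sample times (minutes)
    plasma : plasma (blood) tracer activity Cp(t) at those times
    k1, k2 : uptake and clearance rate constants (1/min), hypothetical here
    """
    ct = np.zeros_like(plasma)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        ct[i] = ct[i - 1] + dt * (k1 * plasma[i - 1] - k2 * ct[i - 1])
    return ct

# Hypothetical example: a decaying plasma input sampled every 0.5 minutes.
t = np.arange(0.0, 60.0, 0.5)
plasma = 100.0 * np.exp(-t / 20.0)
tissue = one_compartment_model(t, plasma, k1=0.10, k2=0.05)
```

Fitting rate constants of this kind to the time-activity curve of a region of interest in a dynamic image sequence is what provides the quantitative, physiological information referred to above.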
A statistical database of these measured parameters for normal and diseased subjects can be established to aid in the diagnosis and treatment of patients.

Diseases of study in PET include central nervous system disorders such as Parkinson's, Alzheimer's, and Huntington's diseases. Other studies include blood flow measurements in the brain and the heart to show stroke or infarction.

The spatial resolution of the PET scanner, its ability to distinguish one source from another separated in space, is limited ultimately by the initial positron range prior to annihilation. The magnitude of this effect depends on the emitting isotope, ranging from sub-millimeter to several millimeters. In commercial tomographs, the resolution is determined by the detectors. There are two components to this resolution limit: the intrinsic detector resolution, due to the statistics of the photons produced by a scintillation and subsequently detected, and resolution effects due to the shape, size and arrangement of the detectors, which can affect the distribution of the detected photons. The resolution of the commonly used crystal scintillator detectors limits image resolution to about 5 mm in commercial PET scanners.

A commonly used detector arrangement consists of a block of crystal such as Bismuth Germanate, BGO, that has been partially saw-cut into smaller units on its exposed face[8]. The saw-cuts permit improved resolution over uncut detector blocks because the light produced at the scintillation position cannot spread as it can in a single large crystal. The depth of the saw-cuts can be varied to produce a constant resolution across the detector surface.

In some PET cameras, sub-nanosecond time resolution in the measurement of photon detection time allows the inclusion of time-of-flight information of the photons to localize the annihilation point, but these types of scanners are not in widespread use.

1.2 Positron Volume Imaging

The term PVI has been used to describe a new generation of PET instruments that employ measurements of all event lines[65]. This new technique will provide improved signal-to-noise ratio and more uniform spatial resolution in PVI images over conventional PET images.

In conventional slice PET machines, the event lines are collimated by septa to lie within the transaxial planes (normal to the tomograph ring axis). The purpose of the septa is to eliminate out-of-plane scatter and to reduce singles and random coincidences. Detections of only one of the gamma rays resulting from a given annihilation are called singles, and have an effect on the camera performance as they increase the detector deadtime, preventing the measurement of true coincidence events during the 1-2 microseconds that are required to process the single detection. Those singles arising from different annihilations that fall within a coincidence resolving time window become random coincidences and are a source of errors in subsequent processing of the data if they are not corrected. Techniques exist to correct for the loss in image quality due to scatter and random coincidences[1,7]. The septa absorb a large proportion of coincidence events that would otherwise be detected. The motivation of PVI is to measure all of this data by removing the septa. Several PVI camera designs have been proposed[54,64,40].
Simulation studies[55,17,66,65] as well as measurements on modified slice tomographs[67] and a commercially available volume imaging tomograph[63] have been performed to quantify the characteristics of PVI cameras. The result of removing the septa is to increase the number of detected coincidences as well as the number of singles, randoms and scattered events. The increased sensitivity depends on the detector acceptance angle and is 5-7 times that of a slice tomograph for typical ring sizes. Regions near the centre of the field of view subtend a greater angle at the detector and have enhanced statistics. The number of true coincidences at the axial limit of the field of view remains the same (if the detection rate is not limited by deadtime due to the singles rate) and the increased number of scattered and random events can degrade image quality in those regions. The fraction of the total number of detected events that are scattered can be 50%, compared to the 20% obtained in a slice tomograph[67]. The niche for volume imaging is in low count-rate studies, in low activity scans or when using limited uptake tracers such as 18F-fluorodopa. It is under these conditions that the true coincidence rate is not dominated by single event detection and improved signal-to-noise ratio can be obtained in the central region of the image.

The septa in slice tomographs also allow coincidences to occur between adjacent rings in a multi-ring camera. These so-called cross-slices have different statistical noise and resolution properties than the so-called straight slices or transaxial slices and are presented as intermediate in position between two straight slices. The two dimensional reconstruction algorithms deal with the cross-slice as though it were measured as a straight slice. Additional normalization and correction factors are necessary when interpreting the image in this slice. In a volume imaging camera with detectors that decode continuously for event position, all sections through the image have the same resolution.

Reconstruction of images from this data requires different, and much more computationally intensive, algorithms than those used in slice machines. Chapter 2 describes the methods that have been used for two and three dimensional image reconstruction in PET. This thesis is concerned with the image reconstruction algorithm only; the algorithm is tested on scatter-free Monte Carlo simulated data.

Chapter 2
Image Reconstruction

2.1 Introduction

In PET or PVI, the process of generating a two or three dimensional image, stored as a matrix in the memory of a computer, from the discrete measurement of the event lines, is called image reconstruction. The application of image reconstruction techniques is found in other fields such as radio-astronomy, geophysics, and electron microscopy, and in other medical imaging fields such as MRI and X-ray computed tomography. Many of the methods used in PET have been derived from algorithms developed in these other fields. There are three categories of reconstruction algorithms that have been studied for use in PET:

1. Series Expansion Methods.
2. Iterative Methods.
3. Analytic Methods.

Briefly, series solutions are based on the series expansions of orthogonal sets of functions determined from the geometry (number of views, object symmetry) of the inversion problem. The series solutions require many function evaluations of polynomials and are computationally expensive[37,29,45,26].
Direct algebraic reconstruction using matrix methods is possible. Inversion of the large matrices required for three dimensional reconstruction is the limiting factor in the implementation of these methods[5]. Iterative methods have been employed to solve this problem[9].

Iterative methods consist of making successive approximations to the projections of an object. In each iteration, the corrections to the projections must be backprojected through the image and the resulting image must then be reprojected for modification once again. This procedure is expensive in terms of computing time, particularly for three dimensional image reconstruction, as the projections are two dimensional functions and the image is a volume. The most widely used methods of iterative reconstruction are based on the maximization of likelihood functions defined by the detector efficiency for measuring a given event line[60,35]. Iterative algorithms produce noise amplification and edge artifacts as the process converges to a final image. Finding a suitable stopping point for the technique has been the subject of much of the recent literature. This problem may be overcome with new algorithms using statistical measures of the feasibility of images in terms of the resolution and noise properties of the original data[70]. These methods have superior noise performance, and the use of Bayesian methods allows the inclusion of prior information on the object to be used in the reconstruction[41].

Analytic reconstruction methods are based on the discretization of integral inversion formulas. These integral inversion formulas can be expressed by various transform methods which may be theoretically equivalent (for a given filter form), but the implementation details of which can give rise to significantly different properties in the reconstructed image. Analytic techniques are implemented either by filtered backprojection, FBP, or by backprojection followed by filtering. Filtering consists of convolution if the X-ray Transform or Radon Transform are used, or direct multiplication if Fourier Transform methods are used. The X-ray Transform and Radon Transform techniques can be considered as implementations in signal or object space as opposed to operations in the Fourier domain. Direct Fourier Transform space or FBP implementations of the Fourier Transform methods with the Fast Fourier Transform (FFT) are the most efficient in terms of computing time (for implementation in software), but have been shown to have poorer noise performance than the convolution methods[39]. Fast parallel hardware can favour convolution methods.

Convolution methods for the inversion of the X-ray Transform in two and three dimensions have been developed[26,43,23] for a variety of detector geometries, as well as for the Radon Transform in the two dimensional case[27] or in the three dimensional case for a spherical detector[11,58,24]. For X-ray Transform inversion, convolution methods implemented by FBP are faster than the backproject-then-filter methods as the convolution performed in the projections is one dimension less than that needed in the image. For the Radon Transform method, convolution filtering in the projections is one dimensional. Alternately, a simple weighting of the Radon Transform projections prior to backprojection results in an image that can be inverted by a three dimensional convolution with a local operator.
Convolution and Fourier Transform methods require linearity and spatial invariance of the detector response function for their theoretical justification[2]. The condition of linearity is just that of superposition: the detector will respond to the sum of two separate inputs with the sum of the two separate responses (measured X-ray Transforms or a backprojected image). Saturation is an example of a non-linear detector response. The condition of spatial or shift invariance requires that the detector response be independent of the absolute position of the object within it, i.e. that every point in the detector field-of-view, FOV, subtends the same angle at the detector. Given these two conditions, the detector response can be written as a convolution, an integral operator with a kernel which is invariant as a function of absolute position. For the case of two dimensional image reconstruction, these conditions are always satisfied for a ring detector reconstructing an image in the plane of the ring. In the case of three dimensional image reconstruction with an incomplete detector (one without a full $4\pi$ steradian coverage), the condition of spatial invariance is not met by some oblique event lines when imaging an extended object (an object whose axial extent is most of the detector height). For the application of convolution or Fourier Transform techniques, these events must be rejected prior to reconstruction and do not contribute to reducing the statistical noise of the image.

Solutions to this problem, enabling the use of all detected events in the reconstruction, have been proposed and implemented by Kinahan and Rogers[34,33], Defrise et al.[20], and Cho et al.[13]. These techniques entail the use of one or more iteration steps to complete the unmeasured projections, either in the projections, in the Fourier space of the projections or in the Fourier space of the image itself.

2.2 A Review of Two Dimensional Image Reconstruction

The theory of two dimensional image reconstruction from the measurement of projections (the line integrals of an object) was originally discovered by Radon[50], and later described in terms of Fourier Transforms by Bracewell and Riddle[3] for use in radio-astronomy.

In two dimensional PET, an unknown activity distribution $f(\mathbf{x})$, $\mathbf{x} = (x, y)$, is to be recovered from the measurement of its one dimensional projections. In two dimensions, the Radon Transform[50] and the X-ray Transform[62] are mathematically equivalent. Both transforms describe the set of one dimensional projections, the line integrals, of a two dimensional object. Expressed mathematically, the one dimensional projections $g(\mathbf{x}_\tau; \hat{\tau})$ of $f(\mathbf{x})$ are given by
$$ g(\mathbf{x}_\tau; \hat{\tau}) = \int_{-\infty}^{\infty} dt\, f(\mathbf{x}_\tau + t\hat{\tau}) \qquad (2.1) $$
where $\hat{\tau}$ is a two dimensional unit vector that specifies the direction of integration (the projection direction) and $\mathbf{x}_\tau$ is a vector in the plane to be reconstructed whose magnitude equals the perpendicular distance, $x_\tau$, from the origin to the line through $\mathbf{x}$ in the direction $\hat{\tau}$. That is, $\mathbf{x}_\tau = \mathbf{x} - (\mathbf{x} \cdot \hat{\tau})\hat{\tau}$, with $\mathbf{x}_\tau \cdot \hat{\tau} = 0$. In the plane, the direction $\hat{\tau}$ can be specified by one angle, $\varphi$. The integration of all projections back through the image plane is termed backprojection[16] and can be expressed as
$$ \sigma_2(\mathbf{x}) = \int_G d\hat{\tau}\; g(\mathbf{x}_\tau; \hat{\tau}) \qquad (2.2) $$
where $G$ is the range of angles of measured projections. For a ring detector, $G$ spans $\pi$ radians and the differential $d\hat{\tau}$ is just $d\varphi$.
The backprojected image, $\sigma_2(\mathbf{x})$, is the sum, at any $\mathbf{x}$, of the projection values at the corresponding $\mathbf{x}_\tau$ in all the projection directions $\hat{\tau}$. This image has the form[68]
$$ \sigma_2(\mathbf{x}) = f(\mathbf{x}) ** \frac{1}{|\mathbf{x}|} \qquad (2.3) $$
where $**$ is the two dimensional convolution operator[2] and $u = v ** w$ is defined by
$$ u(\mathbf{x}) = \int d\mathbf{x}'\, v(\mathbf{x}')\, w(\mathbf{x} - \mathbf{x}'). \qquad (2.4) $$
The backprojected image has been used as a coarse approximation to the true object distribution, $f(\mathbf{x})$[68]. Its present importance is as an intermediate step in an analytically correct reconstruction. In the terminology of image processing[28], the detector response, which will be taken here to include the backprojection operator, has a point-spread-function or PSF which is the detector response to a point source. In Equation 2.3, the function $1/|\mathbf{x}|$ is the PSF of the detector and is the backprojected image of a point source. This image, $\sigma_2(\mathbf{x})$, has a Fourier Transform $\Sigma_2(\boldsymbol{\nu})$,
$$ \Sigma_2(\boldsymbol{\nu}) = \frac{1}{\nu} F_2(\boldsymbol{\nu}) \qquad (2.5) $$
where $\nu$ is the magnitude of the two dimensional frequency space vector, $\boldsymbol{\nu}$. The function $1/\nu$, the Fourier Transform of the PSF, $1/|\mathbf{x}|$, is called the modulation transfer function. The correction to $\Sigma_2(\boldsymbol{\nu})$ to form $F_2(\boldsymbol{\nu})$, the two dimensional Fourier Transform of $f(\mathbf{x})$, is simply multiplication by the inverse of the modulation transfer function,
$$ F_2(\boldsymbol{\nu}) = \nu\, \Sigma_2(\boldsymbol{\nu}). \qquad (2.6) $$
The desired function $f(\mathbf{x})$ is then obtained by a two dimensional inverse Fourier Transform of $F_2(\boldsymbol{\nu})$. Multiplication by the function $\nu$ in Equation 2.6 is termed a filtering operation, where components of the Fourier Transform are modified by some function of frequency, called the filter (here just the radius in Fourier space). This filtering operation has the property of increasing high frequency noise and must be regularized by a smoothing or apodising window. In this thesis, the term filter will be used in a general sense to mean the unapodised kernel of the integral recovery operator regardless of the transform technique used (e.g. X-ray Transform, Radon Transform, Fourier Transform).

The result given in Equation 2.6 is derived by Bracewell and Riddle[3] for the band-limited condition $F_2(\boldsymbol{\nu}) = 0$ for $|\boldsymbol{\nu}| > \nu_{max}$ and is transformed to the corresponding one dimensional projection space convolution filter. This method has been implemented in PET by convolution with different approximations for the filter and different apodising windows[51,59,56,6] and by Fourier filtering of the projections and direct Fourier filtering of $\Sigma_2(\boldsymbol{\nu})$[39,61,6]. Radon's original result[50] is equivalent to Equation 2.6 and uses the Hilbert Transform of the first derivative of the Radon Transform, a filtering of each projection of the form
$$ \frac{1}{2\pi^2}\; \frac{\partial g(x_\tau; \hat{\tau})}{\partial x_\tau} * \frac{1}{x_\tau}. \qquad (2.7) $$
This can be shown to be equivalent to the Fourier Transform result using the property of the Fourier Transform of a derivative of a function $u(x)$[2],
$$ \mathcal{F}_\nu\!\left[ \frac{d^n u(x)}{dx^n} \right] = (-2\pi i \nu)^n\, \mathcal{F}_\nu[u(x)], \qquad (2.8) $$
and the backprojection of Equation 2.2[56]. An implementation of this result was given by Horn[27] for arbitrary ray sampling schemes. The convolution in Equation 2.7 is also equivalent to convolution with $1/|x_\tau|^2$ in the projections[4]. These convolution methods and the equivalent Fourier space methods are valid for a slice-oriented tomograph with a full $2\pi$ radian coverage of the image plane.
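The correction of Equation 2.6 is easy to demonstrate numerically. The following minimal sketch (an illustration only, not the implementation of Appendix B; it assumes the backprojected image is already available as a square NumPy array) applies the $\nu$ filter in the two dimensional Fourier domain of the backprojected image, which is the backproject-then-filter variant described above:

```python
import numpy as np

def filter_backprojected_image(sigma2, pixel_size=1.0):
    """Recover an estimate of f(x) from a 2D backprojected image by
    multiplying its Fourier Transform by the radial frequency (Equation 2.6)."""
    n = sigma2.shape[0]
    freqs = np.fft.fftfreq(n, d=pixel_size)          # discrete frequency axes
    nu_x, nu_y = np.meshgrid(freqs, freqs, indexing="ij")
    nu = np.sqrt(nu_x**2 + nu_y**2)                  # |nu|, inverse of the
                                                     # modulation transfer function
    spectrum = np.fft.fft2(sigma2)                   # Sigma_2(nu)
    f_hat = nu * spectrum                            # F_2(nu) = nu * Sigma_2(nu)
    return np.real(np.fft.ifft2(f_hat))              # inverse transform gives f(x)

# Hypothetical usage: sigma2 would come from backprojecting the measured
# projections over pi radians (Equation 2.2); a random array stands in here.
sigma2 = np.random.rand(128, 128)
f_estimate = filter_backprojected_image(sigma2)
```

In practice the factor $\nu$ would be multiplied by an apodising window before the inverse transform, for the noise regularization discussed above.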
These two dimensional techniques can be used to form three dimensional images by stacking a set of parallel two dimensional slices[5,39,51], but these algorithms do not include oblique event lines and produce images with fewer events and greater noise than true three dimensional reconstruction methods. Algorithms have been devised to combine the two dimensional slice reconstructions at various orientations with appropriate weighting functions [14,13] in order to include oblique events in the reconstruction. These can be rewritten in terms of three dimensional reconstruction algorithms[21]. 2.3 A Review of Three Dimensional Image Reconstruction Fully three dimensional analytic image reconstruction algorithms based on shift invari-ant PSFs, have been implemented in the X-ray, Fourier and Radon transform spaces. The X-ray Transform of the three dimensional function /(x), with x = (x, y, z), is the one dimensional projections of /(x) given by which is a function of the direction vector r and of the two dimensional vector xT, normal to r. The vector x T is the projection of the three dimensional vector x onto the plane normal to r, x T = x — ( X - T ) T , where r is the unit vector specifying the orientation (2.9) Chapter 2. Image Reconstruction 13 of the direction of integration and is parameterized by two angles. In spherical polar coordinates, T = ( 1 A , Y V ) . (2.10) For a spherical detector (or any geometry with 4TT steradian coverage), the backprojec-tion of Equation 2.9 <73*(x) = / dr<7(xT;r) (2.11) G has the form [68] <T3*(X) = /(X) * * * T~J2 ( 2 - 1 2 ) l X l where the detector PSF is now l/|x| 2 and *** represents the three dimensional con-volution operation, (Equation 2.4 with x a three dimensional vector). The Fourier Transform of <T3x(x) has the same form as in the two dimensional case[2], £ 3 * ( i ' ) = with v the magnitude of the three dimensional frequency space vector. The Fourier Transform of the corrected image is F3(u) = -Vzx{y). (2-14) The Radon Transform of an object in three dimensions is the two dimensional projection of the activity distribution of the object[19]. The Radon Transform, denoted by f(p, ii), represents the integrals of /(x) over the planes p = x • fi /(p,n) = J dx/(x)«(p - x • n) (2.15) where 6 is the Dirac delta function and ii is a unit direction vector. The function /(p, ii) is a function of three variables, the distance variable p and the two angles required to specify the vector ii, 6n, and <pn. That is n = ( l , 0 n , V n ) (2-16) Chapter 2. Image Reconstruction 14 in spherical polar coordinates. The backprojection of the Radon Transform over planes which can be defined as * 3 H ( X ) = JKdnf(p,n) (2.17) where H is the hemispherical region on the unit sphere defined by the directions of the normals n to the plane integrals of the Radon Transform, as required for the inversion[19]. This backprojection over the planes of the Radon Transform produces an image different from that of the X-ray Transform and has the form[l 1,58,69] <73fi(x) = /(x) * * * • (2.18) In the Fourier domain, the modulation transfer function is l/ |f | 2 [2], and the corrected image is given by F3(u) = v2 E 3 H(I /) (2.19) from which /(x) can be obtained by a three dimensional inverse Fourier Transform. In image space, the backprojected image of Equation 2.18 can also be inverted using a result from potential theory[30]. 
Using the fact that l/ |r | is one of a class of Green Function solutions of Poisson's equation, V 2*(r) = - A - K P { T ) (2.20) and assuming the spatial invariance of the detector can be imposed, /(x) can be ob-tained from the following[ll] / ( X ) = " ( 2 ^ V V 3 ( X ) ( 2 - 2 1 ) where d2 d2 d2 dx2 dy2 dz2' This also follows directly from Equation 2.19 using Equation 2.8 in each of three di-m e n s i o n s . Chapter 2. Image Reconstruction 15 Filtering before backprojection consists of taking the second derivative of the trans-form in the radial direction and backprojecting over planes[58,ll] where the integral over n represents the backprojection over a hemisphere of directions, H, and p is evaluated, after differentiation, at p — x • ri. The Radon Transform inver-sion for the spherical detector was derived by Chiu et. al.[ll] for general application in nuclear medicine, optical plasma diagnostics and radar target estimation, and by Shepp[58] for MRI, and was implemented by Shepp in MRI and by Dykstra et. al.[24] in PVI. Image reconstruction for a detector with full 4TT steradian coverage, discussed so far in this section, is not practical for PVI with human subjects. The partial extension to the truncated cylindrical geometry, common to many scanners, was made by Colsher[15] in terms of Fourier Transform methods and Orlov[43] in terms of the signal space methods. Orlov [42] also describes the necessary and sufficient conditions for reconstruction from complete projections for an arbitrary detector geometry. Infinitely many filter forms are possible to recover the object from the redundant projection measurements, as demonstrated by Defrise et. al.[21]. These algorithms require the condition of spatial invariance as described earlier (Section 2.1). In a cylindrical tomograph, spatial invariance is imposed by restricting event-line orientations to a subset of those detected. This can be quantified by consid-ering the diagram of Figure 2.1. In the figure, a cross-section of a cylindrical detector is shown. This detector can be described by the equations H (2.22) x2 + y2 = RD2 (2.23) —H < z < H Chapter 2. Image Reconstruction 16 F i g u r e 2 .1 : T h e c y l i n d r i c a l de tec to r i n cross sec t i on . Chapter 2. Image Reconstruction 17 For this detector, complete projections[53], those projections that have an unrestricted view of the object, will not be measured for any event-line angles, V>, greater than V'max, (the event-line angle, %j>, and V'max being measured from the x-y plane), where V'max is determined by the minimum tangent angle between the surface defined by the FOV and any point on the detector rim (at \z\ = H). In this geometry, the angle V'max is defined by V'max = V'det ~ a r c s i n c o s ^det) (2-24) where V'det = arctan(if/i2D). For a smaller object of known extent, V'max is determined by the minimum tangent angle to the surface of the object. For small objects, spatial invariance can be imposed by excluding fewer of the oblique rays, but for an extended object, such that V'max = 0, all oblique rays must be excluded. (This restriction on the orientation of event lines used in the reconstruction can be overcome by a single iteration step described in Chapter 3.4 and also may be overcome in a new algorithm described in Chapter 5.2.2). 
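As a small numerical illustration of Equation 2.24 (a sketch only; it assumes the factor inside the arcsin is the ratio $R_{FOV}/R_D$ multiplying $\cos\psi_{det}$, and the detector dimensions below are hypothetical, not those of any scanner discussed in this thesis):

```python
import math

def psi_max(radius_d, half_height, radius_fov):
    """Limiting acceptance angle psi_max for complete projections (Equation 2.24).

    radius_d    : detector radius R_D
    half_height : detector half-height H (the detector spans -H <= z <= H)
    radius_fov  : radius of the centred field of view

    Assumes psi_max = psi_det - arcsin((R_FOV / R_D) * cos(psi_det)),
    with psi_det = arctan(H / R_D), clipped at zero for extended objects.
    """
    psi_det = math.atan2(half_height, radius_d)
    arg = (radius_fov / radius_d) * math.cos(psi_det)
    return max(0.0, psi_det - math.asin(arg))

# Hypothetical dimensions: R_D = 40 cm, H = 10 cm, R_FOV = 5 cm.
print(math.degrees(psi_max(40.0, 10.0, 5.0)))   # about 7 degrees
```

For a large FOV (for example $R_{FOV} = R_D/2$ with a small $\psi_{det}$) the expression is driven to zero, which is the extended-object case in which all oblique rays must be excluded.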
The detector PSF, when backprojecting the unweighted X-ray Transform, for a limiting acceptance angle V'max is given by Kinahan[32] P ( x , * ) = ^ n ( ^ ) (2.25) |x| V ^ m a x / where the II(x) function is defined by ' 1 if |z| < 1/2 n(s) = < (2.26) 0 if |x| > 1/2 and 6 is the spherical polar angle defined by the vector x measured from the z-axis. The Fourier Transform of the corrected image, derived by Colsher[15], is Fz(») = I^ ) S 3 x ( ^ ) (2-27) Chapter 2. Image Reconstruction 18 where 6% is the polar angle of the frequency vector u with unit vector fi and L has a functional form that will be discussed in Chapter 3. Colsher also gives expressions for fil-tering the Fourier Transform of the projections. This result follows from Equation 2.27 using the central slice theorem of the Fourier Transform[26]. Fourier Transform meth-ods have been implemented by DeFrise et al.[20] and Townsend et al.[67] for planar and ring tomographs. An alternate object space convolution implementation of the Colsher operator was given by Orlov [43] for use in electron microscopy and interpreted by Kinahan et al.[34] as the inversion of the X-ray Transform for use in P V I . In this thesis, an algorithm is presented that inverts the three dimensional Radon Transform for the geometry of the truncated cylindrical detector. It is derived from the Orlov recovery operator (Section 2.4). 2.4 The Orlov Recovery Operator Previous approaches to F B P in three dimensional image reconstruction for P V I have been based on inverting the X-ray Transform of Equation 2.9 or its Fourier Trans-form. The exception to this is the F B P algorithm of Dykstra et al. which uses the Radon Transform. Continuous mathematics are used for the analytic derivation of the inversion of Equation 2.9. In the actual measurement process, the radioactive decay samples the distribution, /(x), in position and angle. These samples are collimated into projection directions, r, by the coincidence electronics and are usually stored as a discrete form of ^ (x T;r) in a computer. The data need not be stored in the explicit form of the X-ray Transform, but as shown by Dykstra et al.[24], can be stored as the two dimensional projections of the object, the Radon Transform. In this thesis, the relationship between the Radon Transform and the X-ray Transform will be shown. It is therefore useful to descibe the previous work on inversion of the X-ray Transform. Chapter 2. Image Reconstruction 19 The necessary and sufficient conditions for the recovery of /(x), x = (x, y, z), from #(xT; r) are given by Orlov[42] and are restated here: The region G is the region on the unit sphere mapped out by the central direction vectors, r, of complete projections. Complete projections are those two dimensional functions <7(xT; r) that have a complete view of the object in the field-of-view, FOV, of the tomograph. Incomplete projection directions have their view of the object restricted by the limited detector surface. For the truncated cylindrical or truncated spherical detector geometries, the region G is a truncated spherical band about the equator the angular extent of which is given by ipm&x (Equation 2.24). For the recovery of /(x) from #(xT; r), Orlov states that a curve T in the region G, representing a set of measured projection directions, must be such that it is intersected by any great circle on the unit sphere. This condition arises from the uniqueness of the Fourier Transform and the necessity of measuring all Fourier components of /(x). 
Exceptions to this condition, including reconstruction from limited views of an object or unconventional detector geometries [21] will not be considered here. For truncated spherical or cylindrical detector geometries imaging extended objects, the region G is a one dimensional region, T, the equator of the unit sphere. When viewing objects smaller in axial extent than the cylindrical detector height or when extending the region of complete projections as proposed by Rogers et al.[34], G is a two dimensional region on the unit sphere encompassing the equator. The one dimen-sional region, T, of complete projections, from which the object is to be reconstructed, can take any path in G that satisfies Orlov's condition. Hence the three dimensional reconstruction problem is overdetermined. There are redundant measurements made beyond those required to reconstruct an image, (as it is known the object can be re-constructed from the data measured in a conventional PET machine by stacking slices). Incorporating these additional measurements in the reconstruction will improve the Chapter 2. Image Reconstruction 20 statistics of the data set and reduce the statistical noise level of the final image. Of the infinitely many forms of the filter for the recovery of /(x) from this re-dundant data set, the form given by Orlov is one of the simplest[21] and has been shown to produce the least image variance when reconstructing a sphere from com-plete projections[22]. The equivalence of the filter in the Orlov recovery operator to that of the Fourier space filter derived by Colsher [15] and others[47,57] has been shown by Kinahan et al.[34]. Using the result from potential theory[30] (Equation 2.20), and the assumption of a spatially invariant detector response, Orlov proved the following expression inverts the X-ray Transform where dGT is an area element on the unit sphere, the integral being over the region G (this first integral is the backprojection operation), ST is the X-ray Transform plane (projection plane) normal to the projection vector dS' is an area element on ST corresponding to the dummy integration variable xT', and for a two dimensional region G (2.28) (2.29) G and for a one dimensional region T. L1D = L(T x x T) = Ejfeln-Tfc - l (2.30) TK\ = 1 Chapter 2. Image Reconstruction 21 The word definitions of L are given by Orlov: L is a function only of the direction of its argument. X 2 D is equal to the length of the arc of the great semicircle lying in the region G and passing through the ends of the vectors T and xT. L\r> is the sum of the reciprocal projections of the unit tangent vector, T ^ , to the curve T at all points indexed by k where T intersects the great circle with normal ii. The vector ii is the normal to the plane integrals of the Radon Transform (Equation 2.16). The significance of the function L in the inversion of the Radon Transform is shown in Chapter 3.3. The cartesian coordinates, (lx, ly), in the X-ray Transform plane are given in terms of the basis T* = (2.31) ' Z X T ' l y = f X lX or in terms of the basis of the spherical polar coordinates l x = fir with X T = lx\x + lyly (2-32) The algorithm of Kinahan et al.[34] is one of several possible implementations of Equation 2.28. It employs a two dimensional filtering operation of the measured X-ray Transform in each plane E T with the extended singular filter formed by the direct differentiation of the Orlov filter. In their implementation, the filtering in the E T plane is performed by the convolution flf(xT;r) **V21—j—4 r . 
|xT|L(r x xT) These modified X-ray Transform projections are then backprojected to form the image. Chapter 3 The Reconstruction Algorithm 3.1 The Orlov Recovery Operator and the Radon Transform The inversion of the three dimensional Radon Transform for a truncated detector can be developed from the Orlov recovery operator and will be derived in this Chapter. (The relation of the Radon Transform method to the Fourier space result of Colsher for the truncated detector is given in Appendix A). The Orlov recovery operator Equation 2.28 has the same form as Equation 2.21, the inversion of the backprojected Radon Trans-form by application of the Laplacian, and as such may be interpreted as the inversion of the Radon Transform. The two dimensional convolution operator The inner integral over the X-ray Transform plane E T in Equation 2.28 can be written obeys the commutation property u(x) * * u(x) = K ( X ) * * v(x). 7 E T = 9(XT)T) * * 1 |xT|L(f x xT)' Then , ff(xT - xT'; f) ' |xT'|L(r x xT') dS' - xT'|L(f x (xT - xT')) 22 Chapter 3. The Reconstruction Algorithm 23 Choose as the primed integration variables on the plane E T , the polar coordinates r' and d'T with their origin at xT, (Figure 3.2). Taking r' = Ir'l dS' = r'dr'd&r Then L(T x r') noting that L(T X r') is an even function of its argument. Changing the angular variable ffr to £ which specifies the angle to the direction of the normal to r', denoted by n, (Figure 3.2), that is, ii = fi(£) = (cosf ,s in£) n± = nj.(0 = (sinf, - c o s £ ) r' = r'ii_L. the integral becomes where ii can be denoted by and L is a function only of the direction of its argument, so that L{T X r') = L(T x ii x) = £(n(£)). Then Figure 3.2: The coordinates of the two dimensional Radon Transform on ST. Chapter 3. The Reconstruction Algorithm 25 Making the substitution s = r' + xT • fix (Figure 3.2), and using xT = pfi + (xT • fix)nj., (3.35) where p = xT • ii. The function /•oo (3.36) HP,&T)= / d5^(pn(£) + anj.(£);r) J—oo is the two dimensional Radon Transform of the X-ray Transform g(xT; r) on ET[19]. Equation 3.36 expresses the line integrals of the function ^(xT;r) parameterized by the angle £ and the distance p. The angle £ is the angle to the line normal direc-tion, fi(£), measured counter-clockwise from lx and the distance p is the perpendicular distance to that line, measured from the origin of the plane E T . The fact that Equation 3.36 is the explicit two dimensional Radon Transform of g(xT; T) is not used in the reconstruction. Instead it is noted that the function h(p, £; r) is a set of plane integrals of the activity distribution of the object. Substituting Equa-tion 2.9 in Equation 3.36 and p = x T-fi = x- fiasr-n = 0. The function h(p, £; r) is the integral of /(x) over the plane given by p = x • n and represents a measurement of the plane integrals which comprise the Radon Transform of /(x). The vector fi can also be expressed with a three dimensional parameterization in addition to its two dimensional parameterization in E T : (3.37) r x nx n \T x n x| = (cos£,sin£) = cos£ 1^  + sin£ 1^  Chapter 3. The Reconstruction Algorithm 26 = —(sinipTcos £ + cos 0Tcos </?Tsin £) x +(cos</?Tcos£ — cos #Tsin <£>Tsin £) y +sin#Tsin£ z or simply by Equation 2.16. The recovery operator (Equation 2.28) is then written G where the function L weights the measurements of h(p, £; r) to form the plane integrals of the Radon Transform (as will be shown in section 3.2). 
Taking dGT = sin 6T d8T d(pT the triple angular integral over 9T, <pT, and £ is the backprojection over the weighted planes of the Radon Transform. This backprojected image has the form of Equa-tion 2.18 and is inverted to form the true image by the application of the Laplacian in the backprojected image space. In this thesis, the following conventions will be followed. The angles 6T and ipT will be the conventional spherical polar angles, 8T measured from the positive z-axis, and <pT from the x-axis about the z-axis in the right-hand sense. The hemisphere on which the vectors r, Equation 2.10, will be denned is given by the angles V?Te[0,7r), 0TG[O,7r) that is, the hemisphere which has its zenith in the direction y. In this hemisphere, the range of angles 6T is restricted in the region G to lie in the range 7T 7T 2 - '/'max < # T < ^ + V'max (3.39) Chapter 3. The Reconstruction Algorithm 27 where V'max is given by Equation 2.24. The choice of the angle £ to lie in the range ee[o,7r) results in the directions of the normal vectors n to the plane integrals of /(x) lying in the hemisphere which has its zenith in the direction z. In the spherical polar representation of the vector n by Equation 2.16 the values of 6n and <pn lie in the intervals JL 1\ ft a L - L) 2' 2 7 ' n t [ 2' 27 To implement the inversion as a filtered backprojection (to move the three dimen-sional Laplacian filtering in image space to the one dimensional filtering of the projec-tions), the order of integration and differentiation may be interchanged. Note that for any function u(x,...) = it(x • n,...), where . . . represent any number of variables with no x dependence, the Laplacian takes the form d2 V2u(x-n,...) = — u(p,...) |n|2, p = x-n = | ^ ( P > - ) (3-40) Taking the Laplacian of the inner integrand of Equation 3.38, where only the func-tion /i(x • ii, £; r) is dependent on x, the backprojection formula becomes (2?r)2 JO Jemin Jo L(n(£); r) dp2 with p = x • ii. This equation is the explicit analytic form of the backprojection operator presented in this thesis for the two dimensional region of complete projections G . The expression represents the inversion of the three dimensional Radon Transform without the explicit Chapter 3. The Reconstruction Algorithm 28 formation of the three parameter Radon Transform. Here the Radon Transform is represented by the four dimensional function h(p, £; r). It is possible to integrate over an implicit angular dimension to obtain the three parameter representation of the three dimensional Radon Transform. This will be shown in Section 3.2. Equation 3.41 is interpreted as follows: The image value given to any point in the image, with its position given by x, is the sum of the second derivative plane integrals integrated over all directions in the projection plane about the projected position of x in that plane, for all projection planes. This is the FBP inversion of the three dimensional Radon Transform parameterizing the plane integrals of the object by their orientation in the plane E T . 3.2 The Explicit Radon Transform It is useful to rewrite Equation 3.38 to show the explicit form of the three dimensional Radon Transform as it makes clear the relationship of the Orlov X-ray Transform inversion formula discussed in Chapter 2 and the Radon Transform inversion formula used by Shepp[58] and Dykstra et al.[24]. The integration in Equation 3.38 is over all plane normals n perpendicular to a given vector r and then over all vectors T in the region G. 
Parameterizing the Radon Transform in terms of the X-ray Transform planes with normals r, has its advantages as described later (Chapter 4). If instead, the three dimensional vector fi is fixed and integration is taken over all r' on G T such that r' • n = 0, that is, dGT = S(T • n)dr (3.42) then Equation 3.38 becomes ^ ' - ^ S ^ l ^ ' ^ ' ^ - ^ w r 1 ( 3 - 4 3 ) H G Chapter 3. The Reconstruction Algorithm 29 where, as in Equation 2.17, H is the hemispherical region on the unit sphere that defines the complete set of directions of the normals n to the plane integrals of the Radon Transform. If the inner integral in Equation 3.43 is written in term of the angle a in the plane normal to ii where h(p, a; n) is just the integral over the plane of Equation 3.37 with the direction T' parameterized by a and the vector h± given by n x r' |n x r | The angle a is taken in the direction given by rotation about n in the right-hand sense where a — 0 corresponds to the the projection with 6T = 7r/2 and a has limits a\ and a 2 on G for a given n (Figure 3.3). These limits are independent of p as the region G has been assumed spatially invariant. The dependence of r' on a is of the form da h(p, a; ii) (3.44) 6T' = arccos (sin a sin 9n) and T where the sgn function is defined as sgn(x) = < -1 if |x| < 0 0 if |x| = 0 1 if |x| > 0 Chapter 3. The Reconstruction Algorithm 30 F i g u r e 3.3: T h e l i m i t s o f a o n G fo r the p l a n e n o r m a l to n. Chapter 3. The Reconstruction Algorithm 31 The angle £' can be found in the projections with changing r' according to sin £' sin 8T' = sin £ sin 8T = cos 8n for a given 8n (Equation 2.16). The value of a given plane integral, h(p, a; fi), for a given p and fi, is an independent measurement for each a of /(p, fi), the three dimensional Radon Transform of the function /(x). Note that p da = J dr'6(f' • fi) (3.45) G is just the definition of the Orlov L 2 D function given in Equation 2.29. Then a given plane integral of the Radon Transform, /(p, fi), is obtained by the weighted integration of the redundant measurements of that plane integral, h(p,a; fi), over the angle a: 1 fa2 f(p, n) = I9, = —— / da h(p, a; fi) (3.46) For the cylindrical detector, the limits of a are L(n) In the absence of errors, f(p, fi) = h(p, a; n) for all a for a given p and ii. The inner integral over G in Equation 3.43 is the three dimensional Radon Transform formed by weighting of the line integrals across many X-ray Transform planes and is the average of the independent statistical measurements of the plane integrals of Equation 3.37. The weight function L has the interpretation of being the detector efficiency for measuring a given plane integral of /(x). The number of directions T in G in which the measurement of a given plane integral, h(p,cx; ri), of /(x) is made is L{n). In the case of measured data with statistical error, Equation 3.46 forms an average for /(p, ii). Then Equation 3.43 is just the inversion of the three dimensional Radon Transform, Chapter 3. The Reconstruction Algorithm 32 f(p, n), formed by the staistical average of its measurements, /W = - ^ V 2 / dn/(p,n) H or using the result of Equation 3.40 / ( x ) = "(W/dfi^/(P'fi)' P = X'fi Explicitly '«=-(s? /.* 4 *sin"- /=5 ^  a; fi)- (3-4?) The above Equation (3.47) extends the method of Dykstra et al.[24] to the trun-cated cylindrical geometry with complete projections. Their feasibilty study modelled a spherical detector, in which LID = n and the integral over a covers an angular range of 7T. 
In their implementation, the integral over a was performed numerically event-by-event during histogramming of the Radon Transform. This histogramming results in a three dimensional representation of the Radon Transform in terms of the variables p, (pn, and $n. The straightforward extension of their algorithm to the truncated detector, is to restrict the histogramming to those events with ij> < V'max a n d weight the result-ing plane integrals, f(p, ii), by 1/L(ii) during filtering. There is a statistical advantage available in the Radon Transform inversion that does not require the strict application of the above restriction on event angles. This will be discussed in Chapter 5.2.2. In the implementation given by Equation 3.41, the integration over a, <pn and 6n is performed implicitly in the image by the integration over £,y>T, and 8T. 3.3 The Orlov L Function For a truncated cylindrical or spherical detector geometry symmetric on reflection through the origin, such that </(xT;r) = p(xT; — r), L(ii) = L(h(6n)). On this two Chapter 3. The Reconstruction Algorithm 33 Figure 3.4: Determination of the Orlov L function from the detector geometry. dimensional region G, and with an acceptance angle limited to V'max, Equation 2.29 becomes J ra2 ' da = 2cx2 = 2arcsind (3.48) where d is the projected distance of the extent of the plane, with normal ii, from the origin to the edge of G when viewed from the projection where 0T = 7r/2 (Figure 3.4). If in Figure 3.4, \6n\ < V'max, then ax = 7r/2 and L(9n) = 7T Chapter 3. The Reconstruction Algorithm 34 If \6n\ > ^max? then from the diagram, | sin0„| = d sin V'max so that That is L(8n) = L(0n; V'max) = 2arcsin S i n ^ m a - K T , if \en\ < A sin y>r] (3.49) 2 arcsin . ymax otherwise |sint/n| The function L(8n) is seen to be the angular factor in the Colsher frequency space filter [15]. Indeed the Radon Transform and the Fourier Transform are related by a one dimensional Fourier Transform with the transform variables p and v along n[19]. In terms of the angles of the E T plane, using cos#n = sin £ sin $T, we have for Z ^ D , 7T, if | sin f| > cos V'max/ sin 0T 2 a r c s i n ( l ^ ) , otherwise < 3 5 0 ) \ V I - ( s i n £ s i n 9 T f When G is a one dimensional region T, L has a different functional form (Equa-tion 2.30). The crude image from which the missing data are projected is obtained by reconstructing the complete projections on the curve r = ( l , ^ , V r ) , for all tpT G [0,TT). (3.51) Then in Equation 2.30, the summation over k reduces to a single point of intersection between T and the plane with normal ii and Tk = h = 4>T-Noting n • l x = cos £. Chapter 3. The Reconstruction Algorithm 35 we have L 1 D = L(Q = looser1. (3-52) The reciprocals of the functions LID and L2D are used in the reconstruction. These functions are shown in Figure 3.5 for various V 'max at $T = 7T/2 a n d for various 0T at V 'max = TT/4. When forming an image by the iteration technique, it is necessary to backproject the transaxial projections twice, once to form the crude image, and once to form part of the iterated image. The top frame in Figure 3.5 shows that for small detector rings (with small V'det a n < ^ therefore small V 'max on iteration), it may be possible to approximate the filter required for G, L2D, by the filter required for T , LID, by a simple scaling. With this approximation, the backprojection of the transaxial projections would be performed only once in the formation of an iterated image reducing the backprojection time. 
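A direct transcription of Equations 3.49 and 3.52 shows how the $L$ weights would be tabulated for the filtering step. This is a sketch only, assuming the angle conventions stated above ($\theta_n$ the polar angle of the plane normal measured from the z axis, $\xi$ the in-plane angle of Equation 3.52):

```python
import math

def orlov_l_2d(theta_n, psi_max):
    """Detector efficiency weight L(theta_n; psi_max) of Equation 3.49.

    Returns the arc length (radians) of projection directions in G lying in
    the plane whose normal has polar angle theta_n from the z axis.
    """
    if abs(math.cos(theta_n)) >= math.cos(psi_max):
        return math.pi                       # the whole great circle lies in G
    return 2.0 * math.asin(math.sin(psi_max) / abs(math.sin(theta_n)))

def orlov_l_1d(xi):
    """L_1D = 1/|cos(xi)| (Equation 3.52); its reciprocal |cos(xi)| is the
    weight applied to the plane integrals in Equation 3.53."""
    return 1.0 / abs(math.cos(xi))

# Example values for psi_max = 15 degrees.
psi = math.radians(15.0)
for deg in (5.0, 30.0, 60.0, 90.0):
    print(deg, orlov_l_2d(math.radians(deg), psi))
```

The first branch of orlov_l_2d reproduces the plateau at $L = \pi$ for plane normals within $\psi_{max}$ of the cylinder axis, and the second branch falls off toward the equatorial normals, as plotted in Figure 3.5.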
3.4 Reconstruction Using All Detected Events One of the principal ideas of the work of Rogers et al.[53], implemented by Kinahan and Rogers[33], was to complete the data set by projection of a scaled crude image onto those regions of the X-ray Transform planes shadowed by the missing detector surface. The premise behind this and other iteration techniques[20,13] that are used to complete the data set incrementally for inclusion of all events, is that the iterated image can incorporate all measured lines-of-response and thus improve the signal to noise ratio of the image. This is made possible by using the information in images formed from the transaxial events, low-statistics images, to estimate the missing regions of the X-ray Transform plane or its Fourier Transform or the Fourier Transform in the image volume. The same technique can be used for the Radon Transform. Chapter 3. The Reconstruction Algorithm 36 | (degrees) L _ 1™ ve r sus £ fo r Va r i ous 8 0.2-0 15 30 45 60 75 90 £ (degrees) Figure 3.5: The dependence of the Orlov function L 1 on £. Chapter 3. The Reconstruction Algorithm 37 The crude image is formed by those projections that are measured completely. For V'max = 0, these projections lie on the equatorial curve given by Equation 3.51. Using Equation 3.52, the recovery operator for the transaxial projections is This is the explicit form of the operator used in this thesis to reconstruct the initial crude image, /(x), prior to completing the projections in the iteration step. The term LID = l c o s £ | - 1 *s the correction for the changing solid angle of the measured plane integrals h(p, £,</?T, |) on the polar grid determined by equal intervals in £ and <pT. The angles describing the orientation of the vector fi are Then in Equation 3.53 | cos£| = | sin#„| is just the Jacobian of the spherical polar coordinates representing fi. The incomplete detector coverage of the directions r on the unit hemisphere results in missing data in the projection plane ST[53], (Figure 3.6). The projection of either detector rim traces out an an elliptical region in which no true coincidences can be measured. Of course, the projection of the detector FOV is intersects that region, shown by the hatched regions in Figure 3.6, for a cylindrical FOV with RFOV = RD/2. The area of incomplete measurements grows in size as 0T approaches its limits, (Equation 3.39). The figure shown has a V^ et = 15 degrees, and is projected at an angle 6T = 80 degrees. For a cylindrical detector with radius RD and height H (Figure 2.1), the unmeasured region in ST for a given 9T is given by the ellipses \ cos0T J 2 Chapter 3. The Reconstruction Algorithm 38 Figure 3.6: The unmeasured regions on the S T plane. Chapter 3. The Reconstruction Algorithm 39 Any incomplete plane integrals, h'(p, £; r), determined by line integrals in E T , that have some part of their line integral inside the region given by Equation 3.54 (and inside the projected FOV), cannot be used in the reconstruction. By the spatial invariance criterion of Equation 3.39, all the events at the orientation 9T must be excluded from the reconstruction if any line integral is incompletely measured. This rule can be relaxed to exclude only those h'(p, £; r) which cut the unmeasured elliptical regions (Chapter 5.2.2). However, the events in the incompletely measured line integrals can be used in the reconstruction by adding in the missing parts of the plane integrals, h'(p, £;r), determined by numerical integration over planes in the crude image /(x). 
In terms of computer time, it is more efficient to perform the numerical integration by line integral. That is, the X-ray Transform, g(xT; r), with x T = (lx, ly), computed from the crude image, /(x), is performed numerically for all x T falling in the ellipses of Equation 3.54. The g(xT; r) are scaled, and added into the incomplete plane integrals h'(p, £; r) in the same manner as the measured events are initially histogrammed into these planes (Chapter 4). There is a scaling factor, K, required to give the numerically computed ^(x T;r) the same weight as the #(xT; r) measured by the number of counts in its statistical measurement. The scale factor can be determined from the measured event count in an X-ray Transform histogram bin, n(lxi,lyj,(pk,9e), which can be denoted by n,-, and the X-ray Transform computed numerically from the crude image, g(lxi,lyj,<pk,9e), which can be denoted These two quantities must be strictly proportional, within the statistical error of the n, and within the accuracy that the crude image /(x) represents the true activity distribution /(x). For example if the image has a DC bias due to filtering, the line integrals <ji will have a component proportional to the length of the (3.55) Chapter 3. The Reconstruction Algorithm 40 path of the integral in the FOV. Assuming any D C bias can be subtracted from the crude image and that statistical noise and artefacts along any line of integration will have a zero or negligible mean, then the following holds: m = K-9i (3.56) The scaling factor K can be determined by averaging the ratio of n,- to g~i over a subset of the index i for which the nj are large (and have least variance, and also where the g~i are non-zero). The constant K can also be determined by taking the sum over i on each side of Equation 3.56. K is then determined by K = (3.57) E, 9i This ratio can be evaluated without the numerical evaluation of the by line integral, if the summation is restricted to the transaxial events, 6T — 7 r / 2 . Noting that for these events at any given <pT, / s / x ^ ( x ^ ^ ) = / F w d x / ( x ) . Approximating the integral over S T by j dxTg(xT, <pT, ^) « AlxAlyJ2J2g(lxi,lyji<pT) = AlxAlyAt fr), V(lxi, tk, fr), Z(lyj)) « 1 k and approximating the integral of / ( x ) over the FOV by J f o v dx/(x) « (Avf x:EE!/;>*k) = (AvfS (3.58) the sum over the g~i for a given <pT can be estimated as Chapter 3. The Reconstruction Algorithm 41 or £ UVr) = ( A t ) 7 S j with 7 the above product of the ratios of the integration intervals. In the actual implementation, the quantity 9 I = A* was used for the projection estimate. Denoting the number of projection directions ipr by Nv, and the sum of the event count in the 9T = TT/2 projections as 7Y|, then the scale constant determined from the transaxial projections, KR, from Equation 3.57, with §i in the denominator, is K, = j^g. (3.59) The iteration procedure can complete all the partially measured plane integrals h'(p,i;;T) in the S T planes up to a maximum acceptance angle V'max now given by the detector acceptance angle V'det- Depending on the noise propagation from the integration of crude image it may not be desirable to extend the V'max U P to V'det ? hut only to some intermediate angle. The effect of noise propagation in the iteration step will not be examined in this thesis. It is possible to improve the statistical quality of the crude image by including all oblique events into the initial image (Chapter 5.2.2). 
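The global scale estimate of Equation 3.57 amounts to a ratio of sums over matching bins. The sketch below assumes the bins are supplied as flat arrays n (measured counts) and g_hat (line integrals of the bias-subtracted crude image); the optional count threshold and all names are illustrative choices, not the thesis's implementation.

```python
import numpy as np

def scale_factor(n, g_hat, min_counts=0):
    """Estimate K in n_i ~ K * g_hat_i (Equations 3.56 and 3.57).

    n          : measured event counts per X-ray Transform bin.
    g_hat      : the same bins computed by numerical line integration of the
                 bias-subtracted crude image.
    min_counts : optionally restrict the sums to well-measured bins.
    """
    n = np.asarray(n, float).ravel()
    g_hat = np.asarray(g_hat, float).ravel()
    mask = (g_hat > 0) & (n >= min_counts)
    if not np.any(mask):
        raise ValueError("no usable bins for the scale estimate")
    return n[mask].sum() / g_hat[mask].sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    g_hat = rng.uniform(0.5, 2.0, size=1000)   # stand-in line integrals of a crude image
    n = rng.poisson(7.5 * g_hat)               # counts drawn with a true scale of 7.5
    print("estimated K:", scale_factor(n, g_hat))
```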
The reconstruction algorithm including the iteration procedure is summarized as follows: 1. Reconstruct the crude image from the transaxial events using Equation 3.53 for an extended object or from all complete projections using Equation 3.41 for an object smaller than the detector. 2. Remove the DC bias of the crude image /(x). Chapter 3. The Reconstruction Algorithm 42 3. Compute the line integrals <JT(X T ;T) , (Equation 3.55), of the crude image in the unmeasured regions of the ST plane given by Equation 3.54. 4. Scale each line integral by the constant Kz. 5. Add the scaled #(xT;r) to the appropriate set of h'(p,£\T) (by the method de-scribed in Chapter 4). 6. Reconstruct the complete projection set according to Equation 3.41 with the detector efficiency function, L 2 D , modified to reflect the increased acceptance angle. The projection step is simplified in this implementation of Equation 3.41 compared to the implementation required for reconsruction according to Equation 3.47. Indexing the redundant plane integral measurements by p, a, <pn, 8n in h(p, a; n) would require that a look-up table be formed that defines where the intersections of the elliptical regions, which are simply written as a function lx, ly, and 6T in Equation 3.54, occur in the set of variables in the argument of h(p, a; fi). While such a look-up table could be used, a choice of the discretization of the variables <pT and 6T would be required in addition to the discretization of ipn and 9n made initially. Also there would be a choice of whether to compute those parts of the incompletely measured plane integrals h(p, a; fi) by a surface integral in the crude image, which would be computationally expensive, or as in the case for the h(p, £; r), to compute only a set of line integrals, #(xT;r), and add them to the incomplete h'(p, a; fi) which would require additional interpolations in the angular variables. Chapter 4 Implementation 4.1 Histogramming the Radon Transform The application of Equation 3.41 or Equation 3.53 requires the formation of the dis-crete four dimensional data set approximating h(p, £;r). This discrete histogram will be denoted by h(p,,£j,<pk,9e), where pt,£,j,ipk,0i are the indices corresponding to the variables p, £, ipT, 6T in the discrete representation of h(p, £, ipT, 8T). Possible histogram-ming methods include: 1. Histogramming directly on an event-by-event basis into the array h(pt, £j,</>fc,^ )-2. Histogramming each array element of the four dimensional histogram of <7(xT; r) indexed by the discretized variables lx, ly, <pT, 9T, followed by: a. Transforming the discretized coordinates lx and ly into p, and and adding the binned <7(xT;r) into the h(p,,£j,(pk,8e). b. Numerically evaluating the line integrals of Equation 3.36 in each plane ST of the histogram of y(xT; r). The first method, that of event-by-event histogramming, was chosen for this thesis, though it is the most computationally intensive method of those listed, because it discretizes the event coordinates only once. Hence there are none of the problems asso-ciated with resampling a discrete function. Essentially, the histogramming consists of assigning a given event characterized by its direction, r, to a series of planes containing 43 Chapter 4. Implementation 44 the event line, ie. to a set of planes with normals, fi, such that r • fi = 0. This set of planes is defined by taking equal steps in angle, £, about the direction r for a discrete set of T on the region G. 
4.1.1 Histogram Dimensions

The size of the four dimensional histogram in bytes is given by

    M_b = N_p N_ξ N_φ N_θ · w_s    (4.60)

where the variables N are the number of elements in the histogram dimension corresponding to the subscript, and w_s is the number of bytes per histogram word. Expressing the sampling in p by the number, n_p, of samples needed to span R_FOV, the radius of the field of view of the detector,

    n_p = R_FOV / Δp,    (4.61)
    N_p = 2 n_p + 1.

The angular sampling must result in linear sampling of Δp at a distance of R_FOV from the origin[3]. For the angular variables, denoted by φ,

    Δp ≈ 2 R_FOV sin(Δφ/2),

so that

    Δφ = 2 arcsin( 1 / (2 n_p) ).

Then for ξ and φ_T on the interval [0, π),

    N_ξ = N_φ = NINT( π / Δφ ) = NINT( π n_p )    (4.62)

where NINT is the integer arithmetic function, nearest-integer. For the cylindrical detector geometry, θ_T is symmetric about π/2 with limits determined by ψ_max:

    π/2 − ψ_max ≤ θ_T ≤ π/2 + ψ_max.    (4.63)

In the case of θ_T, the bin centres are assigned values in the range given in Equation 4.63, resulting in

    N_θ = 2 ψ_max / Δθ_T + 1    (4.64)

with Δθ_T = Δφ. Then

    N_θ = NINT( 2 n_p ψ_max + 1 ).

In terms of the number of samples in p, Equation 4.60 can be written

    M_b = (2 n_p + 1) [NINT(π n_p)]² NINT(2 n_p ψ_max + 1) · w_s.    (4.65)

4.1.2 Histogram Procedure

Assume that the coincidence data is presented in list mode by the four parameters l_x, l_y, φ_T, and θ_T. An individual event in this list is processed as follows:

1. The projection direction given by (φ_T, θ_T) is discretized to (φ_k, θ_l) by scaling, where θ_T may be offset by a bias b_θ, determined by ψ_max and the bin width Δθ_T, so that θ_l ≥ 0.

2. Within the projection (φ_k, θ_l), the discrete position of the line specified by p and ξ is evaluated for each ξ_j in the set of N_ξ angles according to

    p_i Δp = x_T · n̂(ξ_j) = l_x cos ξ_j + l_y sin ξ_j = l_x cos(j Δξ) + l_y sin(j Δξ)    (4.66)

where p_i may be biased by −R_FOV/Δp to have p_i ≥ 0.

3. The histogram is incremented at the discrete coordinates according to

    h(p_i, ξ_j, φ_k, θ_l) = h(p_i, ξ_j, φ_k, θ_l) + 1

for each of the N_ξ coordinate pairs (p_i, ξ_j).

The cost of the histogramming can be expressed as the number of multiplications needed to form it; this depends on the number of events measured, N_e. In addition to the multiplications used to evaluate l_x, l_y, φ_T and θ_T from the original detector coordinates of the event-line, an additional

    M_h = 2 N_e N_ξ

multiplications are needed, assuming that the cosine and sine of the angles ξ_j in Equation 4.66 are precalculated, multiplied by 1/Δp, and tabulated (a short illustrative sketch of the sizing and histogramming steps is given at the end of this section). This expression, M_h, represents an additional computational cost in histogramming the Radon Transform of Equation 3.36 compared to the X-ray Transform. However, this computing cost could be reduced by employing a three dimensional lookup table containing the value of p_i for each ξ_j, for a fine binning in l_x and l_y.

4.2 The Discrete Reconstruction Algorithm

Any of the recovery operators of Equation 3.38, Equation 3.41, Equation 3.47, or Equation 3.53 can be implemented by evaluating the integrals and derivatives discretely. The integrals can be approximated by discrete summation, and the Laplacian or second derivative can be implemented by discrete convolution. Intermediate calculations using the discrete functions can be performed by interpolation.
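Before moving on to the filtering, the sketch below ties together the sizing rules of Equations 4.61-4.65 and the per-event loop of Section 4.1.2 (Equation 4.66). The rounding of the angular bins, the half-bin offsets and the array layout are simplifications assumed for this illustration, not the TRIUMF code; with n_p = 23 and ψ_max of about 8 degrees it reproduces the N_p = 47, N_ξ = N_φ = 72, N_θ = 7 histogram quoted in Chapter 5.

```python
import numpy as np

def histogram_dimensions(n_p, psi_max):
    """Histogram sizes from Equations 4.61-4.65 (n_p samples spanning R_FOV)."""
    N_p = 2 * n_p + 1
    d_phi = 2.0 * np.arcsin(1.0 / (2.0 * n_p))        # angular step matching dp at R_FOV
    N_xi = N_phi = int(round(np.pi / d_phi))          # ~ NINT(pi * n_p)
    N_theta = int(round(2.0 * n_p * psi_max + 1.0))
    return N_p, N_xi, N_phi, N_theta, d_phi

def histogram_events(events, R_FOV, n_p, psi_max):
    """Event-by-event Radon histogramming (Section 4.1.2).

    events : array of shape (N_e, 4) with columns (l_x, l_y, phi_T, theta_T),
             angles in radians, theta_T measured from the z axis.
    Returns h[p_i, xi_j, phi_k, theta_l].
    """
    N_p, N_xi, N_phi, N_theta, d_phi = histogram_dimensions(n_p, psi_max)
    dp = R_FOV / n_p
    xi = np.arange(N_xi) * d_phi
    cos_xi, sin_xi = np.cos(xi) / dp, np.sin(xi) / dp     # precomputed, folded with 1/dp
    h = np.zeros((N_p, N_xi, N_phi, N_theta), dtype=np.int32)

    for l_x, l_y, phi_T, theta_T in events:
        k = int(phi_T / d_phi) % N_phi                               # phi bin
        l = int(round((theta_T - (np.pi / 2 - psi_max)) / d_phi))    # theta bin, biased >= 0
        if l < 0 or l >= N_theta:
            continue
        p_i = np.rint(l_x * cos_xi + l_y * sin_xi).astype(int) + n_p   # Equation 4.66
        ok = (p_i >= 0) & (p_i < N_p)
        h[p_i[ok], np.nonzero(ok)[0], k, l] += 1          # one increment per xi_j
    return h

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ev = np.column_stack([rng.uniform(-5, 5, 200), rng.uniform(-5, 5, 200),
                          rng.uniform(0, np.pi, 200),
                          rng.uniform(np.pi / 2 - 0.1, np.pi / 2 + 0.1, 200)])
    h = histogram_events(ev, R_FOV=19.0, n_p=23, psi_max=0.14)
    print("histogram shape:", h.shape, " total entries:", h.sum())
```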
Implementation 47 4.2.1 Filtering The Laplacian in Equation 3.38 r V 2 (27T)2 and the second derivative with respect to p in Equation 3.41 1 d2 (2TT)2 dp2 have Fourier Transforms i/ 2, from Equation 2.8, where in the first case v is the magni-tude of the frequency space vector, i / , in three dimensions, or in the latter case, v is the frequency space variable in the one dimensional profile. Hence using either formula, the inversion procedure, which in frequency space includes the multiplication by u2, not only amplifies high frequency components of the backprojected image required to remove the blurring caused by the convolution with the P S F , but also enhances high frequency noise. At sufficiently high frequencies there is no signal and the amplified noise will dominate the image unless some method to suppress the noise is employed. The problem is corrected by introducing an apodising window, W(u), which has the property of limiting the band of frequencies on which the filter v2 acts, and also serves to shape the frequency response within that interval. In general, W(u) = U^W\u) (4.67) where the function H(x) is defined in Equation 2.26, and its use here separates the bandlimiting and smoothing functions of W(u). The frequency space filter due to the differentiation can then be expressed as = A\v)W\v) (4.68) Chapter 4. Implementation 48 where A'(u) is the bandlimited v2 function (\u\ < vc)1, and vc is some cut-off frequency. The window function W'(u) has the property that it preserves the shape of the v2 filter at low frequencies and damps high frequencies, up to i / c , which are dominated by noise and aliased Fourier components. 4.2.2 Aliasing, Sampling and Interpolation The discrete nature of the implementation of analytic reconstruction formulas can produce errors at three stages of the reconstruction. 1. In the formation of the projection histogram. 2. In the sampling and windowing of the object space filter. 3. In the backprojection, where interpolation is used to reference the filtered projec-tions and numerical integration estimates the integrals over the angular variables. Aliasing is a consequence of sampling a function that is not bandlimited, or of sam-pling a band-limited function too coarsely [2]. Under these conditions, Fourier spectrum components above the Nyquist frequency, uN, given by " = m ( 4 - 6 9 ) are folded down to frequencies less than vN preventing the sampled function from being exactly recovered by interpolation of its samples. The choices for recovery of such an under-sampled function are to accept the incorrect function with its folded frequency components, or to employ a window W'{v) to reduced the aliased content of the spectrum, and then to recover the modified function by interpolation. Both approaches yield only an approximation to the desired function, but the latter also 1The vertical bars | | will be used to unambiguously represent the magnitude of the one dimensional variable v of the radial profile in three dimensional space or of the three dimensional vector v. Chapter 4. Implementation 49 has the benefit of reducing high frequency noise. To limit the amount of aliasing, the filter cut-off frequency vc in Equation 4.68 should not exceed vN. This also can be approximated by making the smoothing component of the filter, W'(u), small for v > uN. The measured data in PET is sampled statistically by the radioactive decay process prior to detection. The distribution of the activity may have arbitrarily high spatial frequency content. 
The effects of positron range and intrinsic detector resolution limit the spatial frequency content of the measured data. The radioactive sampling is suf-ficient to recover the bandlimited detected distribution of events with statistical error related to the number of detected events. However, the practical limitation of computer processing speed requires that the data be histogrammed prior to the implementation of inversion. Conventional histogramming consists of the operation of convolving with a rectangular window n(p/Ap) followed by sampling at the interval Ap (essentially integrating over juxtaposed equal intervals). The equivalent operation in Fourier space to the histogramming in object space is multiplication by Ap.smc(7ri/Ap), where the sine function is defined as, . . sinfa;) . M sinc(x) = — ( 4 . 7 0 ) x followed by sampling at intervals in frequency of 1/Ap. Then the histogram formation initially modifies the spectral components of the data by filtering the Fourier Transform of the data with the sine function, which smooths in the pass-band, \u\ < uN, and allows the passing of higher frequency components into the stop band, \u\ > uN, which will then be be aliased to lower frequencies. This effect can be minimized by reducing the histogram bin size at the expense of increased storage requirements. The histogram-ming function n(p/Ap) can also be replaced by a linear or higher order function that has less smoothing within the pass-band and smaller sidelobes in the stop-band. The Chapter 4. Implementation 50 function would distribute the event position in a manner opposite that of interpolating, through a number of histogram positions. These functions will be discussed in terms of interpolating kernals later in this section. Given sufficient computing power, the data need not be stored in a histogram, but could be filtered and backprojected event-by-event through to the image[38]. In this manner, the image can be reconstructed on samples in the image volume separated at a spacing consistent with the Nyquist criterion and would allow for the exact recon-struction of the image (to within statistical error) if an exact interpolating kernal were used. The other component of recovering a sampled function is the interpolating func-tion used. The sine function Equation 4.70 is the interpolating kernal used to exactly reconstruct a correctly sampled bandlimited function [2]. Sampling a function at an interval of Ap, in signal space, causes periodic replication of the Fourier Transform at intervals of 1/Ap in frequency space. The exact interpolator must reject the replicas of the Fourier Transform and reproduce the Fourier Transform centered at v = 0, iden-tically. The corresponding frequency space window is Tl(v/2vN), which has transform (1/Ap) sinc(irp/Ap). When convolved with the sampled function, this function recov-ers all the details of the original prior to sampling. Use of the sine function requires computing the sine and quotient over an extended range of its argument. Because of this, it is not used routinely. Other interpolaters include nearest neighbour, linear and higher order functions of the Langrange interpolation functions [56] which approach the sine function as the order increases. Cubic-spline approximations to the sine function have also been described[44] for use as interpolating functions. Chapter 4. 
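To make the pass-band and stop-band behaviour concrete, the short sketch below evaluates the frequency responses quoted here: rectangular histogram binning contributes sinc(πνΔp), while the linear (triangular) kernel discussed next contributes sinc²(πνΔp). The numbers are only an illustration of the trade-off, not part of the reconstruction code.

```python
import numpy as np

def response_rect(nu, dp):
    """Fourier response of rectangular histogram binning: dp * sinc(pi * nu * dp)."""
    return dp * np.sinc(nu * dp)              # np.sinc(x) = sin(pi x) / (pi x)

def response_linear(nu, dp):
    """Fourier response of the triangular (linear interpolation) kernel: dp * sinc^2."""
    return dp * np.sinc(nu * dp) ** 2

if __name__ == "__main__":
    dp = 1.0                                  # histogram interval (arbitrary units)
    nu_N = 1.0 / (2.0 * dp)                   # Nyquist frequency (Equation 4.69)
    for nu in (0.5 * nu_N, nu_N, 1.5 * nu_N):
        print(f"nu = {nu:4.2f}: rect = {response_rect(nu, dp):+.3f}, "
              f"linear = {response_linear(nu, dp):+.3f}")
```

At half the Nyquist frequency both kernels smooth only mildly; beyond the Nyquist frequency the squared response of the linear kernel rejects the folded components noticeably better, which is the behaviour described above.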
Implementation 51 The most commonly used interpolator is the linear intepolator, A(p), which is of the form - S p - M i f |p| < Ap otherwise A continuous function, u(p), is recovered at positions p from a sampled function, u(p,), by convolving with A(p). The Fourier Transform of A(p) is the function Ap sinc2(iruAp) which although it has a greater amount of smoothing for frequencies less than uN, it does reject stop-band frequencies better than the sine window of the nearest neighbour interpolator n(p/Ap). Both the cubic-spline approximation and the linear interpolator were tested for use in the reconstruction algorithm. The linear interpolator was 5 times faster than the cubic-spline interpolator owing to the cubic polynomial evaluations in the latter. The linear interpolation was adopted for use in the reconstructions as both methods gave very similar image reconstructions. 4.2.3 Windows A smooth roll-off in the function W'(u) toward the cut-off frequency vc reduces ring-ing (the so-called Gibb's phenomenon) which would occur for an abruptly truncated filter[25,10]. The precise functional form of W'(u) should be based on the noise and aliased frequency content of the frequency spectrum. This is described by optimal or Weiner filtering [49] where the filter rolloff is determined by the ratio of the power spectrum of the signal-to-noise ratio. In the absence of a noise model, commonly used unoptimized window functions include the Hanning and Hamming versions of the gen-eralized Hamming window, as well as the Parzen and Butterworth windows[6,25]. Some of the window functions previously reported in the literature for inverting Chapter 4. Implementation 52 the Radon Transform will be compared. In Shepp's implementation of the Radon Transform inversion for MRI, the object space filter employed to approximate the second derivative was the second difference filter aad{p) = (^y((S(P ~ Ap) + 8{p + Ap) - 28p) (4.71) which has a Fourier Transform with unbounded support, Asd{v) = ^ ( 1 - c o s ( 2 ™ A p ) ) . (4.72) This filter is effective as the inversion filter because the offset cosine is proportional to v2 for small u, (from its Taylor series), Asd{v) » (Apv)2, \v\ < 1. However, it fails to roll-off to near zero at uN. Instead it reaches its first of many maxima at \v\ — vN. Figure 4.7 shows the first lobe of the filter, Asd, which continues periodically on either side of v — 0. (The other filters in this figure are discussed later in this section). The smoothing function in Shepp's implementation is performed by the sine2 win-dow of the linear interpolator in the backprojection. Hence with the second difference filter, asd(p), as the inversion filter, there is no explicit bandlimiting performed beyond the approximation of the sine2 to a bandlimiting window. This occurs after the filtering during the backprojection and permits a greater content of aliased frequencies into the image than with an explicit band limit given in Equation 4.68 with vc < vN. The functional form of the object space convolution filter for the case of the band-limited filter, A'(u), (with W'(u) = 1) has been given by Chiu et. al.[ll] for the FBP form of the Radon Transform inversion for the complete spherical detector. This filter would generate ringing in the image via the Gibb's phenomenon if implemented without the smoothing performed by a tapered function W'(v). er 4. Implementation Figure 4.7: Fourier space niters for RT and X R T inversion. Chapter 4. Implementation 54 In the implementation of Dykstra et. 
al., a comparison was made between the second difference filter, Asd, and the filter of Equation 4.68 with the smoothing function given by the Hanning window W'H{v) = \ (cos (^) + l) . (4.73) Along with the linear interpolation used in the backprojection, the Hanning windowed v2 filter is a stronger filter than the second difference filter, As^. The form of Equa-tion 4.68 also has the explicit bandlimit condition, which could be an advantage over Asd depending on the practical factors that determine the aliased frequency content in the projections. The object space equivalent filter can be determined from Equations 4.68 and 4.73 from the convolution theorem [2] {U(v)V(v)} = u(p) * v(p). (4.74) The Fourier Transform of W'H(v) is «WP) = \ [%) + \ (*(P - + * (P + <4J5> and the Fourier Transform of the bandlimited v2 function, A'(u), is given by AP) = ( ^ J cos(2^cp) + 2 ( 7 r ^ 2 ) 3 " 1 8 1 x 1 ( 2 ^ ) . (4.76) Then using Equation 4.74, the Fourier Transform of A(u) with the Hanning window as the smoothing filter is <P) = 5 + \ («• (P- ir) + {P + )] • (*•") This is the filter form used by Dykstra et. al. for the FBP implementation of the Radon Transform reconstruction for the spherical detector and it is the radial part of the filter adopted for use in this thesis. Chapter 4. Implementation 55 The modification of this filter, or any of the above radial filters, for the truncated cylindrical geometry is to weight the filter a(p) by the detector efficiency function L, where L will be L2D or LID depending on the imaging geometry. For the three parameter Radon Transform, indexed by p and the angles 6n and <pn of Equation 2.16, the filter becomes aL{jp,9n) = L-\6n)a{p) (4.78) and for the four parameter Radon Transform implemented in this thesis, h(p, £;r), indexed by the angles <pT, 9T, £ the filter is aL{p,WT) = L-\WT)a(p). (4.79) This filter is implemented by a one dimensional discrete convolution2, h*(Pt, <Pk, Qt) = HP* ~ Px'i iv Vk, Qt) aL{p%>, Qt) (4.80) t' hence the filter must be sampled prior to performing the convolution. The act of sampling the object space filter at the interval A p replicates the Fourier Transform at intervals in frequency of 1/Ap. The replicated Fourier Transform is not exactly that of Equation 4.68 because the filter must also be windowed in object space, ie. the filter length must be finite. Using the window U(p/(njAp) to truncate the object space filter, results in convolution in the Fourier domain with the function sinc(irvnfAp). As rij increases the sine function approaches a delta function, and the Transform is unmodified by the convolution. For small nf, Fourier domain ringing and smoothing of the replicated tranform will occur. The filter length Nj = 2nj + 1, must be chosen to be large enough to minimize these effects while keeping the filter length as small as possible to minimize filtering time. The radial part of the filter ar,, for vc = vN, given by Equation 4.77, is shown in Figure 4.8 as a(p). The degree to which these ideal curves represent the true filter func-2 A discrete correlation may also be used replacing p, — p,i by p, + p,/, as the filter is even. Chapter 4. Implementation 56 O b j e c t s p a c e f i l t e r s 4 6 8 P (Ap) 10 Figure 4.8: Object space filters for R T and X R T inversion. Chapter 4. Implementation 57 tion depends on the interpolating function. A linear interpolator produces a piecewise linear curve between the discete samples taken where pjAp is integer[51]. For comparison the filter of Chelser et. 
al.[10], which is the Hanning windowed form of the v filter needed to invert the backprojected X-ray Transform image for a spherical detector (73^(x), of Equation 2.12, is also shown in Figure 4.8 as b(p). It is given by the inverse transform of B(u)=l-^U(^-)W'(u). (4.81) If the smoothing window W'(u) is given by Equation 4.73, the one dimensional Fourier Transform of the radial profile of Equation 4.81 has the form where b'(p) has the form b'(p) = s\n{2™cp) + — ^ ( c o s ^ T r i / e p ) - 1). (4.83) The filter a(p) is more compact, in object space, than its X-ray Transform counterpart, b(p), as shown by the position of the zero-crossing in Figure 4.8. This occurs because the transform of a(p) contains higher frequency components, due to its v2 component, than the transform of b(p), with its factor u, as seen in Figure 4.7. It is a property of the Fourier Transform that widths, which for these filters may be measured as the second moment of the filters, in the two domains are reciprocally related[2]. The three dimensional object space filter is even more compact than its one dimensional profile for a given radial frequency space window. This is due to the additional factor v2 that arises in the Jacobian of the inverse three dimensional Fourier Transform in its spherical polar form (ie. if a three dimensional filter is specified by its radial profile, the weight of the filter increases as v2 with increasing frequency). Chiu et. al. have Chapter 4. Implementation 58 demonstrated this for the case of the one and three dimensional bandlimited filters of This compactness confers a speed advantage in the one dimensional filtering of the FBP implementation of the Radon Transform inversion, and also in the three di-mensional filtering of the backprojected image in the space of the object. The three dimensional filtering will be discussed in Chapter 5.2.1. The frequency space filters discussed above are shown in Figure 4.7. As noted, the filter A(u) is shifted to higher frequencies than the filter B(u). However, the two inversion techniques, for the Radon Transform and the X-ray Transform have the same theoretical noise performance. This is true as it the nature of the backprojection process to produce an backprojected image of the form <r3ft(x), (Equation 2.18), for the Radon Transform and of the form o^x(x), (Equation 2.12), for the X-ray Transform. If it were possible to divide the backprojected image into a true backprojected image and a backprojected noise image, the backprojected noise image alone would have the form, <73^ (x) or <73x(x). The backprojected Radon Transform image has a lower frequency content than the backprojected X-ray Transform image according to their Fourier Transforms 1/u2 and %/u. The correction of these backprojected noise images in the form of Equation 4.81 and 4.68 necessarily corrects the frequency spectra of the two images to produce the same noise image in either case. If the same window function, W'(u), and the same cut-off frequency uc are used then the X-ray Transform inversion and the Radon Transform inversion produce the same noise image. It is interesting to note that the weighted backprojection of the Radon Transform, produces the same functional form of the backprojected image, O3R(X), regardless of the detector geometry, which only changes the statistical noise content of the backprojected v 2 with W'(u) = 1. (4.84) Chapter 4. Implementation 59 images. 
While the backprojection of the X-ray Transform, Equation 2.11 produces either the backprojected image cr3x(x) for a spherical detector, or for the truncated detector, the backprojected image, cr3X(x) given by the convolution of the desired image /(x) with the PSF, P(x, 6) of Equation 2.25. The difference is due to the fact that for any detector with a region G satisfying the Orlov conditions of sufficiency and necessity discussed in Chapter 2, all the plane integrals of the object will be measured, with different weight, but all line integrals cannot be measured using anything less than a fully spherical detector. In the correction of o3p_(x), the filter function v2 is the used for both the spherical and truncated detector geometries, while in the correction °f 03jr(x), the V/TT filter used for a spherical detector must be replaced by u/L for frequency space filtering of the Fourier Transform of the image o'3X(x.). In the discrete implementation, the filters will differ in their noise performance because of the aliased frequency components found in the sampled Fourier Transforms of both the X-ray Transform and Radon Transform projections. These aliased frequency components will be due to both the signal and noise content of the projections. If for a given smoothing filter W'(u), the region of aliasing and low signal-to-noise ratio extends down to where the ratio of the two filters are significantly different, then there will be different noise propagation into the two images. This argues for the use of the explicit bandlimited form of the filter, A(u), rather than the second difference filter, Asd(v). Note that the ratio of the two filters will vary as a function of orientation (ii) when imaging with a truncated detector. For the spherical detector, the window W'(v) can be modified to bring the tail of the filter A(y) into the same shape as for B(v), but this will produce correspondingly smoother images than the B(v) filter acting on the X-ray Transform image. The aliased content of the projections is the factor determining whether noise propagation will be better or worse in the two methods. To make the noise propagation the same it is Chapter 4. Implementation 60 necessary to ensure that the projections are not significantly aliased during histogram-ming or filtering. This would entail making the correct choices for the histogramming interval, Ap, to account for the Nyquist criterion, and perhaps distributing the events among a number of sample points in the projections as discussed earlier. The radial dependence of the object space filter for the inversion of the Radon Transform is smaller in extent than that required for the inversion of the X-ray Trans-form. The length Nj of the filter can be as small as 3 for the second difference filter and Nj ~ 20 is adequate for a(p). As implemented by Kinahan[34], the two dimen-sional convolution filter required for the object space inversion of the X-ray Transform is on the order of the size of the projection dimensions due to the singular nature of the explicit form of the integrand in Equation 2.28. The compact form of a(p) in the Radon Transform inversion does not imply that the reconstruction is any more local than the for the X-ray Transform. In the Radon Transform h(p,£; r), the events have been extended along lines in ST in the formation of the discretised h(p,£; r) from events, whereas in the X-ray Transform, the events are localized to a point in E T . 
Then the extended filtering of the local projections in the case of the X-ray Transform has been modified to the local filtering of extended projections in the case of the Radon Transform, effectively shifting some of the computational work load from filtering in the X-ray Transform to histogramming in the Radon Transform. This shift in work load may be advantageous depending on the computing power available and the data acquisition rate. The ceiling in acquisition rate will be determined by the detector deadtime for high activity PET scans, or by the tracer decay rate in low activity stud-ies. If the combination of front-end computing power and acquisition rate is such that the histogramming of the Radon Transform can be performed in real time, then the waiting time between the end of the scan and the output of an image from the recon-struction program will be reduced because the filtering is faster. The filtering speed of Chapter 4. Implementation 61 the Radon Transform and X-ray Transform inversion methods is compared in the next section. 4.2.4 Cost of Filtering In terms of the number of multiplications, the cost of performing the one dimensional convolution filtering with the precomputed filter value, ai(p, £, 6r), of length Nf is ~ NpNf. This has to be performed times for each of the NgNv projections. Then the total number of multiplications needed to filter the 4 parameter discrete Radon Transform, h(Pi,£j,r?k,Qe) is, M / 4 = NgN^NpNj. (4.85) In this expression, the quantities Nv, N^, Np are usually of the same order, Ng is small (an order of magnitude less) in practice because of the small detector acceptance angles in most tomographs. Nj is also of small dimension as discussed previously. In the case of the three parameter histogram for the Radon Transform, f(p, ri), the number of multiplies in the filtering step is given by M / 3 = NgNvNpNf (4.86) where now the three dimensions p, ipn, 9n of the discrete Radon Transform are approx-imately the same. For a choice of linear and angular dimensions the same size as in the h(pt, (fk, @e), the filtering cost is less, but there has been an increased cost in the histogrammming over the angle a (Equation 3.47). The object space implementation of the X-ray Transform inversion of Kinahan et. al., which uses a two dimensional convolution with an extended filter of the same dimensions as the projection, N(x X N(y, has MfX = NeNvNex2Ney2 Chapter 4. Implementation 62 multiplications in its filtering step. The savings in filtering time of the four parameter Radon Transform method over the X-ray Transform method is the ratio M/4/M/x = Nj/Niy2, where ~ Np ~ N^. This factor could be a small as 1% for typical dimensions. As discussed previously, the histogramming of the Radon Transform, which performs the one dimensional integration of Equation 3.36, accounts for this difference by redistributing the computation. In the FFT implementation of the X-ray Transform inversion, the number of multi-plications is MFFT « NeN^N^NeyHog^N^Ney) using the fact that the two dimensional F F T , for a matrix of dimension N x M , requires 4iVM2 log2 NM multiplies[28], and that the FFT has to be inverted after an order NexNty filtering step. The filtering of h(pt, <pk, Oe) is somewhat faster, in terms of multiplies, than the FFT filtering as the ratio MJA/MFFT = Nj/(8 \og2(Nex / \/2)) for a typical choice of dimensions where Nty ~ Ntx/2 and N{x ~ Np ~ N^. For example, for Nf = 10 and Ntx = 128, the ratio is ~20%. 
4.2.5 Backprojection The filtered discrete Radon Transform, h*(pt,£v<pk,Qt), can be backprojected over the planes defined by its indices to form a reconstructed image. The backprojection of Equation 3.41 is the integration over the angles £, ipT, and 8T, of all the filtered Radon Transform plane integrals that pass through a given point in the image. One method of implementation of this backprojection is to project each image point, x, onto the plane ST at xT, and then perform the integration over £ about xT. This method, called voxel-driven backprojection, has the advantage that the image can be reconstructed over any set of sample points in the FOV, whether a line, a surface, or a volume. The Chapter 4. Implementation 63 reconstruction of the image does not depend on the shape or distribution of the voxels, apart from the need to meet the Nyquist criterion. For each discrete direction r , the projected position x T = (lx, ly) of the image point x is given by lx=X-lx, ly = X-\y (4.87) where lx and 1^  are given by Equation 2.31 and are functions of the projection angles (pk, 61. The integration over £ involves the computation of p = x T • n for each £ and then a one dimensional interpolation of the discrete filtered Radon Transform, h*(p„ ipk, dt) in the linear variable p. If Nm is the number of multiplies in the one dimensional interpolation step3, then the number of multiplies required for the straightforward implementation of the backprojection is Mb4 = N2NeNvNzN1D where N3 is the number of discrete positions, x, in the image, at which the backpro-jection is evaluated. This is a larger number of multiplies than that required for the three parameter Radon Transform backprojection by a factor of Ng because of the added histogram dimension. The cost of the backprojection was reduced by implementing an interme-diate interpolation step in the E T plane. The integration over £ was evaluated on a regular tesselation of ST. This integration is performed as above by calculating for each discrete angle £, the value of p = x T • n and using a one dimensional interpolation in p, but now for a discrete set of x T l J = (lxt, ly^). Projected image points x are found at real positions x T , from Equation 4.87. A two dimensional interpolation of the surrounding 4 discrete positions x T t ) J , is then used to calculate the integrated fo*(p,,£j, ipk, 9() at the position x T . This value is added to the image at x, and this procedure is then repeated 3NID— 2 multiplies, for linear interpolation. Chapter 4. Implementation 64 for all discrete positions in the image volume and for each projection direction (jpkiOt)-The number of multiplies for the backprojection by this method is M ' M = N*N9NVN2D + N^NoNyNtNtD where N2D is the number of multiplies in a two dimensional interpolation4, and where N{x2 has been taken as the number of discrete positions in the E T plane at which the integration in £ is performed. The tesselation of the E T plane and that of the image volume should both use the sampling interval given by the Nyquist criterion. As such Nex2 ~ Nv2. This intermediate interpolation saves about an order of magnitude in the backprojection time. This number of multiplies is comparable to that of the backpro-jection of the three parameter Radon Transform as well as to the backprojection of the X-ray Transform. The additional interpolation step may smooth the reconstructed image if the lower order interpolating functions are used. 
This same procedure can also be applied to the three parameter Radon Transform to reduce its backprojection time. The ratio of M M to M / 4 shows the backprojection is approximately an order of magnitude greater computation than the filtering. Backprojection is the bottleneck in all reconstruction methods, including Radon Transform inversion. The process of integrating h*(pt, ipk, Og) over £ in the E T plane forms a discrete filtered X-ray Transform, g*(lxt, lyj, ifk, Ot) (Appendix A). This filtered two dimensional function on the E T plane is a discrete form of the inner integral in Equation 2.28 and can be backprojected by means normally used for the X-ray Transform, including hardware backprojectors[31], or Fourier Transform inversion methods in the transform space of the image. 4For a bi-linear interpolation, N2D = 8 multiplies and 4 additions, or 4 multiplies and 8 additions. Chapter 4. Implementation 65 4.2.6 Projection Projection is required for the iteration step of the image reconstruction described in Chapter 3.4. The projection step is the analogue of the initial measurement procedure represented by the line integrals of Equation 2.9. As in the backprojection described in the previous section (4.2.5), the discrete line integrals are evaluated on a regular tesselation in ST in those regions of the plane where the data was not measured (Equa-tion 3.54). The line integrals g(xT; r) = f* dt / ( x 0 + tr) (4.88) are approximated by 9(lx„ly3,<PkA) ~ £ /(xo + < m A * f ) (4.89) where the limits of the integral, t\ to t2, represent the distances from the point Xo to the limits of the FOV along the direction f, (ipkfle), and Xo can be any point within the FOV on the line given by the coordinates (lXl, lV], <pk, Oe)- For convenience, the point xo = (^0) Vo, zo) is taken to be Xo — Xy — lx l;p ~\~ ly ly The value of / at any point given by its argument in Equation 4.89 is determined by a three dimensional interpolation procedure in the image volume5. The interpolation is performed at approximately Nv positions in the image for each line integral (if At = Av, the image sampling interval), but this is evaluated only for a subset of the Ngx2 discrete positions in the ST plane, those in the elliptical regions of Equation 3.54. The number of multiplications for each ST plane is Mp(0e) ~ [Ntx'{Ot)f NVN3D  5For a tri-linear interpolation, there are JV3£> = 24 multiplies for each interpolation. Chapter 4. Implementation 66 where [Ntx'(0t)]2 is the number of line integral evaluations in the ST plane at the orientation de. The total number of multiplications is determined by summing Mp(8t) over the Ng'Nv X-ray Transform planes, ST, in which the projection is performed. This implementation of the projection has two parameters, the interval At, for the step size in the integration variable, and the interval Alx, the step size in the E T plane on which the line integrals are evaluated. Chapter 5 Results and Conclusions 5.1 Monte Carlo Simulations Monte Carlo simulations of phantom objects were used to test the reconstruction algo-rithm. All simulations began with geometrically denned volumes of positron emitter. A list of event line coordinates was generated from simulated annihilations within the volumes. For the purposes of this thesis, the simulations omitted the effects of scattered gamma rays. The simulated data modelled photon statistics, but not the scatter, at-tenuation, singles or random coincidences described in Chapter 1. 
The detector model used for most of the studies had the geometry of the commercial ECAT 953B volume imaging tomograph. This tomograph has a ring diameter RD = 38cm, and a height of H = 5.4cm, giving it an acceptance angle of V'det = 8.1 degrees. For some of the simulations, detector resolution functions were also modelled to determine the effect of the discrete detection coordinates on the image reconstruction. The simulations were generated by the PHANTOM program developed at TRIUMF. 5.1.1 Reconstruction of Small Objects The results of reconstructing objects of small size compared to the detector height have been presented previously[55], and will not be discussed in detail. A heart-size ellipsoidal phantom with cold spots was first used to analyze the arte-fact content of an image reconstructed according to Equation 3.41. Because of the 67 Chapter 5. Results and Conclusions 68 small size of the phantom, 90% of the detected events could be included in the re-construction without the use of the iteration step. The reconstruction using the three dimensional Radon Transform algorithm, was shown to have significantly improved signal-to-noise ratio compared to two dimensional slice reconstructions computed from transaxial events (using approximately 30% of the simulated data). There were no major artefacts produced by the reconstrucion algorithm. The heart phantom was positioned near the tomograph center and was not ideally suited to show artefacts out near the edge of the F O V . In addition to the small ellip-soidal phantom, simulations were run with a transaxially extended, axially thin, offset elliptical cylinder. The elliptical cylinder was made axially thin (along the z-axis) to allow the study of axial spreading of the reconstructed images. Again, the reconstruction of this phantom was compared with the reconstructions obtained with a conventional two dimensional slice reconstruction algorithm and a modified two dimensional algorithm, one that histograms the three dimensional event data in a form such that it can be used by two dimensional reconstruction programs[18]. With this phantom, the same improvement in signal-to-noise ratio seen in the heart phantom over the conventional slice reconstruction method was observed. The modified two dimensional algorithm exhibited a severe transaxial artefact, a false drop in the reconstructed activity in the disk near the center of the tomograph. The artefact was not present in the three dimensional reconstruction. Simulations were also carried out that included the detector resolution function due to the interaction length of the gamma rays in the crystal and to the geometry of the saw-cuts isolating crystals in a single detector block. In these studies, a concentric ring artefact was found in the three dimensional images. This is due to the initial discretization of the event-line orientations by the 6mm square saw-cut crystals, and the interference pattern generated by rebinning this discrete function in the angular Chapter 5. Results and Conclusions 69 variable 6T. The problem is not fundamental to the reconstruction algorithm, but is due to the coarse discretization of event position in the crystals. The E C A T 953B is inherently a slice imaging tomograph that has been converted to a volume imager by giving it the feature of retractable septa. 
The detector resolution can be improved beyond that of the current machine[52], and if implemented these improvements will improve the performance of three dimensional reconstruction algorithms in general. In the interim, the effects of the coarse discretization of the detected coordinates in this type of scanner has been minimized by a judicious choice of angular bin size and the use of axial smoothing filters. 5.1.2 Reconstruction of Large Objects The phantom chosen to test the implementation of the single iteration step to include all oblique events was that of an off-center sphere. The simulated sphere had a radius R = 4cm and its center at X o = (3,3, l)cm. The sphere was offset to keep the integrated activity high while keeping V'max ~ 0. For this sphere V'max = -54 degrees, which was much less than the angular bin size, A<j> ~ 2.5 degrees. The offset of the phantom from the tomograph center also ensures there is no accidental cancelation of symmetric artefacts. For the purpose of comparison to other reconstruction images, the detected event count from the sphere can be expressed in terms of the number of events detected in the transaxial direction per unit volume of the sphere. For this simulation, the number of events in the transaxial direction was NK = 1.14 x 106, and the integrated transaxial in all projections was NT = 4.70 x 106, so that the three dimensional reconstruction produced by the iteration has on average four times the efficiency in terms of utilizing the data. (The efficiency will of course be higher in the center of the F O V than at unit volume was oi = 4.24 x 103< cm 3 . The total number of detected events 2 Chapter 5. Results and Conclusions 70 Cent ra l Line Prof i les — Low Sta t i s t i cs Sphere -20 -15 -10 -5 0 5 10 15 x (cm) Figure 5.9: A central line parallel to the a?-axis, through the initial low-statistics re-construction of a sphere of radius 4cm. \z\ « H where it will be equal to the transaxial efficiency as discussed in Chapter 1.2). The reconstruction of the crude image from the transaxial events was obtained using Equation 3.53. The low statistics reconstruction of the sphere exhibits no major artefacts. Line profiles through the three central axes of the sphere, parallel to the coordinate axes directions, are shown in Figures 5.9, 5.10 and 5.11, and are overlayed in Figure 5.12. These figures show the reconstruction has very little noise outside the sphere. The image bias was determined by averaging the pixels in the zero activity regions, a Chapter 5. Results and Conclusions 71 Cent ra l Line Prof i les — Low Sta t i s t i cs Sphere ' -1 1 1 1 1 1 1 r - 2 0 - 1 5 - 1 0 - 5 0 5 10 15 y ( c m ) Figure 5.10: As in Figure 5.9 with a central line parallel to the y-axis. Chapter 5. Results and Conclusions 72 Centra l Line Prof i les - Low Sta t i s t i cs Sphere - 1 5 - 1 0 - 5 0 5 10 15 z ( c m ) Figure 5.11: A s in Figure 5.9 with a central line parallel to the z-axis. Chapter 5. Results and Conclusions 73 Cent ra l Line Prof i les - Low Sta t i s t i cs Sphere - 2 0 - 1 5 - 1 0 - 5 0 5 1 0 1 5 Posi t ion (cm) Figure 5.12: All of the line profiles in the previous three figures are superimposed in this figure. Chapter 5. Results and Conclusions 74 convolution length or more away from edge of the sphere. This bias was then subtracted from every point in the image. The projection of the crude image into the incomplete projections was carried out concurrently with the reconstruction of the image according to Equation 3.41. 
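A minimal sketch of the bias subtraction described above, assuming the object support is known (as it is for these phantoms): voxels more than a chosen margin outside the support are averaged and the mean is subtracted from the whole volume. The margin, the SciPy dilation and the function name are illustrative choices, not the thesis's procedure.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def subtract_dc_bias(image, support_mask, margin_voxels):
    """Subtract the DC bias estimated from zero-activity voxels.

    image         : 3D reconstructed volume.
    support_mask  : boolean 3D array, True inside the known object support.
    margin_voxels : exclusion margin (in voxels) around the support, of order
                    one convolution length, so that filter tails are not sampled.
    """
    excluded = binary_dilation(support_mask, iterations=margin_voxels)
    background = image[~excluded]
    if background.size == 0:
        raise ValueError("no background voxels left to estimate the bias")
    bias = background.mean()
    return image - bias, bias

if __name__ == "__main__":
    img = np.full((32, 32, 32), 0.05)                 # flat bias everywhere
    zz, yy, xx = np.mgrid[:32, :32, :32]
    sphere = (xx - 16) ** 2 + (yy - 16) ** 2 + (zz - 16) ** 2 < 8 ** 2
    img[sphere] += 1.0                                # unit-activity sphere on top of the bias
    corrected, bias = subtract_dc_bias(img, sphere, margin_voxels=4)
    print("estimated bias:", round(bias, 3))
```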
The reconstruction only used the projections in the central 5 of 7 bins in 9T. The most extreme bins in 9T were excluded because they contain very little statistical information and add another Nv set of projections to process, increasing reconstruction time and causing additional smoothing of the image. Line profiles obtained through the same x and z axes as those for the low-statistics sphere are shown in Figures 5.13, 5.14, and 5.15 for the iterated reconstruction. These iterated image profiles have a significantly higher artefact content than those of the initial images. The standard deviation was computed for both images as a per-centage of the mean over the central 3cm radius of the sphere. The standard deviation for the low-statistics image was computed to be 1.7% and was 5.6% for the iterated image. The artefacts created by the iteration dominate any statistical signal-to-noise ratio improvement produced by the increased statistical content of the images. The x — 3cm plane of both reconstructed spheres are shown in Figures 5.16, and 5.17. These surface plots show the crest that forms in the iterated image due to the projection step. The artefacts in the iterated image are due to the following factors: 1. The principle error is due to the finite angular bin size in ipT and 9T. This gives rise to an overrun of measured data into the ellipses defined by Equation 3.54. This overrun adds to the projected value in the ellipses in a manner that cannot be accurately removed. The result is a higher reconstructed activity away from the centre of the sphere. Attempts to correct this error might include altering the shape of the ellipses, by using a fitting procedure, so as not to allow any overlap, Chapter 5. Results and Conclusions 75 Centra l Line Prof i les — Iterated Sphere - 2 0 - 1 5 - 1 0 - 5 0 5 1 0 1 5 x ( c m ) F i g u r e 5 .13: A c e n t r a l l i ne p a r a l l e l t o the x - a x i s t h r o u g h the i t e r a t e d r e c o n s t r u c t i o n of a sphere o f r a d i u s 4 c m . Chapter 5. Results and Conclusions 76 co Centra l Line Prof i les — Iterated Sphere 4 0 - ^ - J i i i i i L 3 0 § 2 0 a> 1 10 H a) 0 1 0 x = 3 c m y = 3 c m - 1 5 - 1 0 - 5 0 ( c m ) 5 10 1 5 Figure 5.14: As in Figure 5.13, with a central line parallel to the z-axis. This profile shows a crest that arises due to the projection. Chapter 5. Results and Conclusions 77 F i g u r e 5.15: T h e two p r e v i o u s l i ne prof i les are s h o w n toge the r i n th i s figure. Chapter 5. Results and Conclusions 78 Figure 5.16: Surface plot of the central slice of the low-statistics sphere. Chapter 5. Results and Conclusions 79 Figure 5.17: Surface plot of the central slice of the iterated sphere showing the formation of a ridge-like artefact. Chapter 5. Results and Conclusions 80 or by list-mode subtraction, during histogramming, of the measured data that falls in the theoretically unmeasured region of the ST plane. As the events are actually misplaced from their true coordinates by the histogram bin size, neither of these approaches is an ideal solution. The amount of the overlap will become smaller as the bin sizes are reduced in size. For the simulations presented in this thesis, the bin sizes were ultimately determined by the memory limitations in the histogram computer, a microVAX-II. The available physical memory size was 4 Mbytes and it determined the histogram dimensions according to Equation 4.65. 
When implemented on a computer with greater resources, the projection artefact problem can be reduced by using a larger histogram space. 2. The assumptions that went into the calculation of the scale factor may not be accurate. The scale constant is correct only if the image noise and artefacts outside the crude image are near zero. The approximation that any DC bias can be corrected by subtracting a constant may be too coarse. Some modelling of the noise as a function of position in the FOV may be required. 3. There is also an effect due to the step size on which the #(xT; T) are evaluated within the elliptical regions. The grid in lx and ly used to evaluate the line integrals in the ellipses was a rectangular one. As such, a finer spacing of the grid near the edges of the ellipses, with a corresponding modification of the scale constant, may be required if the shape of the projected area is to be properly sampled. It may be possible to see improved statistical signal-to-noise ratio in the iterated image when compared to the low-statistics image, even in the presence of added artefacts, Chapter 5. Results and Conclusions 81 but only if the original data set has a high statistical error due to a small total number of events. The reconstruction of the sphere constitutes a high-statistics image. For the chosen histogram dimensions of Np = 47, Ni = Nv = 72 and backprojecting using linear interpolation at A^3 = 6.1 x 105 image positions, the C P U time required for the formation of the intitial image on a V A X 8650 is 14.7 minutes. The reconstruction of the iterated image using 5 of the 9T bins requires 160 min-utes, including the projection step. The projection takes approximately 15% of the reconstruction time. Methods exist for optimizing the backprojection[48,12], but these have not been implemented here. 5.1.3 The Iteration and Image Resolution An extended ellipsoidal phantom was used to examine the effect of the iteration tech-nique on the resolution of the reconstructed image. The phantom consisted of a large, offset ellipsid, with major axes, a = 6cm, b = 3cm, c = 7cm and its center at x 0 = (1,4,2)cm. The phantom contained 10 small spheres of radius R = 0.75cm with one half of the unit activity of the ellipse. A line profile through the centers of three of the cold spheres in the z = 2cm plane of the ellipse is shown in Figure 5.18 for the reconstruction of the crude image for two different filters: the Hanning windowed object space filter, a^ (p, £, 0r) of Equation 4.79 with three cut-off frequencies, the maximum permissible vc = vN, vc = .9§vN and vc — .80 N^, and the second difference filter, L~xas<i of Equation 4.71. The second difference filter produces the deepest minima at the position of the cold spheres, but also has the greatest noise outside the large ellipse. The rectangular profile shows the actual activity distribution in that profile, (z = 2cm, y = 3.25cm), corresponding to Chapter 5. Results and Conclusions 82 Reso lu t i on f o r the L o w — S t a t i s t i c s . Image x (cm) Figure 5.18: Reconstruction of the ellipsoidal cold-spot phantom from transaxial events with the second difference filter and with the Hanning window using different cutoff frequencies vc. the activity in the phantom. The same line profiles are also shown in Figure 5.19 for the iterated image projecting from each of the differently filtered crude images. The filter form used in the final stage of reconstruction for all images was the Hanning windowed filter a^p, £, 9T) with vc = vN. 
There is very little difference between the profiles of the iterated images. These figures show that the resolution of the final image is being dominated by smoothing that occurs as a result of the backprojection of the projections filtered with the final filter, rather than the filtering used in the initial crude image. The cold spheres are coarsely Chapter 5. Results and Conclusions 83 Reso lu t i on f o r the I terated Image x (cm) Figure 5.19: Reconstruction of the ellipsoidal cold-spot phantom from all detected events using a single iteration step. The four different crude images produce essen-tially the same final image. The reconstruction is dominated by the smoothing in the projection and backprojection steps. Chapter 5. Results and Conclusions 84 sampled in the projections with only 2 or 3 sample points across the their diameters. As such, the oblique backprojection at different orientations, r , strongly smooths these small features. There will also be a smoothing effect due to the integration that occurs as part of the projection step along with its tri-linear interpolation in the image volume. Further work needs to be done to examine the effect of the histogram bin size and the interpolation techniques used on the loss of resolution in the iteration step with a further optimized projection technique. Density plots are presented to show the level of noise in the images and the difference between the resolution of the sharpest low-statistics image and the iterated images. Figure 5.20 is the image at the z = 2cm plane of the extended ellipsoid reconstructed from transaxial events using the second difference filter. For comparison, Figure 5.21 shows the result of iterating the three dimensional volume image for the same slice shown in Figure 5.20. The result is a much smoother image, (the Hanning windowed filter with a cut-off frequency of uN was used), with less noise outside the phantom and less mottle within it. This smoothing is largely due to the interpolation and filtering operations, not to the improvement in the statistical signal-to-noise ratio. 5.2 Future Work 5.2.1 Three Dimensional Filtering The recovery operator of Equation 3.38 is the backproject-then-filter implementation of the inversion of the three dimensional Radon Transform. The backprojected image, ^^(x) °f Equation 2.18 is produced by the backprojection of Equation 4.84, or by Equation 2.17 where f(p, n) is given by Equation 3.46, and is corrected in image space by the application of the three dimensional Laplacian. Chapter 5. Results and Conclusions 85 L o w - S t a t i s t i c s R e c o n s t r u c t i o n —r~ ~1 i i r - 1 0 - 5 0 5 10 x (cm) Figure 5.20: A two dimensional section (z =2) of the ellipsoidal phantom reconstructed using only transaxial events and the second difference filter. The fine scale noise is due to the dithering algorithm used to plot the figure. Chapter 5. Results and Conclusions I t e r a t e d R e c o n s t r u c t i o n . . . I . . . . , „ . i„ ... i .. j . 5 -3 o -- 5 -- 1 0 -i i i i i ' - 1 0 - 5 0 5 10 x ( c m ) Figure 5.21: The same two dimensional section of the ellipsoidal phantom as in ure 5.20, reconstructed with all detected events by iteration. Chapter 5. Results and Conclusions 87 This differentiation can be performed by a three dimensional convolution opera-tion in the image space with a windowed object space filter which has an Fourier Transform of the form in Equation 4.68. 
A three dimensional convolution is generally a computationally expensive procedure, requiring on the order of M_f(3D) ~ N_v^3 N_f^3 multiplications. For large N_f, M_f(3D) becomes prohibitively large. This is one of the reasons for using FBP in analytic reconstruction algorithms, as convolution filtering of the lower dimensional projections is usually faster. In the Radon Transform inversion the filter length N_f is smaller than with the X-ray Transform filters. While FFT methods can be used to speed the three dimensional convolution, it is also possible to simplify the convolution in image space by an appropriate choice of apodising window W(ν) in Equation 4.68.

Taking a lesson from two dimensional image processing, the edge detection operator of Marr and Hildreth[36] employs the two dimensional Gaussian to regularize the noise enhancing differentiation of the Laplacian operator. The unique features of the Gaussian, which is its own Fourier Transform, are that it is the function that is most compact in both the Fourier and image domains, that it is rotationally symmetric, and that it can be written as a sum of separable functions. Separability allows a saving in the number of multiplications used in the discrete convolution. Both the Gaussian and the Laplacian are linear operators and may be applied in any order to the image, or combined for use as a single operator. These properties also apply to the three dimensional Gaussian, a fact that has been exploited by Bowmans et al.[52] in their work on three dimensional image segmentation in MRI. The three dimensional convolution can be reduced to 8 separate one dimensional convolutions (applied simultaneously to the N_v^2 elements of the projection of the N_v^3 image positions in the x, y and z directions), reducing the number of multiplications required to M_f(G) ~ 8 N_v^3 N_f. (Or, by approximating the Laplacian of the Gaussian by the difference of Gaussians, just 6 one dimensional convolutions need be performed.) This represents the same order of multiplications as the filtering of the histogram h(p, ξ; τ̂) enumerated in Equation 4.85, and about N_ξ times that required for the filtering of the three parameter Radon Transform of Equation 4.86.

The advantage of using the three dimensional post-backprojection filtering of the image when comparing the effects of filter cut-off frequency on the image quality is that the computationally expensive backprojection need only be performed once. The backprojected image can be quickly filtered with various filters having different ν_c, without the additional loss of accuracy that would occur using image processing techniques to modify a previously filtered image.

5.2.2 Non-iterative Reconstruction Using All Detected Events

The interpretation of the Radon Transform in PVI, that of counting events not as samples of line integrals of the object but as partial statistical samples of plane integrals (which can be described as line integrals in Ξ_τ), is distinct from the X-ray Transform, where events are localized to a point in the plane Ξ_τ, and from the Fourier Transform, where a point in the transform plane has a contribution from every point in Ξ_τ. This property allows a distinct feature in Radon Transform inversion: that of including all measured event lines in the reconstruction regardless of their orientation with respect to the limiting angle ψ_max required for spatial invariance.
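To make that histogramming step concrete, the following subroutine sketches how a single detected event line, specified by a point on the line and the direction angles of τ̂, could be added into the plane integral histogram h(p, ξ; τ̂). It is a simplified, hypothetical version of the inner loop of the project_data routine in Appendix B: the argument names are assumptions, the coordinates follow the conventions of the backproject routine, and the 1/π or 1/L detector efficiency weighting discussed in the following paragraphs is assumed to be applied later, at the filtering stage, as in the lfncn routine.

      subroutine binevent(xt,yt,zt,cphi,sphi,cth,sth,np,nxi,dpinv,h)
C A minimal, hypothetical sketch (assumed argument names, not from
C Appendix B) of the Radon histogramming step: one detected event
C line, given by a point (xt,yt,zt) in cm on the line and the
C direction angles of tau (cphi=cos(phi), sphi=sin(phi),
C cth=cos(theta), sth=sin(theta)), is added into every plane integral
C h(p,xi) of the XI-tau plane that contains it.  dpinv converts cm to
C histogram bins.  The 1/pi or 1/L detector efficiency weighting is
C assumed to be applied later, as in the lfncn routine.
      implicit none
      integer np,nxi,xi,ip
      real xt,yt,zt,cphi,sphi,cth,sth,dpinv,h(-np:np,0:nxi-1)
      real pi,dxi,cxi,sxi,lx,ly,p,dp
      pi = acos(-1.0)
      dxi = pi/float(nxi)
C coordinates of the event point in the plane transverse to tau,
C following the convention of the backproject routine
      lx = -xt*sphi + yt*cphi
      ly = -xt*cth*cphi - yt*cth*sphi + zt*sth
      do xi = 0, nxi-1
         cxi = cos(float(xi)*dxi)
         sxi = sin(float(xi)*dxi)
C        signed distance of the plane (xi,tau) from the origin, in bins
         p = (lx*cxi + ly*sxi)*dpinv
         if (abs(p) .le. float(np-1)) then
            if (p .lt. 0.0) then
               ip = int(p) - 1
            else
               ip = int(p)
            end if
            dp = p - float(ip)
C           linear interpolation of the unit event weight in p
            h(ip,xi)   = h(ip,xi)   + (1.0 - dp)
            h(ip+1,xi) = h(ip+1,xi) + dp
         end if
      end do
      return
      end

Histogrammed this way, an oblique event contributes to every plane that contains it, including the fully contained planes described below.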
All events can be histogrammed so as to increase the statistical accuracy of a set of plane integrals prior to inversion. Any event, regardless of its orientation, belongs to a subset of planes of f(p, n̂) that are entirely contained within the detector. The events in these planes can be histogrammed with a weight of 1/π, which reflects the statistical averaging over a unit semi-circle as described for the spherical detector in section 3.2. As such, it is possible to decrease the statistical noise of the reconstructed image by inverting a Radon Transform which contains all events.

The interpretation of the Orlov L function in Equation 4.84 as the detector efficiency for measuring a given plane integral of the object allows the straightforward extension of the definition of L to include the completely measured line integrals h(p, ξ; τ̂) in the X-ray Transform planes Ξ_τ that do not meet the spatial invariance criterion of Equation 3.39. L_2D now becomes a function of the spatial variable p, modified such that

L(p, θ_n)^{-1} = 1/π    for |θ_n| < ψ_det and p < p_max(θ_n),    (5.90)

where p_max(θ_n) is the limiting value of p for planes with normals having polar angle θ_n that are entirely contained within the detector (Figure 5.22). From the figure, p_max(θ_n) is given by

p_max(θ_n) = | H cos θ_n - R_D |sin θ_n| |.    (5.91)

Otherwise, L_2D^{-1} has the form

L(p, θ_n)^{-1} = 1 / [ 2 arcsin( sin ψ_max / |sin θ_n| ) ]    if |θ_n| > ψ_max,
             = 0    otherwise.    (5.92)

In the above expressions for the spatially varying L function, the polar angle θ_n of the plane orientation vector n̂ can be related to the angles in the Ξ_τ plane by cos θ_n = sin ξ sin θ_τ and sin θ_n = sqrt(1 - (sin ξ sin θ_τ)²), in order to express the modified detector efficiency function as L(p, ξ, θ_τ).

With the above detector efficiency function, essentially all the events are histogrammed into the set of plane integrals that is entirely contained within the detector. This allows the inclusion of all detected events into the reconstructed image without the need for the iteration. The same method can be used for the inversion of extended objects with L_1D and the inversion of the three parameter Radon Transform. After backprojecting the h(p, ξ; τ̂) weighted by the modified L function, the image can be inverted using the three dimensional filtering of the image described in the last section (5.2.1).

Figure 5.22: The determination of the limiting value of p, p_max, for any plane orientation such that the plane is entirely contained within the detector. The large rectangle represents the cross-section of a cylindrical detector. Intersecting the detector at its rim is a plane, seen edge-on, with normal vector n̂.

Note that the number of plane integrals of f(p, n̂) with enhanced statistics depends on the detector acceptance angle ψ_det. The statistical improvement in the reconstructed image will increase as the detector ring acceptance increases, even if an extended object with ψ_max = 0 is being imaged.

A similar method was employed by Clack et al.[14] following Cho et al.[13]. In the method of Clack, a weighted two dimensional reconstruction of planes entirely contained within the detector was performed. The set of plane integrals that were reconstructed was a limited set of the Restricted Radon Transform[29]. The Restricted Radon Transform is that set of plane integrals derived from the X-ray Transform by taking line integrals parallel to a fixed direction in the projection plane. Clack et al.
use only two directons <pn to define these planes in their planar detector system and backprojected all events with weights corresponding to the two dimensional reconstruction of those planes. The convolution filtering of the two dimensional slices occurs across the line in-tegrals within a given slice, whereas for the three dimensional Radon Transform, the line integrals comprising a plane integral added together and the convolution filtering occurs across these planes. The Clack algorithm results in the need for a singular weighting function which requires the use of damping functions to correct it. The three dimensional Radon Transform method has a nonsingular weight function 1/L for all event orientations. Given that values from low-statistics or crude images are projected and added into the incomplete projections, the crude image with the least variance will produce the iterated image with the highest signal-to-noise ratio. Then an image formed from Chapter 5. Results and Conclusions 92 the inversion of Radon Transform including all oblique events is the best possible low statistics image. In this way, the three dimensional Radon Transform offers better statistical noise performance than other image reconstruction methods. This averaging does not necessarily preclude the use of the iteration technique which is implemented to include all events in all planes- or lines-of-response to which they would be added if the detector were shift invariant. Whether it is necessary to use both techniques will depend on the noise propagation characteristics of these techniques. It may be that the image formed with the non-iterative technique using all oblique events is statistically the best possible image. 5 .3 Conclusions A new algorithm for three dimensional image reconstruction from projections in PVI has been developed and tested using Monte Carlo simulations. The algorithm is an analytic technique based on the inversion of the three dimensional Radon Transform which employs object space convolution methods rather than Fourier space multipli-cations using the F F T . This means that the inaccuracy in using FFT methods can be avoided. The Radon Transform algorithm has been shown to be theoretically equiv-alent to Fourier Transform methods in current use. This algorithm for the first time extends the three dimensional Radon Transform inversion to the practical truncated detector geometries used in PET scanners. The algorithm is based on the Orlov recovery operator which has been implemented previously as the inversion of the X-ray Transform [34]. In the Radon Transform in-terpretation of the Orlov operator, there is a simplification in the mathematics of the filter used for inversion over that of the extended singular filter used in the case of the X-ray Transform. The analysis leads to the interpretation of the angular weighting Chapter 5. Results and Conclusions 93 function L found in the X-ray Transform and Fourier Transform inversion methods as the detector efficiency for measuring a given plane integral of the Radon Transform. The results of the simulations of both extended and central sources have shown that the algorithm gives artefact free reconstructions for first pass image reconstruc-tions using events that obey the spatial invariance criterion. 
Problems were found with increased levels of artefact in the iterated reconstruction that uses all of the detected events, but these are shown to be due to inaccuracy in the implementation of the iter-ation step, not the reconstruction algorithm itself. Further work is needed to optimize the iteration to reduce the level of artefact and some approaches have been described to achieve this goal. The results of comparison of the computation involved in the various stages of the reconstruction show that the Radon Transform technique, compared to the X-ray Transform technique, essentially moves some of the computation in the filtering step ahead into the histogram formation. This can be done to various degrees within the Radon Transform inversion technique depending on whether a three parameter Radon Transform is used (as in Dykstra et al.[24]) or a four parameter Radon Transform is histogrammed (described in this thesis). This may be advantageous in a flexible PET system where an existing high speed processor designated to image reconstruction can be programmed to perform the histogramming task as well, making optimum use of the processor and improving overall reconstruction time. A method has been proposed (but not implemented) which includes all of the de-tected events in a single pass reconstruction without the need for an iterative step. Whether this technique will replace the iteration procedure will depend on the statis-tical noise properties of both methods, which have yet to be studied. The advantage of a single pass reconstruction is the savings in computing time as all projections need only be backprojected once and the need for the projection calculations is eliminated. Chapter 5. Results and Conclusions Th is is believed to be the most promising original aspect of this thesis. Bibliography [1] M Bergstrom, L Eriksson, C Bohm, G Blomquist, and J Litton. Corrections for scattered radiation in a ring detector positron camera by integral transformation of the projections. Journal of Computer Assisted Tomography, 7, 1986. [2] R N Bracewell. The Fourier Transform and its Applications. McGraw-Hill, second edition, 1978. [3] R N Bracewell and A C Riddle. Inversion of fan-beam scans in radio astronomy. Astophysics Journal, 150:427-434, 1967. [4] R A Brooks and G DiChiro. Principles of computer assisted tomography (CAT) in radiographic and radioisotopic imaging. Physics in Medicine and Biology, 32(5):689-732, 1976. [5] T F Budinger and G T Gulberg. Three-dimensional reconstruction in nuclear medicine emission imaging. IEEE Transactions on Nuclear Science, NS-21(3):2-20, 1974. [6] T F Budinger, G T Gullberg, and R H Huesman. Emission Computed Tomography, volume 32 of Topics in Applied Physics, pages 147-245. Springer-Verlag, 1979. G T Herman, editor. [7] M E Casey and E J Hoffman. Quantitation in positron emission tomography:7. A technique to reduce noise in accidental coincidence measurements and coincidence efficiency calibration. Journal of Computer Assisted Tomography, 10(5):845-850, 1986. [8] M E Casey and R Nutt. A multi-crystal two dimensional BGO detector sys-tem for positron emission tomography. IEEE Transactions on Nuclear Science, 335(l):460-463, 1986. [9] Y Censor. Finite series-expansion reconstruction methods. Proceedings of the IEEE, 71(3):409-419, 1983. [10] D A Chesler and S J Riederer. Ripple suppression during reconstruction in trans-verse tomography. Physics in Medicine and Biology, 20(4):632-636, 1975. 95 Bibliography 96 [11] M Y Chiu, H H Barrett, and R G Simpson. 
Three-dimensional image recon-struction from planar projections. Journal of the Optical Society of America, 70(7):755-762, 1980. [12] Z H Cho and C M Chen. Incremental algorithm - A new fast backprojection scheme for parallel ray geometries. IEEE Transactions on Medical Imaging, MI-9(2):207-217, 1984. [13] Z H Cho, J B Ra, and S K Hilal. True three-dimensional reconstruction (TTR) -Application of algorithm toward full utilization of oblique rays. IEEE Transactions on Medical Imaging, MI-2(1):6-18, 1983. [14] R Clack, D Townsend, and M Defrise. An algorithm for three-dimensional recon-struction incorporating cross-plane rays. IEEE Transactions on Medical Imaging, MI-8(l):32-42, March 1989. [15] J G Colsher. Fully three-dimensional positron emission tomography. Physics in Medicine and Biology, 25(1):103-105, 1980. [16] R A Crowthier, D J DeRosier, and A Klug. The reconstruction of a three-dimensional structure from projections and its application to electron microscopy. Proceedings of the Royal Society of London, Series A, 317:319-340, 1970. [17] M Dahlbom, L Eriksson, G Rosenqvist, and C Bohm. A study of the possibility of using multi-slice PET systems for 3D imaging. IEEE Transactions on Nuclear Science, NS-36(1):1066-1071, 1989. [18] M E Daube-Witherspoon and G Muehllehner. Treatment of axial data in three-dimensional PET. Journal of Nuclear Medicine, 28(11):1717-1724, 1987. [19] S R Deans. The Radon Transform and some of its applications. John Wiley and Sons, 1983. [20] M Defrise, S Kuijk, and F Deconinick. A new three dimensional reconstruction method for positron cameras using plane detectors. Physics in Medicine and Bi-ology, 33(1):43-51, 1988. [21] M Defrise, D W Townsend, and R Clack. Three-dimensional image reconstruction from complete projections. Physics in Medicine and Biology, 34(5):573-587, 1989. [22] M Defrise, D W Townsend, and F Deconinck. Statistical noise in three-dimensional positron tomography. Physics in Medicine and Biology, 35(1):131-138, 1990. Bibliography 97 [23] R V Denton, B Friedlander, and A J Rockmore. Direct three-dimensional image reconstruction from divergent rays. IEEE Transactions on Nuclear Science, NS-26(5):4695-4703, 1979. [24] C J Dykstra, R Harrop, J G Rogers, and P E Kinahan. The Radon Transform and Positron Volume Imaging. To appear in: IEEE Transactions on Medical Imaging, 1990. [25] R W Hamming. Digital Filters. Prentice Hall, 1983. [26] G T Herman. Image Reconstruction from Projections. Academic Press, 1980. [27] B K P Horn. Density reconstruction using arbitrary ray-sampling schemes. Pro-ceedings of the IEEE, 66(5):551-562, 1978. [28] B K P Horn. Robot Vision. McGraw-Hill Book Co., 1986. [29] A Imiya and H Ogawa. Direct reconstruction of three dimensional image from its line integrals. In SPIE vol.671: Physics and Engineering of Computerized Multidimensional Imaging and Procesing, pages 42-49, 1986. [30] J D Jackson. Classical Electrodynamics, chapter 1. John Wiley and Sons, 1975. [31] W F Jones, L Byars, and M Casey. Design of a super fast three-dimensional pro-jection system for positron emission tomography. IEEE Transactions on Nuclear Science, NS-37(2):800-805, 1990. [32] P E Kinahan. Analytic three dimensional image reconstruction from projections. Master's thesis, University of British Columbia, 1988. [33] P E Kinahan and J G Rogers. Analytic 3D image reconstruction using all detected events. IEEE Transactions on Nuclear Science, 36(l):964-968, 1989. [34] P E Kinahan, J G Rogers, R Harrop, and R R Johnson. 
Three-dimensional image reconstruction in object space. IEEE Transactions on Nuclear Science, 35(1):635-638, 1988.

[35] J Llacer, A Veklerov, and E Veklerov. Towards a practical implementation of the MLE algorithm for positron emission tomography. IEEE Transactions on Nuclear Science, NS-33(1):468-477, 1986.

[36] D Marr and H Hildreth. Theory of edge detection. Proceedings of the Royal Society of London, Series B, 207:187-217, 1980.

[37] R B Marr. On the reconstruction of a function on a circular domain from a sampling of its line integrals. Journal of Mathematical Analysis and Applications, 45:357-374, 1974.

[38] J A McIntyre. Computer assisted tomography without a computer. IEEE Transactions on Nuclear Science, NS-28(1):171-173, 1981.

[39] R M Mersereau. Direct fourier transform techniques in 3-D image reconstruction. Computers in Medicine and Biology, 6:247-258, 1976.

[40] G Muehllehner, J S Karp, D A Mankoff, D Beerbohm, and C E Ordonez. Design and performance of a new positron emission tomograph. IEEE Transactions on Nuclear Science, 35(1):670-674, 1988.

[41] J Nunez and J Llacer. A fast Bayesian reconstruction algorithm for emission tomography with entropy prior converging to feasible images. IEEE Transactions on Medical Imaging, MI-9(2):159-171, 1990.

[42] S S Orlov. Theory of three dimensional reconstruction. I. Conditions for a complete set of projections. Soviet Physics - Crystallography, 20(2):312-314, 1976.

[43] S S Orlov. Theory of three dimensional reconstruction. II. The recovery operator. Soviet Physics - Crystallography, 20(4):429-433, 1976.

[44] J A Parker, R V Kenyon, and D E Troxel. Comparison of interpolating methods for image resampling. IEEE Transactions on Medical Imaging, MI-2(1):31-39, 1983.

[45] L M Pecora. 3D tomographic reconstruction from 2D data using spherical harmonics. IEEE Transactions on Nuclear Science, NS-34(2):642-650, 1987.

[46] N J Pelc. A Generalized Filtered Backprojection Algorithm for Three Dimensional Reconstruction. PhD thesis, Harvard School of Public Health, 1979.

[47] N J Pelc and D A Chesler. Utilization of cross-plane rays for three-dimensional reconstruction by filtered back-projection. Journal of Computer Assisted Tomography, 3(3):385-395, 1979.

[48] T M Peters. Algorithms for fast back- and re-projection in computed tomography. IEEE Transactions on Nuclear Science, NS-28(4):3641-3650, 1981.

[49] W H Press, B P Flannery, S A Teukolsky, and W T Vetterling. Numerical Recipes. Cambridge University Press, 1986.

[50] J Radon. On the determination of functions from their integral values along certain manifolds. IEEE Transactions on Medical Imaging, MI-5(5):170-176, 1986. This is a translation by P C Parks of the original German text by Johann Radon published in Berichte der Sachsischen Akadamie der Wissenschaft, Vol. 69, pp. 262-277, 1917. (Session of April 30, 1917).

[51] G N Ramachandran and A V Lakshminarayanan. Three-dimensional reconstruction from radiographs and electron micrographs: Applications of convolutions instead of fourier transforms. Proceedings of the National Academy of Science of USA, 68(9):2236-2240, 1971.

[52] J G Rogers. Private communication.

[53] J G Rogers, R Harrop, and P E Kinahan. The theory of three-dimensional image reconstruction for PET. IEEE Transactions on Medical Imaging, MI-6(3):239-243, September 1987.

[54] J G Rogers, R Harrop, P E Kinahan, N A Wilkinson, P W Doherty, and D P Saylor. Conceptual design of a whole body PET machine. IEEE Transactions on Nuclear Science, 35(1):680-684, 1988.
[55] J G Rogers, M Stazyk, R Harrop, C J Dykstra, J S Barney, M S Atkins, and P E Kinahan. Toward the design of a positron volume imaging camera. IEEE Transactions on Nuclear Science, 37(2):789-794, 1990. [56] S W Rowland. Computer implementation of image reconstruction formulas, vol-ume 32 of Topics in Applied Physics, pages 7-79. Springer-Verlag, 1979. G T Herman, editor. [57] B Schorr, D Townsend, and R Clack. A general method for three-dimensional filter computation. Physics in Medicine and Biology, 28(9):1009-1019, 1983. [58] L A Shepp. Computerized tomography and nuclear magnetic resonance. Journal of Computer Assisted Tomography, 4(1):94-107, 1980. [59] L A Shepp and B F Logan. The Fourier reconstruction of a head section. IEEE Transactions on Nuclear Science, NS-21:21-43, 1974. [60] L A Shepp and Y Vardi. Maximum likely hood reconstruction for emission tomog-raphy. IEEE Transactions on Medical Imaging, MI-1(2):113-122, 1982. [61] P R Smith, T M Peters, and R H T Bates. Image reconstruction from finite numbers of projections. Journal of Physics, Series A, 6:361-382, 1973. Bibliography 100 [62] D C Solmon. The X-ray transform. Journal of Mathematical Analysis and Appli-cations, 56:61-83, 1983. [63] T J Spinks, T Jones, M C Gilardi, and J D Heather. Physical performance of the latest generation of commercial positron scanners. IEEE Transactions on Nuclear Science, 35(l):721-725, 1988. [64] C W Stearns, C A Burnham, D A Chesler, and G L Brownell. Simulation stud-ies for cylindrical positron tomography. IEEE Transactions on Nuclear Science, 35(1):708-711, 1988. [65] C J Thompson. The effect of collimation on scatter fraction in multi-slice PET. IEEE Transactions on Nuclear Science, 35(l):598-602, 1988. [66] C J Thompson. The effect of collimation on singles rate in multi-slice PET. IEEE Transactions on Nuclear Science, NS-36(1):1072-1077, 1988. [67] D W Townsend, T Spinks, T Jones, A Geissbuhler, M Defrise, M S Gilardi, and J Heather. Three dimensional reconstruction of PET data from a multi-ring cam-era. IEEE Transactions on Nuclear Science, NS-36(1):1056-1065, 1989. [68] B K Vainshtein. Synthesis of projecting functions. Soviet Physics - Crystallogra-phy, 16(5):66-69, 1971. [69] B K Vainshtein and S S Orlov. Theory of the recovery of functions from their projections. Soviet Physics - Crystallography, 17(2):213-216, 1972. [70] E Veklerov and J LLacer. Stopping rule for the MLE algorithm based on statisti-cal hypothesis testing. IEEE Transactions on Medical Imaging, MI-6(4):313-319, 1987. Appendix A Relation to Fourier Transform Methods Kinahan et al. [34] have shown the relationship between the X-ray Transform convo-lution filter derived in the Orlov recovery operator [43] and the frequency space filter derived by Colsher [15], Schorr et al. [57], Pelc et al.[46], and Defrise et al.[21]. The following relates the three dimensional Radon Transform interpretation of the Orlov result to the Fourier Transform method. Consider the recovery operator, Equation 3.41, to be of the form /(x) = J dGTg*(xT;r) (A.93) G where Ju^)Wh(»i-f) (A'94) Denning the one dimensional Fourier Transform operator, T\D and its inverse by FID(U(P)) = [°° dpu(p)e-I2™P (A.95) J—oo ^D(U(U))= [°° dvU(u)eI2NPU (A.96) J—oo Introducing the identity operator I = J-^J'XD-, M Equation A.94, then <7*(xT;f-) can be rewritten 101 Appendix A. Relation to Fourier Transform Methods 102 using the relationship for the Fourier Transform of a derivative Equation 2.8[2]. 
Defining v = vxi and with p = xT • ii, the product pu = v • xT and Equation A.97 can be written = ^ { i ( n « ) ; f ) ' F l D ' ' ( p , f l ? ) } using the polar form of the two dimensional inverse Fourier Transform. It can be shown [19] that the one dimensional Fourier Transform of the n-dimensional, (n > 1), Radon Transform with respect to the distance variable p is the n-dimensional Fourier Transform of the function. Then FID Kp, i; f) = T2D g(xT; f). (A.98) And ?2Dfif*(xT;f) = T ( " c . g(xT;f) (A.99) The function v/L is the Colsher Fourier space filter. The angular component of the filter is seen to be the detector efficiency function for measuring a given plane integral of the Radon Transform. Appendix B Program Listings The FORTRAN code for the reconstruction algorithm is presented in this appendix. The version presented is the one which iterates the low statistics image to produce a final image which uses all detected events in forming the image according to Equation 3.41. The code used to produce the low-statistics image from either Equation 3.53, if V'max = 0, or Equation 3.41, otherwise, can be obtained from the given code by modifying the L function and removing the call to the projection routine. 103 Appendix B. Program Listings 104 program recon_3D C The main c a l l i n g routine l o r the reconstruction. I t sets up the array space C and c a l l s the reconloop subroutine, and then writes the f i n a l image. impl i c t integer*4 (a-z) l o g i c a l exists integer np,nxi,ntheta,nphi,nix,nly,nc,nop,nx,nz,nf,m/4/, + tmax,rtmax,t sr/1/,Hi,Ri,RFi,Npiby2, + dsize,daddr,spe_dsize,spe_daddr, + 1s i z e,laddr,caddr,cs i z e,pdaddr,pds i z e, + l s _ i s ize,ls_iaddr,isize,iaddr,th_size,cth_addr,sth_addr, + xi_size,cxi_addr,sxi.addr,ph_size,cph_addr,sph_addr, + status,wordsize/4/,int_wordsize/2/,lunl/l/, + lib$get_vm,lib$free_vm,lib$init_timer,lib$show_timer + lib$free_timer,timer_addr/0/ integer f ile_origin,max_sn,spe_number,spe_length,spe_type, + spe_word_size,spe_origin r e a l psi_det,filter,fc,L,image_sum, + delta_p_inv,ratio_pv/l.0/,ratio_vs/l.0/, + H/5.4/.R/38.0/.RF character*50 spe_datafile,ls_imagefile.imagefile include 'cspectra.edi' common fi l t e r ( - 3 0 : 3 0 ) /det_parms/H,Hi,Ri,RFi + /ratios/delta_p_inv,ratio_pv,ratio_vs write(*,*) ' data.spe f i l e : ' read(*.'(A50)') spe_datafile inquire(file=spe_datafile,exist=exists) i f ( . n o t . e x i s t s ) c a l l quit('No data f i l e ' ) write(*,*) ' l s _ i m a g e f i l e : ' read(*,'(A50)') l s _ i m a g e f i l e inquire(file=ls_imagefile,exist=exists) i f ( . n o t . e x i s t s ) c a l l quit('No image f i l e ' ) write(*,*) ' number of t r a n s a x i a l events detected:' read(*,*) Npiby2 write(*,*) ' d i s c r e t e sum of Is image:' read(*,*) image_sum write(*,*) ' new image f i l e : ' read(*,'(A50)') imagefile C These routines are not l i s t e d here. They read the dimensions of the C histogram which was generated i n another program (also not included, see C the routine project_data f o r the histogramming code). c a l l open_spectrum(spe_datafile.file_origin,max_sn) spe_number = 2 ! the radon array i s stored i n SN #2 c a l l read_spe_description(spe_number,file_origin,spe_length, + spe_type,spe_word_size,spe_origin) np = (spxdim(2)-l)/2 delta_p_inv = spxscale(2) nxi = spydim(2) nphi = spzdim(2) ntheta = spadim(2) psi_det = atan(H/R) write(*,*) ' f c : ' read(*,*) f c nf = 11 nc = np + nf-1 ncp = nc + 4 Appendix B. Program Listings 105 Hi = int(H*delta_p_inv+float(m)) ! RF = sqrt(float(nc*nc-Hi*Hi)) ! 
nly = int(Hi*cos(psi_det)+RF*sin(psi_det))+2 Hi = int(Hi) RFi = int(RF) nix = RFi+2 ! to allow f o r i n t e r p o l a t i o n beyond plane edge Ri = R*delta_p_inv nz = int(Hi*ratio_pv) ! delta_p/delta_v nx = int(RFi*ratio_pv) write(*,*) ' np,nc,nlx,nly,nx,nz:',np,nc,nlx,nly,nx,nz tmax = (ntheta-l)/2 rtmax = tmax-tsr i s i z e = wordsize*(2*nz+l)*(2*nx+l)**2 status = lib$get_vm(isize,iaddr) i f (.not. status) c a l l lib$signal ( / C v a l(status)) c a l l zeroimage(nx,nz ,%val(iaddr)) write(*,*) ' i s i z e : ' , isize/512.0, ' blocks' l s _ i s i z e = wordsize*(2*nz+l)*(2*nx+l)**2 status = lib$get_vm(ls_isize,ls_iaddr) i f (.not. status) c a l l lib$signal('/,val(status)) write(*,*) ' l s _ i s i z e : ' , ls_isize/512.0, ' blocks' spe_dsize = ((2*np+l)*nxi)*int_wordsize status = lib$get_vm(spe_dsize,spe_daddr) i f ( . n o t . s t a t u s ) c a l l lib$signal('/,val(status)) write(*,*) ' spe_dsize:', spe_dsize/512.0, ' blocks' dsize = ((2*nc+l)*nxi)*wordsize status = lib$get_vm(dsize,daddr) i f ( .not. status) c a l l lib$signal('/,val(status)) write(*,*) ' dsize:', dsize/512.0, ' blocks' pdsize = ((2*(nc+nf-l)+l)*nxi)*wordsize status = lib$get_vm(pdsize,pdaddr) i f (.not. status) c a l l lib$signal('/,val(status)) write(*,*) ' pdsize:', pdsize/512.0, ' blocks' c s i z e = (2*ncp+l)*nxi*wordsize status = lib$get_vm(csize,caddr) i f ( .not.status) c a l l lib$signal('/,val(status) ) write(*,*) ' c s i z e : ' , csize/512.0, ' blocks' I s i z e = nxi*(rtmax+i)*(2*nf+i)*wordsize status = lib$get_vm(lsize,Iaddr) i f ( .not. status) c a l l lib$signal('/,val(status)) x i _ s i z e = nxi*wordsize status = lib$get_vm(xi_size,cxi_addr) i f ( .not.status) c a l l lib$signal('/.val(status)) status = lib$get_vm(xi_size,sxi_addr) i f ( .not.status) c a l l lib$signal('/,val(status)) ph_size = nphi*wordsize status = lib$get_vm(ph_size,cph_addr) i f ( .not. status) c a l l lib$signal('/,val(status)) status = lib$get_vm(ph_size,sph_addr) i f ( .not. status) c a l l lib$signal('/,val(status)) t h _ s i z e = ntheta*wordsize status = lib$get_vm(th_size,cth_addr) i f (.not. status) c a l l lib$signal('/,val(status) ) status = lib$get_vm(th_size,sth_addr) i f ( .not. status) c a l l lib$signal('/,val(status)) Appendix B. Program Listings 106 open(lunl,file=ls_imagefile,status='old', + form='unformatted') c a l l read_image(nx,nz,'/,val(ls_iaddr),lunl) c l o s e ( l u n l ) st atus =1ib$ i n i t _ t imer(timer_addr) i f ( . n o t . s t a t u s ) c a l l lib$signal('/,val(status)) c a l l f i l t e r g e n ( n f , f c ) c a l l reconloop_3D(nx,nz,np,nxi,nphi,ntheta,nix,nly,nf,nc,ncp, + psi_det,delta_p_inv,tmax,tsr,spe_origin, + '/.val(daddr) ,'/,val(spe_daddr) ,'/,val(ls_iaddr), + '/.val(laddr) ,'/.val(caddr) ,*/.val(pdaddr) , + '/,val(cxi_addr) ,'/,val(sxi_addr) ,y.val(cph_addr) ,%val(sph_addr) , + V.val (cth_addr) , 'Aval (sth_addr) , Npiby2, image_sum,*/,val (iaddr) ) type '(a)',' Program s t a t i s t i c s ' status=lib$show_timer(timer_addr) i f ( .not. status) c a l l lib$signal('/.val(status)) status=lib$free_timer(timer_addr) i f (.not. status) c a l l lib$signal('/,val(status) ) open(lunl,file=imagefile,status='new', + form='unformatted') c a l l copy_image(nx,nz,'/,val(iaddr) ,lunl) c l o s e ( l u n l ) status = l i b $ f r e e _ v m ( l s _ i s i z e , l s _ i a d d r ) i f ( .not. status) c a l l lib$signal('/,val(status)) status = lib$free_vm(isize,iaddr) i f ( .not. 
status) c a l l lib$signal('/,val(status)) end subroutine quit(outstring) i m p l i c i t none character*(*) outstring type '(2x,a)', outstring c a l l e x i t end subroutine zeroimage(nx,nz,array) i m p l i c i t none integer nx,nz,i,j,k r e a l array(-nx:nx,-nx:nx,-nz:nz) do k= -nz, nz do j= -nx, nx do i= -nx, nx array(i,j,k)=0.0 enddo enddo enddo end subroutine read_image(nx,nz,array,lun) i m p l i c i t none integer lun.nx.nz,i,j,k r e a l array(-nx:nx,-nx:nx,-nz:nz) read(lun) array return end subroutine copy_image(nx,nz,array,lun) i m p l i c i t none integer lun.nx.nz,i,j,k Appendix B. Program Listings 107 real array(-nx:nx,-nx:nx,-nz:nz) write(lun) array return end Appendix B. Program Listings 108 subroutine f i l t e r g e n ( n f , f c ) C This routine generates the r a d i a l part of the f i l t e r . r e a l r n , p i , f c , k , f i l t e r , i n t e g r a l integer nf,n common f i l t e r ( - 3 0 : 3 0 ) pi=acos(-l.0) i n t e g r a l =0.0 do n = 1-nf, nf-1 rn = f l o a t ( n ) f i l t e r ( n ) = -2.0*(pi**2)*(k(rn,fc) + +(k(rn-1.0/(2.0*fc),fc) + + k(rn+1.0/(2.0*fc),fc))/2.0) i n t e g r a l = i n t e g r a l + f i l t e r ( n ) end do return end r e a l function k(rn,fc) r e a l r n . f c . p i p i = acos(-l.O) i f (rn .eq. 0.0) then k = 2./3.*fc**3 else k = fc/(pi*rn)**2*C0S(2*pi*fc*rn)+ + (fc**2*2*(pi*rn)**2-l)*SIN(2*pi*fc*rn) + /(2*pi**3*(rn)**3) end i f return end Appendix B. Program Listings 109 subroutine reconloop_3D(nx,nz,np,nxi,nphi,ntheta,nix,nly,nf,nc, + ncp,psi_det,delta_p_inv,tmax,tsr,spe_origin, + data,spe_data,ls_image,L,conv,pdata, + csxi,snxi,csphi,snphi,csth,snth, + Npiby2,image_sum,image) C This routine loops through the XRT projections processing a C 2D section of the data at a time. F i r s t the l i m i t s of the C l i n e i n t e g r a l s are determined i n the unmeasured e l l i p s e s . C The 2D data set i s then completed by projection. C The f i l t e r i n g and backprojection routines routines are c a l l e d and C the f i n a l image i s then passed back to recon_3d. i m p l i c i t none integer nx,nz,np,nxi,nphi.ntheta,nix,nly,nf,nc,ncp,tmax,rtmax,tsr, + spe_origin,twod_len,twod_origin, + pps ize,ppaddr,status,pindex,t index, + sl_addr,s_size,done,totpj,pcount,Npiby2, + 1ib$get_vm,int _words ize/2/,words ize/4/ integer*2 spe_data(-np:np,0:(nxi-1)) r e a l data(-nc:nc,0:(nxi-i)), + pdata(-(nc+nf-l):(nc+nf-l),0:(nxi-i)), + ls_image(-nx:nx,-nx:nx,-nz:nz),conv(-ncp:ncp,0:(nxi-1)), + L(-nf:nf,0:(nxi-1),0:tmax),filter,image(-nx:nx,-nx:nx,-nz:nz), + csxi(0:(nxi-1)),snxi(0:(nxi-1)), + csth(-(tmax-tsr):(tmax-tsr)),snth(-(tmax-tsr):(tmax-tsr)), + csphi(0:(nphi-1)),snphi(0:(nphi-1)), + delta_phi,delta_theta,delta_xi,delta_p_inv,sum, + psi_det,psi_max,C,pi,scale,image_sum,f,ratio_pv common fi l t e r ( - 3 0 : 3 0 ) / r a t i o s / r a t i o _ p v p i = acos(-l.O) rtmax = tmax-tsr ppsize = (2*nlx+l)*(2*nly+l)*wordsize status = lib$get_vm(ppsize,ppaddr) i f (.not. 
status) c a l l lib$signal( ,/,val(status) ) s_size = 2*(nlx+l)*(nly+l)*rtmax*wordsize status = lib$get_vm(s_size,sl_addr) i f ( .not.status) c a l l lib$signal('/val(status)) delta_theta = psi_det/float(tmax) C = -(2*psi_det*delta_p_inv**2)/float(4*nphi*nxi*(ntheta-l)) image_sum = image_sum*nphi/ratio_pv**2 scale = Npiby2/image_sum c a l l trigarray(nxi,nphi,rtmax,delta_theta, + csxi,snxi,csphi,snphi,csth,snth) psi_max = psi_det*(ntheta-l-2*tsr)/(ntheta-l) c a l l lfncn(nf,nxi,rtmax,snxi,snth,psi_max,L) twod_len = (2*np+l)*nxi*int_wordsize twod_origin = spe_origin + tsr*nphi*twod_len do tindex = 1, rtmax c a l l proj _limits(nix,nly,rtmax,tindex,snth(tindex), + csth(tindex) ,'/,val(sl_addr) .pcount) type ' ( a , i 4 , a , i 4 ) ' , ' pcount = ', pcount, + ' at tindex = ', tindex end do totpj = (2*rtmax+l)*nphi type '(a,i4,a)',' There are ' , t o t p j , ' projections to do' done = 1 type ' ( a , i 4 ) ' , ' doing projection '.done do tindex = -rtmax, rtmax Appendix B. Program Listings do pindex = 0, nphi-1 type '(a,i4)','+ doing projection '.done done = done + 1 c a l l read_spectrum(twod_origin,twod_len,spe_data) c a l l zerodata(nc,nxi,data) c a l l float_data(np,nc,nxi,spe_data,data) i f ( t index.ne.0)then c a l l proj ect_data(nc,nxi,nlx,nly,nx,nz,rtraax,tindex, + '/,val(sl_addr),scale,csxi,snxi,csphi(pindex) , + snphi(pindex).csth(tindex).snth(tindex),ls_image,data,sum) end i f c a l l zerodata(ncp,nxi,conv) c a l l zerodata(nc+nf-l,nxi,pdata) c a l l f ilterdata(nf,np,nxi,nc,ncp,data,conv,pdata,rtmax, + tindex,L) c a l l zeropparray (nix, nly,V.val (ppaddr) ) c a l l integrate(nxi, nix,nly,nc,ncp, conv, '/.val (ppaddr) , c s x i , snxi) c a l l backproj ect (nix, nly ,'/,val (ppaddr) ,C,nx,nz, image, + csphi(pindex),snphi(pindex), + csth(tindex),snth(tindex)) enddo enddo type '(a)','+ Finished ' w r i t e ( * , * ) ' Sum of projections = ', sum end subroutine zeropparray(nl,n2,array) i m p l i c i t none integer n l , n 2 , i , j r e a l array(-ni:ni,-n2:n2) do j=-n2,n2 do i= - n l , n l array(i,j)=0.0 enddo enddo end subroutine zerodata(nl,n2,array) i m p l i c i t none integer n l , n 2 , i , j r e a l array(-nl:nl,0:(n2-l)) do j=0,n2-l do i=-nl,nl array(i,j)=0.0 enddo enddo end subroutine float_data(np,nc,nxi,spe_data,data) i m p l i c i t none integer np,nc,nxi,i,j integer*2 spe_data(-np:np,0:(nxi-i)) r e a l data(-nc:nc,0:(nxi-1)) do j =0, nxi-1 do i = -np, np d a t a ( i . j ) = f l o a t ( s p e _ d a t a ( i , j ) ) end do end do return end Appendix B. Program Listings 111 subroutine trigarray(nxi,nphi,tmax,delta_theta, + csxi,snxi,csphi,snphi,csth,snth) i m p l i c i t none integer nxi,nphi,tmax,i r e a l delta_phi,delta_theta,delta_xi,pi,theta, + csxi(0:nxi-l),snxi(0:nxi-1), + csphi(0:nphi-l),snphi(0:nphi-l), + csth(-tmax:tmax),snth(-tmax:tmax) p i = acos(-l.O) d e l t a _ x i = p i / f l o a t ( n x i ) do i = 0, nxi-1 c s x i ( i ) = c o s ( f l o a t ( i ) * d e l t a _ x i ) s n x i ( i ) = s i n ( f l o a t ( i ) * d e l t a _ x i ) end do delta_phi = p i / f l o a t ( n p h i ) do i = 0, nphi-1 cs p h i ( i ) = c o s ( f l o a t ( i ) * d e l t a _ p h i ) snphi(i) = s i n ( f l o a t ( i ) * d e l t a _ p h i ) end do do i = -tmax, tmax theta = p i / 2 . 0 - f l o a t ( i ) * d e l t a _ t h e t a c s t h ( i ) = cos(theta) snth(i) = sin(theta) end do return end Appendix B. Program Listings 112 r e a l function lfncn(nf,nxi,tmax,snxi,snth,psi,L) C This routine returns (1/L)*a_p, L as defined by Orlov. 
i m p l i c i t none integer nf,fi,nxi,tmax,xi,tindex r e a l L(-nf:nf,0:(nxi-1),0:tmax), + L0(0:(nxi-1),0:tmax), + f i l t e r ( - n f : n f ) , + p s i , p i , r a t i o , c s p s i . s n p s i , + snxi(0:(nxi-1)),snth(-tmax:tmax) common f i l t e r p i = acos(-l.O) cs p s i = cos(psi) snpsi = s i n ( p s i ) do tindex = 0, tmax r a t i o = cspsi/snth(tindex) do x i = 0, nxi-1 i f ( a b s ( s n x i ( x i ) ) . I t . r a t i o ) then L0(xi,tindex)=1.0/(2*asin(snpsi)/ + sqrt(1.0-(snxi(xi)*snth(tindex))**2)) else L0(xi,tindex) = 1.0/pi end i f end do end do do tindex = 0, tmax do x i = 0, nxi-1 do f i = -nf, nf L ( f i , x i , t i n d e x ) = f i l t e r ( f i ) * L 0 ( x i , t i n d e x ) end do end do end do return end Appendix B. Program Listings subroutine proj_limits(nlx,nly,rtmax,tindex,snth,csth,s_limits, + pcount) C This routine f i n d s the l i m i t s of the l i n e integrals through C a c y l i n d r i c a l f i e l d of view. i m p l i c i t none in t eger nix,nly,rtmax,t index,pcount,lx,ly,Hi,RD i , R F i , i r e a l csth,snth,s_limits(2,0:nix,0:nly,rtmax),RDsq,RFsq,H, + HsRF,HsRD,sqrRF,csqrRF,rly,A,B,rlxsq,rat io_pv,ratio_vs, + delta_p_inv,ratio,rRFi.rHi common /det_parms/H,Hi,RDi,RFi + /ratios/delta_p_inv,ratio_pv,ratio_vs pcount = 0 r a t i o = ratio_pv*ratio_vs rRFi = float(RFi)-1.0 r H i = f l o a t ( H i ) - . 5 RDsq = float(RDi*RDi) RFsq = rRFi*rRFi HsRD = H*snth*delta_p_inv HsRF = rHi*snth do l y = 0, nly-2 r l y = f l o a t ( l y ) do l x = 0, int( r R F i ) r l x s q = f l o a t ( l x * l x ) i f (rly.gt.(HsRD-csth*sqrt(RDsq-rlxsq))) then sqrRF = sqrt(RFsq-rlxsq) csqrRF = csth*sqrRF i f (rly.lt.(HsRF+csqrRF)) then A = r a t i o * r l y * c s t h / s n t h B = ratio*sqrRF/snth s _ l i m i t s ( l , l x , l y , t i n d e x ) = A-B i f (rly.lt.(HsRF-csqrRF)) then s_ l i m i t s ( 2 , l x . l y , t i n d e x ) = A+B else s . l i m i t s ( 2 , l x . l y , t i n d e x ) = r a t i o * + (rHi-rly*snth)/csth end i f pcount = pcount+1 end i f end i f end do end do return end Appendix B. Program Listings 114 subroutine proj ect_data(nc,nxi,nix,nly,nx,nz,rtmax,tindex,s_limits, + scale,csxi,snxi,csphi,snphi,csth,snth,image,data,sum) C This routine computes the line integrals of the Is image on a C regular grid in the XT plane. 
implicit none int eger nc,nxi,nix,nly,nx,nz,rtmax,nlxp,nlyp,alx,lx,aly,ly, + xi,giltp,tindex,atindex,i,j real data(-nc:nc,0:(nxi-1)), + image(-nx:nx,-nx:nx,-nz:nz), + s_liraits(2,0:nix,0:nly,rtmax),s1,s2, + csxi(0:(nxi-1)),snxi(0:(nxi-l)), + csphi,snphi,csth,snth,xl,yl,zl,p, + event,integral,scale,sum,dp,tau_x,tau_y,tau_z, + ratio_pv,duml,dum2, + pplane(-29:29,-18:18) common /ratios/duml,ratio_pv,dum2 nlxp = nlx-2 nlyp = nly-2 tau_x = snth*csphi tau_y = snth*snphi tau_z = csth atindex = abs(tindex) do ly = -nlyp, nlyp aly = abs(ly) do lx = -nlxp, nlxp alx = abs(lx) i f (ly*tindex.ge.O) then si = s_limits(l,alx,aly,atindex) s2 = s_limits(2,alx,aly.atindex) else si = -s_limits(2,alx,aly,atindex) s2 = -s_limits(l,alx,aly,atindex) end i f C Compute the line integrals in the e l l i p t i c a l regions i f (sl.lt.s2) then xl = -(lx*snphi+ly*csphi*csth)*ratio_pv y l = (lx*csphi-ly*csth*snphi)*ratio_pv z l = ly*snth*ratio_pv event = scale*snth*integral(xl,yl,zl,tau_x,tau_y,tau_z, + nx.nz,image,sl,s2) C Add the scaled line integral into the incomplete C plane integrals using linear interpolation, do xi = 0, nxi-1 p = lx*csxi(xi)+ly*snxi(xi) if(abs(p).le.(nc-1)) then if(p.lt.O.O) then giltp = int(p)-l else giltp = int(p) end i f dp = p-giltp data(giltp,xi) = data(giltp,xi)+event*(l-dp) data(giltp+l,xi) = data(giltp+l,xi)+event*dp end i f end do ! xi end i f end do end do return end Appendix B. Program Listings 115 r e a l function integralCxl,yl,zl,tau_x,tau_y,tau_z,nx,nz,image, + sl,s2) C This function computes the l i n e i n t e g r a l by t r i - l i n e a r i n t e r p o l a t i o n C i n the image volume i m p l i c i t none integer nx,nz r e a l image(-nx:nx,-nx:nx,-nz:nz), + sl,s2,s,ds/1.0/,dsx,dsy,dsz, + x l , y l , z l , x , y , z , g i l t x , g i l t y , g i l t z , d x , d y , d z , + t au_x,t au_y,t au_z i n t e g r a l = 0.0 s — s i dsx = ds*tau_y dsy = ds*tau_z dsz = ds*tau_x x = xl+s*tau_x y = xl+s*tau_y z = xl+s*tau_z do while (s.le.s2) if(x.lt.O.O) then g i l t x = i n t ( x ) - l else g i l t x = int(x) end i f if(y . l t . 0 . 0 ) then g i l t y = i n t ( y ) - l else g i l t y = int(y) end i f i f ( z . l t . 0 . 0 ) then g i l t z = i n t C z ) - l else g i l t z = intCz) end i f dx = x - g i l t x d y = y - g i i t y dz = z - g i l t z i n t e g r a l = i n t e g r a l + +Ci•0-dx)* C1•0-dy)* C1.0-dz)*image C g i l t x , g i l t y , g i l t z ) + +dx*Cl.0-dy)*Cl.0-dz)*iraageCgiltx+l,gilty,giltz) + + C1.0-dx)*dy* C1•0-dz)*imageCgiltx,gilty+1,giltz) + +dx*dy*Ci.0-dz)*imageCgiltx+l,gilty+1,giltz) + +Cl.0-dx)*Cl.0-dy)*dz*imageCgiltx,gilty,giltz+1) + +dx* C1.0-dy)*dz*image Cgiltx+1,gilty,giltz+1) + + C1.0-dx)*dy*dz*imageCgiltx,gilty+1,giltz+1) + +dx*dy*dz*imageCgiltx+l,gilty+1,giltz+1) x = x+dsx y = y+dsy z = z+dsz end do end Appendix B. Program Listings 116 subroutine f ilterdata(nf,np,nxi,nc,ncp,data,conv,pdata,tmax, + tindex,L) C This routine performs the convolution f i l t e r i n g i m p l i c i t none integer nf,np,nxi,nc,ncp,tmax,tindex,p,xi.fi,i,j r e a l data(-nc:nc , 0 :(nxi - 1 ) ) , + pdata(-(nc+nf-l):(nc+nf -1) ,0:(nxi -1)) r e a l L(-nf:nf , 0 :(nxi - 1 ) , 0:tmax),conv(-ncp:ncp,0 :(nxi - 1 ) ).filter do x i = 0, nxi - 1 do p = -nc, nc pdata(p,xi)=data(p,xi) end do end do do x i = 0, nxi - 1 do p = -nc , nc do f i = 1-nf, nf -1 conv(p.xi) = conv(p.xi) + + pdata(p+fi,xi)*L(fi.xi.abs(tindex)) end do ! f i end do ! p end do ! x i return end Appendix B. 
Program Listings 117 subroutine integrate(nxi,nix,nly,nc,ncp,conv,pplane, + csxi,snxi) C This routine performs the integration i n x i on a regular g r i d C i n the XRT plane using a ID l i n e a r i n t e r p o l a t i o n i m p l i c i t none integer n x i , x i , n l x , n l y , n c , n c p , i , j , k , g i l t p r e a l p,pi,delta_xi,dp, + conv(-ncp:ncp,0 :(nxi-i)), + pplane(-nix:nix,-nly:nly), + csxi ( 0:nxi-l),snxi ( 0:nxi-l) p i = acos(-l.O) do j = -nly,nly do i = -nix,nix do x i = 0, nxi-1 p = f l o a t ( i ) * c s x i ( x i ) + f l o a t ( j ) * s n x i ( x i ) i f (abs(p).le.nc) then i f (p.lt . 0 . 0 ) then g i l t p = int(p) - 1 else g i l t p = int(p) end i f dp = p - g i l t p pplane(i.j) = pplane(i,j) + (l-dp)*conv(giltp,xi) + + dp*conv(giltp+l,xi) end i f end do ! x i end do end do return end Appendix B. Program Listings subroutine backproj ect(nix,nly.pplane,C,nx,nz,image, + csphi,snphi,csth,snth) C This routine backprojects the f i l t e r e d XRT plane using a b i - l i n e a r C i n t e r p o l a t i o n . i m p l i c i t none integer n l x , n l y , n x , n z , n x s q , g i l t l x , g i l t l y , i , j , k r e a l pplane(-nix:nix,-nly:nly), + image(-nx:nx,-nx:nx,-nz:nz), + C,lx,ly,dx,dy,csphi,snphi,csth,snth, + ratio_vp ratio_vp = 1.0 nxsq = nx*nx do k = -nz, nz do j = -nx, nx do i= -nx, nx i f ( ( i * i + j * j ) . l e . n x s q ) then l x = ratio_vp*(j*csphi-i*snphi) l y = ratio_vp*(-i*csth*csphi-j*csth*snphi+k*snth) i f ( l x . l t . 0 . 0 ) then g i l t l x = i n t ( l x ) - l else g i l t l x = i n t ( l x ) end i f i f ( l y . l t . 0 . 0 ) then g i l t l y = i n t ( l y ) - l else g i l t l y = i n t ( l y ) end i f dx = l x - g i l t l x dy = l y - g i l t l y image(i,j,k) = image(i,j,k) + + ( ( 1 . 0 - d x ) * ( 1 . 0 - d y ) * p p l a n e ( g i l t l x , g i l t l y ) + + ( d x ) * ( l . 0 - d y ) * p p l a n e ( g i l t l x + l , g i l t l y ) + + ( 1 . 0 - d x ) * ( d y ) * p p l a n e ( g i l t l x , g i l t l y + l ) + + (dx ) * ( d y ) * p p l a n e ( g i l t l x + l , g i l t l y + l ) ) * C end i f enddo enddo enddo return end Publications Toward the Design of a Positron Volume Imaging (PVI) Camera, J.G. Rogers, M. Stazyk, R. Harrop, C.J. Dykstra, J.S. Barney, M.S. Atkins, P. E. Kinahan, IEEE Transcations on Nuclear Science, A p r i l 90, Vol. 37, No. 2. Design of a Volume-Imaging Positron Emission Tomograph, J.G Rogers, R. Harrop, G.H. Coombes, N.A. Wilkinson, M.S. Atkins, B.D. Pate, K.S. Morrison, M. Stazyk, C.J. Dykstra, J.S. Barney, P.W. Doherty, D.P. Saylor, IEEE Transactions on Nuclear Science, Feb 89, Vol. 36, No. 1. Reactive Ion Etching Damage to n-type GaAs, S. Dzioba, T. Lester, M. Stazyk, C. Miner, Presented May 1986, State-of-the-Art Program on Compound Semiconductors. S l i p - f r e e , Capless Rapid Thermal Annealing of S i Implanted GaAS, T. Lester, M. Stazyk, R. Streater, Presented June 1986, Electronic Materials Conference, University of Massachusetts, Amherst, Mass. A Simple Diagnostic Technique for Detecting Fine Lock-up Patterns i n E l e c t r o s t a t i c a l l y Scanned Ion Implanters, T. Lester, M. Stazyk, JVSTb5(2), March/April 1987. Awards 1978 McMaster University Alumni Scholarship 4 Year Entrance Scholarship 1979 Association of Professional Engineers of Ontario Entrance Scholarship (Declined) 1979 L. Squire Scholarship 1980 Association of Professional Engineers of Ontario Scholarship 1980 McMaster University Scholarship 1981 McMaster University Scholarship 1989 NSERC PGS-2 1989 U. B. C. 
University Graduate Fellowship (Declined) 1989 Science Council of B. C. G.R.E.A.T. Award 1990 NSERC PGS-3 1990 U. B. C. University Graduate Fellowship (Declined) 1990 Science Council of B. C. G.R.E.A.T. Award 
