An Electronic Portal Image-based Monte Carlo Approach to In-Vivo Dosimetry for Intensity Modulated Radiation Therapy

by

Parmveer Singh Atwal
B.Sc., The University of British Columbia, 2007

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate Studies (Physics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

September 2009

© Parmveer Singh Atwal 2009

Abstract

Radical radiotherapy aims to concurrently achieve high tumour control probability and low normal tissue complication probability. Intensity-modulated radiation therapy (IMRT) provides the highly localized radiation dose delivery necessary to reach this goal. As highly conformal techniques become more prevalent, the importance of determining, and accounting for, treatment-planning, patient-setup, and delivery errors, which result in discrepancies between the calculated and actual delivered dose, also increases. Accurate Monte Carlo-based modeling of the equipment overcomes some of these deficiencies. Unfortunately, some sources of delivery error, such as mis-calibration of the beam-modulating system, cannot be easily incorporated in the model. Use of the amorphous-silicon detector (or EPID, for Electronic Portal Imaging Device), available on many linear accelerators, provides a solution.

We hypothesize that non-transit dose images from the EPID provide us with information regarding certain delivery errors. To obtain this information, we first capture non-transit EPID dose images of the treatment field. Next, removal of intra-EPID scatter via iterative Richardson-Lucy deconvolution converts the dose image to a fluence matrix. Projected back to the height of the beam-modulating system, this matrix can be used to modulate the statistical weight of photons in a phase-space file simulating the linear accelerator from the source to this height.
The modulated phase-space can be used to run Monte Carlo calculations through simulated phantoms. Assumptions regarding the EPID's electromechanical behaviour, as well as regarding beam divergence, were validated. This method was compared and validated against our centre's treatment planning system, for various configurations of the beam-modulating system, in two non-patient phantoms (water and anthropomorphic). The new procedure matched well with film measurements, consistently providing a higher percentage (~10%-15% higher) of pixels with Gamma-Dose (3mm Distance-To-Agreement, 3% Dose-Difference criteria) less than 1, versus the TPS-based dose distributions. This indicates that the EPID-based fluence is more accurate than the TPS-based fluence. This so-called MCEF (Monte Carlo with EPID-based Fluence) procedure can be extended by utilizing Cone-Beam CT (CBCT) to account for any setup errors or physiological changes in the patient. By coupling the EPID-based fluence with CBCT-based phantoms, we believe this method will accurately mimic true 3D in-vivo dosimetry.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
1 Introduction
2 Overview
3 STEP 1 – EPID Dose Image to Fluence Matrix Conversion
  3.1 Description of Beam Geometries Used to Obtain EPID Dose Images
  3.2 Establishing Consistency of EPID Origin
  3.3 Establishing Need for Projection of Fluence
  3.4 Renormalization of Dose Image Based on MUs Delivered
  3.5 Removing EPID Scatter from Dose Image to Obtain Fluence
4 STEP 2 – Modulation of Particle Phase-Space Using EPID-based Fluence
  4.1 Validating the Assumption of Geometric Divergence
  4.2 Modulating the Phase-Space Based on the Fluence at the MLC
5 STEP 3 – Dose Calculation Based on Modulated Photon Particle Phase-Space
  5.1 Monte Carlo Calculation with Modulated Phase-Space
  5.2 Post-Processing of the Calculated Dose Distribution
6 Analysis
  6.1 Dose Distributions Obtained via TPS and Film, for Comparison
  6.2 Gamma-Dose Index
  6.3 Results
7 Conclusion
  7.0.1 Future Work
Bibliography
Appendices
A Electronic Portal Imaging Device
  A.1 Behaviour Characteristics
  A.2 Portal Dose Image Prediction Algorithm
B Expectation Maximization: Richardson-Lucy Algorithm

List of Tables

3.1 Table of brightest pixel coordinates (x,y), at various SADs, for verification of EPID alignment. The measurements were made beginning at SAD 105cm and continued cyclically (indicated by the blue arrows) for two cycles
6.1 Percentage of the Gamma-Dose with a value less than one, for the water phantom. Gamma-Dose Index criteria: DTA 3mm, DD 3%
6.2 Percentage of Gamma-Dose with value less than one, for the prostate field (Plan 5) delivered on an anthropomorphic phantom. Gamma-Dose Index criteria: DTA 3mm, DD 3%

List of Figures

1.1 Possible underdosage (a) and overdosage (b) scenarios, due to the tongue-and-groove design of Varian MLC leaves
2.1 Overview of the MCEF (Monte Carlo with EPID Fluence) Procedure
2.2 Details of how the MCEF Procedure was established
3.1 The MLC setup for the initial test patterns
3.2 Crosslines for Plan 4 at SSD 92.7cm, indicating that the EPID cannot be reliably used at this height, or higher
3.3 Crosslines for Plan 5, the prostate field, at SSD 92.7cm, indicating that the EPID cannot be reliably used at this height, or higher
3.4 Here we present the normalized percentage of Gamma-Dose Indices < 1 as a function of υ for various η. The normalization is relative to the percentage of Gamma-Dose Indices < 1 when (η, υ) = (10, 0), i.e. the ordinate is %[Γ(η, υ) < 1] / %[Γ(10, 0) < 1]. In the legend "full" refers to the analysis over the full field, while "partial" refers to analysis over the region around the in-beam MLC leaves only. The insert shows the film measurement used, with the distribution influenced by the in-beam MLC leaves boxed in gray.
3.5 Representative curves through the dose distributions for Plan 4, using (η, υ) = (30, 5) in the simulation. The dose peak furthest to the right is very much reduced in the simulation, relative to the measured peak; we hesitate to postulate why this may be so.
Comparing these profiles to those of Fig. 6.5(d), in which the simulation of Plan 4 uses (η, υ) = (20, 0), we see that the match between simulated and measured profiles is much better in the latter case. This indicates that the results presented in Fig. 3.4 regarding the choice of η, υ values are plan-specific.
4.1 Comparison of crossline (along long axis) and inline (along short axis) for images taken at different SSDs
4.2 Gamma-Dose Analysis for the same two images, with > 95% of pixels with Γ < 1
6.1 Gafchromic® EBT film and EPSON Perfection v700 Photo scanner related data
6.2 One-dimensional (n=1) illustration of the gamma index as a metric from the reference dose distribution. The radius indicated by the green arrow is the gamma index of (y, D) with respect to the evaluation distribution d = D_E(x), i.e. this is the closest 'distance' of the dose point (y, D) to any point which defines the evaluation distribution d = D_E(x) (represented here as a continuous and smooth maroon-coloured curve).
6.3 Various dose comparisons between MCEF-simulated (blue) and ion-chamber-measured (red) dose distributions for a 10x10cm² open field
6.4 Comparison of dose profiles from measurement, TPS, and Monte Carlo simulation, for plans 1-4
6.5 The Gamma-Dose comparisons for the initial test plans
6.6 2D dose distributions, attained various ways, at the isoplane, for the dynamic-MLC prostate field (Plan 5) delivered on an anthropomorphic phantom
6.7 Gamma-Dose distributions, at the isoplane, for the prostate field (Plan 5) delivered on an anthropomorphic phantom
Acknowledgements

Dr. Agronivich was able to obtain funding for this work from Sanofi Aventis Canada, and I am truly grateful for the opportunity they provided me to do research in medical physics. I am especially thankful to the physicists and physics assistants of the Fraser Valley and Abbotsford Cancer Centres, who were free with their time and knowledge and helped me a great deal. Steven Thomas not only provided the original idea around which this thesis is based; he also showed me how to think sharply and quickly, as a medical physicist should. Stanislaw Szpala provided a lot of help with the films, without which I would still be trying to figure out why I seem to have fog, no matter how careful I am! Iulian Badragan provided mentoring about medical physics in general, and I would be remiss not to acknowledge the importance of his words to my thinking about my plans for the future.

Finally, I would like to single out two particular gentlemen and express my gratitude to them. Bilal Shahine had the immense patience to put up with me, and quietly and deftly fixed my many, many errors, both in the physical thesis and in my thoughts regarding the project. He also provided friendship, for which no words suffice, so simply I say 'thank you'. My supervisor, Ramani Ramaseshan, provided me with not only a great deal of guidance and expertise on this thesis; his upbeat persona made it a true pleasure to work with him. The encouragement he provided when I was struggling and unsure meant more to me than I think he realized. My deepest appreciation goes to him.

Last on paper, but first in mind, I thank my family and I thank God.

Dedication

I dedicate this work to my bewildering girlfriend G. S. D., and to P. S. M., who appreciates the worth of a good conversation.

Chapter 1

Introduction

In simplest terms, the purpose of radical radiotherapy is to eradicate cancerous cells while minimizing damage to cells which are non-cancerous.
In radiobiological terms, this would be restated as the need to concurrently achieve a high tumour control probability (TCP) and a low normal tissue complication probability (NTCP) [13]. In order to choose the most appropriate treatment plan to meet this goal, it is necessary to know how much radiation will be delivered to the various tissues surrounding the cancerous cells, as well as to the cancerous cells within the patient. Such a distribution of absorbed radiation is termed the in-vivo dose. The difficulty of accurately determining this in-vivo dose is exacerbated by treatment planning and delivery uncertainties, such as uncertainties in patient physiology, target setup, and calibration of various machine components. Ideally, we would determine the dose delivered to the patient with the use of detectors strategically placed within the body at the time of delivery. Realistically, of course, this is not possible. The purpose of this study was to create a procedure which is equivalent to this in-vivo dosimetry for a given patient, which means incorporating the types of discrepancies mentioned above when calculating the dose for pretreatment verification.

To better meet the criteria of high TCP and low NTCP, more conformal (and complex) techniques are becoming prevalent. The use of such techniques makes it increasingly important to accurately determine whether the dose we wish to deliver and the dose which we actually delivered can be considered, within acceptable margins, to be equivalent. ICRU Report 50 specifies that the dose delivered should be between +7% and -5% of the prescribed dose [26]. Due to the steep slope of the TCP-dose and NTCP-dose relationships, a dose error of 5% might lead to a TCP change of 10%-20%, and to even larger NTCP changes [10, 27]; clinical effects have been found to be noticeable for dose errors of 7% [27].
A good example of the type of radiotherapy technique for which such accuracy is important is intensity-modulated radiation therapy (IMRT). IMRT uses a multileaf collimator (MLC), a device comprised of many individual tungsten alloy plates machined to fit tightly one against the other, yet each able to independently move and modulate a segment of the radiation beam, providing highly localized radiation dose delivery. Such localization better allows medical physicists to reach their stated goal of reduced NTCP and increased TCP. However, a resultant characteristic of these highly conformal IMRT fields is the presence of large dose gradients. As the modality becomes clinically more prevalent, this same characteristic of the gradients, as well as the necessary complexity of IMRT-capable linear accelerators, poses new challenges to the proper determination, and accounting, of treatment planning errors and patient setup errors, which may otherwise result in discrepancies between the calculated and actual delivered dose [14]. In addition to the possible errors mentioned above, there are a host of other issues which have an effect on the dose distribution, including changes in a patient's physiology (such as weight loss and organ motion), MLC miscalibration, and the tongue-and-groove effect.

The tongue-and-groove effect, described graphically in Fig. 1.1, is a particularly illustrative example of the difficulty in trying to model the effect of the complex geometry of the MLC leaves. The tongue-and-groove design of beam-modulating MLCs, meant to minimize hot spots from inter-leaf leakage, has been known to cause overdosage and underdosage [4, 11, 52]. Current treatment planning systems (TPS), such as the Eclipse TPS, Platform 8.1, are unable to incorporate the tongue-and-groove effect [43] and consequently do not model the fluence appropriately.
Many studies have been conducted on understanding and quantifying this effect, and have noted that the tongue-and-groove effect can result in an underdose of as much as 10%-15% [4, 11, 52]. This can result in underdosage to the target, resulting in poor local tumour control [6].

Figure 1.1: Possible underdosage (a) and overdosage (b) scenarios, due to the tongue-and-groove design of Varian MLC leaves

(a) In the first row we have a side view of two MLC leaves abutting, differentiated by colour. The second and third rows present illustrations of the adjacent leaves and their associated fields, as well as their superposition. The region between the two leaves, coloured grey in the images of the second row, is where the tongue of one leaf fits into the groove of the other leaf. Due to exponential attenuation (and ignoring effects such as inter-leaf transmission for the moment), so long as the two leaves are not moved synchronously throughout their entire duration in-beam, this greyed region will be underdosed, relative to the distribution provided via the TPS, as shown in the final image of the third row.

(b) While similar to the previous image, here we present how the tongue-and-groove design can allow for overdosage, not just underdosage. The first column presents the leaves with their tongue-and-groove design incorporated, whereas the second column presents a simplified design, which is often assumed in treatment planning systems. The ellipses indicate that the leaf configuration shown continues in that direction beyond our region of interest. At point B, the leftmost edge of the in-beam MLC leaves, the final row shows that the first leaf design causes an overdosage, as compared with the second, simpler design. This is due to the fact that for the first design, the left-most edge is the 'tongue' of the leaf, whereas for the second design this is the edge of the full leaf. Relative to the scenario of the second column of images, with the more complex design of the leaves, the tongue only partially attenuates the photon beam.

Monte Carlo (MC) techniques have the potential to provide the most accurate calculated dose distributions [30], so long as the calculation engine is provided an accurate model of the system through which the radiation is being transported. MC algorithms simulate the radiation transport and energy deposition in a probabilistic manner, converging to a solution by repeating the start-to-end physical process (history) millions of times. So long as the models for individual possible interactions are accurate, the final result will be accurate, albeit containing statistical noise. By running more histories this noise can be reduced. If the Monte Carlo system is provided an accurate model of the MLC, its calculations will display the tongue-and-groove effect [37] missing in present TPS algorithms. Additionally, depending on the MC calculation engine, the state of the simulation can be preserved (scored) at various planes in the simulation geometry, in files known as phase-space files [17, 32]. These files contain characteristic information, such as energy and direction of travel, for each particle as it crossed the scoring plane, and allow for more complex MC experiments in which parts of the simulation geometry are altered, or particles are classified and studied based on various properties recorded. Despite such sophistication, current MC techniques have difficulty accounting for delivery errors, e.g. MLC mis-calibration, performance degradation due to inter-leaf friction, etc., because such information is not easily provided to the calculation engine.
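The statistical-noise behaviour described above can be illustrated with a toy simulation. This is not the EGSnrc transport model; the exponential "energy deposit" per history below is an arbitrary stand-in, chosen only to show that the spread of repeated Monte Carlo estimates shrinks roughly as 1/sqrt(N) as more histories are run:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(n_histories):
    """Toy Monte Carlo: each history deposits an exponentially
    distributed energy; the result is the mean deposit per history."""
    return rng.exponential(scale=1.0, size=n_histories).mean()

def noise(n_histories, repeats=200):
    """Statistical noise, measured as the spread of repeated
    independent estimates."""
    return np.std([mc_estimate(n_histories) for _ in range(repeats)])

# Running 100x more histories reduces the noise by roughly a factor
# of 10, consistent with the expected 1/sqrt(N) behaviour.
```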
We hypothesize [1] that non-transit dose images from the amorphous-silicon detector (referred to as an EPID, short for Electronic Portal Imaging Device) can provide us with this information regarding delivery errors. Indeed, the tongue-and-groove effect is also captured by the EPID [43]. By removing intra-EPID scatter, we can derive the corresponding fluence matrices from these images. A previously simulated phase-space file, scored at the level of the MLC, can be modulated to incorporate the information contained in such a fluence matrix. This avoids the difficult task of accurately modeling the MLC structure, as well as the time-consuming simulation of particle transport therein, required to attain accurate statistics. A Cone-Beam CT (CBCT) reconstruction of the patient near delivery time captures various setup discrepancies (e.g. changes in patient physiology). Together, the EPID-based fluence and CBCT reconstruction provide a more accurate model of the patient-linac system as it is during plan delivery, versus how it was during treatment planning. Use of this more accurate model allows for dose determination which is closer to true in-vivo dosimetry.

In the following chapters we describe the methodology and results of this so-called MCEF (Monte Carlo with EPID-based Fluence) procedure. We end the paper with a discussion of possible future work to extend this method.

Chapter 2

Overview

In this chapter we present a very brief, purely descriptive outline of our method, including which assumptions require validation, and the order in which such material will be presented in the rest of this paper. Fig. 2.1 outlines the steps of the MCEF (Monte Carlo with EPID-based Fluence) procedure. The step regarding Cone-Beam CT is distinguished from the other steps by the colour of its box in the figure. This is to highlight the fact that this step, while important to the overall process, is beyond the scope of the present project.
Figure 2.1: Overview of the MCEF (Monte Carlo with EPID Fluence) Procedure

The purpose of obtaining these Cone-Beam CT images of the patient is to be able to incorporate patient physiology information into the Monte Carlo simulation of the dose distribution. We wish to minimize the discrepancy between the calculated dose distribution and the actual dose delivered to the patient, keeping in mind that the dose delivered should be between +7% and -5% of the prescribed dose [26]. Obtaining the CBCT images as near as possible to delivery time is one way to ensure that this similarity is achieved. Further discussion of this step will be postponed until the end of the paper.

Fig. 2.2 presents a more detailed chart of the steps which have been implemented, and which will each be discussed in detail in its own chapter:

- EPID Dose Image to Fluence Matrix Conversion
- Phase-Space Modulation via EPID-based Fluence
- Monte Carlo Dose Calculation via Modulated Phase-Space

In step 1, before we can obtain the non-transit (i.e., without a patient) EPID dose image of the MLC configuration, we need to decide at which height the EPID will be located. If the EPID is moved up to the height of the MLC, then there will be no need to project the fluence from the surface of the EPID back up to this height at a later point. Unfortunately, such a scenario is not possible for two reasons. First, it is physically impossible to move the EPID to a Source-to-Surface Distance (SSD) of less than 92.7cm without dismantling the E-arm, the component of the linac which supports and orients the EPID. This is clinically infeasible. Second, even if it were possible to easily move the EPID to the height of the MLC, it would be undesirable to do so, because at that height the EPID response is no longer dose-independent, nor dose-rate-independent. We decided that it would be best to keep the EPID at SSD 105cm, the same height at which it was kept during calibration (Appendix A).
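Under the geometric-divergence assumption used in this work, the back-projection of the fluence from the EPID surface to MLC height reduces to rescaling coordinates by the ratio of the two heights. A minimal sketch follows; the function name is illustrative, and treating the source as a point at z = 0 is an assumption of the sketch (the actual z_MLC depends on the linac geometry):

```python
import numpy as np

def project_to_mlc(x_epid, y_epid, z_epid, z_mlc):
    """Map EPID-plane coordinates (cm) back to the MLC plane, assuming
    pure geometric divergence from a point source at z = 0.  Fluence
    values carry over unchanged along each ray; only the footprint of
    each matrix element shrinks by a factor of (z_mlc / z_epid)**2."""
    scale = z_mlc / z_epid
    return np.asarray(x_epid) * scale, np.asarray(y_epid) * scale
```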
We must also verify that the EPID origin is accurate; that is to say, the beam centre and the EPID's physical centre should be aligned. If this is not verified, the simulated radiation beam may not be in the same position, relative to the simulated phantom, as the real radiation beam is relative to the real phantom. We found the EPID positioning to be accurate, and no offset needed to be accounted for.

Converting the dose image to a fluence matrix requires deconvolution of the dose image using the iterative Richardson-Lucy (RL) algorithm. Two parameters, η (which specifies the maximum number of iterations for all pixels) and υ (which bounds the fluence value on a per-pixel basis), must be determined for this algorithm. The merits and determination of these values are discussed in Chapter 3. The non-transit EPID dose image can now be obtained for the various MLC configurations under consideration. The linearity of the EPID response with respect to MUs (Monitor Units) delivered is used to renormalize the EPID response. Finally, the effect of intra-EPID scatter on the image is removed via deconvolution using the Richardson-Lucy algorithm, for which we have already pre-determined the parameter values, using the blur kernel, a function characterizing this scatter which was provided to us during EPID calibration.

In step 2, the fluence matrix we derived above is used to modulate a phase-space file. In order to begin this modulation, we first need to determine how we will project the values of the fluence matrix back from EPID height to MLC height, through air. A couple of simple tests confirm that the assumption of geometric beam divergence between the two heights is sufficient for this task.

The phase-space file we will modulate was created by running a simulation of the linac we are using, from the source down to the position where the MLC would normally be located, at which plane all the particles are scored.
For this simulation we use the EGSnrc particle-transport software, with the BEAMnrc user code to make linac simulation simpler. Now, interpreting the elements of the projected fluence matrix as being related to a particular spatial discretization, the statistical weight of each phase-space photon, depending on its spatial coordinate, needs to be adjusted. The statistical weight of all particles other than photons, having much shorter path lengths, cannot be related to the elements of the fluence matrix at the EPID surface; thus the statistical weights of these particles cannot be related to the elements of the projected fluence matrix. The statistical weight, a measure of the importance of a photon to the overall simulation, is equated with the corresponding element of the fluence matrix. For all other particles the statistical weight is set to 0, i.e., these particles will not be run through the Monte Carlo simulation. The phase-space is now modulated to incorporate the effects of the MLC configuration, as captured by the EPID.

In step 3, we use this modulated phase-space file to run a Monte Carlo simulation and calculate a dose distribution. The computation is done using a code known as DOSXYZnrc. Current data is for non-patient phantoms, in particular water cubes and an anthropomorphic phantom simulated from CT images. However, in the future, the calculations could be run through a patient simulation created using images captured via CBCT.

After having calculated the in-vivo dose, we convert it, using stopping-power ratios, from dose-in-medium to dose-in-water. Finally, this dose is de-noised, so as to have smoother profiles to compare with other dose calculation techniques, without having to run the simulation for a longer period of time.
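The reweighting step above can be sketched as follows. This operates on hypothetical in-memory arrays; an actual BEAMnrc phase-space file is a binary record format that would first need to be parsed into such arrays, and the grid-binning scheme here is an illustrative simplification:

```python
import numpy as np

def modulate_phase_space(x, y, is_photon, fluence, extent):
    """Equate each photon's statistical weight with the element of the
    projected fluence matrix covering its (x, y) position at the scoring
    plane; all non-photon particles get weight 0, excluding them from
    the subsequent dose calculation.

    extent = (xmin, xmax, ymin, ymax) covered by the fluence matrix."""
    xmin, xmax, ymin, ymax = extent
    ny, nx = fluence.shape
    # Bin each particle into the fluence-grid cell containing it
    ix = np.clip(((np.asarray(x) - xmin) / (xmax - xmin) * nx).astype(int), 0, nx - 1)
    iy = np.clip(((np.asarray(y) - ymin) / (ymax - ymin) * ny).astype(int), 0, ny - 1)
    return np.where(is_photon, fluence[iy, ix], 0.0)
```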
Figure 2.2: Details of how the MCEF Procedure was established

Chapter 3

STEP 1 – EPID Dose Image to Fluence Matrix Conversion

3.1 Description of Beam Geometries Used to Obtain EPID Dose Images

The MCEF (Monte Carlo with EPID-based Fluence) algorithm was tested with 6 different MLC configurations. The clinac (clinical linear accelerator) used is a Varian Clinac® iX, with an IAS3® aSi-1000 amorphous silicon detector (EPID), the Exact-Arm® imager support arm, and the 120-leaf Millennium™ MLC. The first configuration tested was an open 10x10cm² field (note: all field sizes are defined at SAD 100cm). By 'open' we mean that there are no MLC leaves in-beam, i.e. the beam is unmodulated by a tertiary collimator. Next we have four test plans with MLC leaves in-beam for various simple configurations, as shown in Fig. 3.1: Plan One has only one leaf in-beam, while, on the other extreme, Plan Four presents a "fence" structure via highly interdigitated MLC leaves, and is meant to act as the limiting case for representing the sorts of situations (e.g., high gradients) which arise in IMRT plans. Plan Two and Plan Three are similar, each having two leaves in-beam. Plan Two has these two leaves separated by a distance of ~1cm (two retracted leaves), while Plan Three has these two leaves separated by a distance of ~0.5cm (one retracted leaf). This was done to see how well various dose-determination systems would handle the dose distribution between the leaves, given the increasing complication in local geometry, as discussed in the introductory chapter.

So far, the plans mentioned used a static MLC configuration. Plan 5, however, the final plan we tested, was a complex dynamic-MLC plan, meant, much like Plan Four, to be a limiting case for the sorts of plans which may be implemented clinically. As it is a dynamic plan, a figure such as Fig. 3.1 cannot be presented.
Plan 5 was delivered with a 10x10cm² field on an anthropomorphic phantom (ATOM® Phantom, Model 701, by CIRS Tissue Simulation & Phantom Technology; Norfolk, VA, U.S.A.), and is also referred to as the prostate field in this paper.

Figure 3.1: The MLC setup for the initial test patterns. (a) Plan 1 (b) Plan 2 (c) Plan 3 (d) Plan 4

To obtain the EPID dose image for each MLC configuration, the portal imager was kept at zero gantry angle and a height of 105cm (the same height which the manufacturer, Varian, recommends for calibration of the EPID for obtaining correct dose images; see Appendix A). In the next few sections of this chapter we discuss step 1 in more detail, particularly the assumptions we are required to make when obtaining the dose image, as well as the algorithm required to convert this image to the appropriate fluence map.

3.2 Establishing Consistency of EPID Origin

During beam-image capture via the EPID, we need to know the spatial relationship between the photon beam delivered and the image captured by the imager. This is to ensure that an offset is not introduced into the orientation of the photon beam simulated by the Monte Carlo algorithm relative to the physical beam (and, ultimately, between the simulated beam and the patient geometry). For this we assume that there is no shift between the centre of the EPID surface and the physical beam's central axis (CAX). However, this presupposes that the coordinates provided by the positioning system when orienting the EPID are accurate. Normally, QA is done monthly at our centre to test this aspect of the positioning system. However, the test is done for clinically relevant heights (z = 140cm, z = 160cm), whereas for the MCEF procedure we are using the EPID at calibration height (z_EPID = 105cm); we felt it wise to err on the side of caution and redo the QA at z_EPID.
The QA procedure consists of aligning the cross-hairs of the clinac light field with those on a specially created tray ("Light Field Cross Hair Marker", Ref. 72247, by Standard Imaging, Inc.) and capturing a beam image. The cross-hairs of the tray appear as dark lines on the image. In the centre of the cross-hairs is embedded a spherical marker, which shows up on the image as a bright spot. For the 1024x768 element pixelation of the EPID, the spherical marker, if correctly aligned with the light hairs, should be at pixel value (512, 384). Beginning at z = 105cm, the EPID was moved cyclically to z = 115cm, 125cm, 140cm, and then backwards, to see how consistent the orientation readout is. A 6MV photon beam provided 35 MU, at 400 MU/min, and the field size was set to 20x20cm². The images were analyzed using the AMMaintenance software (version 7.1.2005.628, ©2004). The results are presented in Table 3.1. They are consistent and agree with the central pixel value within the Abbotsford Cancer Centre's acceptable error criteria for this QA of ±1mm, suggesting that the indicated EPID alignment is accurate.

Table 3.1: Table of brightest pixel coordinates (x,y), at various SADs, for verification of EPID alignment. The measurements were made beginning at SAD 105cm and continued cyclically (indicated by the blue arrows) for two cycles.

3.3 Establishing Need for Projection of Fluence

We have mentioned previously that once the fluence is obtained at the surface of the EPID, it must somehow be projected back to z_MLC, the height where the MLC is located, in order to modify the phase-space file scored there. But determining the relationship between F_EPID, the fluence matrix defined at z_EPID, and F_PHSP, the fluence matrix defined at z_MLC, requires correctly modeling particle transport between the two planes.
It would be simpler if no projection of any sort were required, which would be the case if z_MLC and z_EPID were the same, i.e. if the EPID were placed directly below the MLC*. Unfortunately, it is not possible to move the EPID up to this height without dismantling the E-Arm and manually shifting the EPID to the desired height; we tried this and concluded that such a procedure is difficult and time consuming, making it clinically impractical, and so not worth pursuing further in this project. The highest that the EPID can easily be raised via the E-Arm is to an SSD of 92.7 cm. In Fig. 3.2 and Fig. 3.3 we present crosslines (one-dimensional profiles) of the EPID output, before any processing by us, for Plan 4 and Plan 5, respectively, at SSD 92.7 cm, delivering 100 MU per plan, at various dose rates. Quantitative analysis of any sort is not required; the obvious non-uniformity across dose rates of the crosslines indicates that, even at this moderate height, the EPID exhibits dose-rate dependence and can no longer be used as desired without increased complication. For this reason, we decided it best to follow the vendor's suggestion of obtaining the EPID images at the calibration height of 105 cm, which is within the range of heights where the EPID response has been tested and found to be dose and dose-rate independent [48, 49].

3.4 Renormalization of Dose Image Based on MUs delivered

The EPID response is linear with respect to the range of Monitor Units (MUs) used for beam output [48]. We renormalized the image pixel values to MUs based on this linear relationship.
While such MU-based renormalization is not essential for comparison of the resultant Monte Carlo-computed dose with measured distributions, as a single point-dose renormalization between the two sets would suffice, it allows for easier comparison of two Monte Carlo-computed distributions, should the need arise, without recourse to matching with measured values.

*Actually, this would require scoring the phase-space at a slightly further distance from the source, redefining z_MLC, but this is a relatively simple matter.

Figure 3.2: Crosslines for Plan 4 at SSD 92.7 cm, indicating that the EPID cannot be reliably used at this height, or higher

Figure 3.3: Crosslines for Plan 5, the prostate field, at SSD 92.7 cm, indicating that the EPID cannot be reliably used at this height, or higher

3.5 Removing EPID Scatter from Dose Image to Obtain Fluence

Up to this stage we have obtained the non-transit dosimetric image of a given beam configuration; that is to say, the beam image without patient-induced attenuation or scatter, but incorporating intensity modulation and scatter effects due to the various collimator systems, as well as scatter due to transit through the EPID. Now we will derive F_EPID, the fluence at the surface of the EPID, from the captured EPID dose image. Naively, we could assume that the dose deposited within the EPID is proportional to the fluence at the surface, but we are no longer in air, and previous studies have shown that characterizing the scatter within the EPID requires a more complex model than simply assuming geometric divergence [5, 29, 33, 47, 48, 53, 54].
Following some of this work, the EPID manufacturer Varian has implemented a calibration process meant to analytically capture this intra-EPID scatter behaviour as a sum of Gaussians known as the blur kernel, κ [43]. Details of the calibration, which we conducted, are presented in Appendix A.

Convolving F_EPID with κ would recreate the captured dose image; removing the effect on the radiation of transit through the components of the EPID requires working backwards, via deconvolution. We implemented this process with the Richardson-Lucy (RL) algorithm (Appendix B). While there are various techniques for deconvolution, in the case of a known blur kernel the iterative RL algorithm, also known as the Expectation Maximization (EM) algorithm, is considered superior and is commonly employed [8]. It became particularly prominent in the 1990s, when it was used to correct optical-image aberration in the newly launched Hubble Space Telescope [8].

In this study we used the function deconvlucy, an implementation of the RL algorithm found in MATLAB's Image Processing Toolbox. deconvlucy requires two user-defined parameters, η and υ [2]. η defines the maximum number of iterations of the algorithm, per pixel. Based on the original pixel value, υ provides a unique set of upper and lower bounds on the possible value of the fluence obtained through iterative deconvolution via the RL algorithm. These bounds prevent large variances in the calculated values, which is why υ is also known as the damping parameter [2].

Unfortunately, it is notoriously difficult to assess which values to assign these parameters in any given situation [2, 8]. To determine which combination of these values we should use for our simulations we ran two tests, the results being presented in this section. We took Plan 3 (see Fig. 3.1(c)) and ran simulations using various combinations of (η, υ).
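The iterative update at the heart of deconvlucy can be sketched in a few lines: the undamped RL iteration (υ = 0) repeatedly corrects the current fluence estimate by the ratio of the measured image to the estimate re-blurred with κ. The sketch below is ours, in Python with NumPy rather than MATLAB, using periodic (FFT) convolution; it is neither Varian's kernel nor MATLAB's exact implementation, and damping is omitted.

```python
import numpy as np

def richardson_lucy(image, psf, n_iter=20):
    """Undamped Richardson-Lucy deconvolution of a 2D dose image (sketch).

    image : measured (blurred) dose image, non-negative 2D array
    psf   : blur kernel on the same grid, centred, normalized to sum to 1
    n_iter: maximum number of iterations (the role of eta; the damping
            parameter upsilon is omitted here)
    """
    eps = 1e-12
    image = np.maximum(np.asarray(image, dtype=float), 0.0)
    kfft = np.fft.fft2(np.fft.ifftshift(psf))      # centred kernel -> FFT
    estimate = np.full_like(image, image.mean())   # flat starting guess
    for _ in range(n_iter):
        blurred = np.real(np.fft.ifft2(np.fft.fft2(estimate) * kfft))
        ratio = image / np.maximum(blurred, eps)
        # correlate the ratio with the kernel (conj = mirrored kernel)
        correction = np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(kfft)))
        estimate = estimate * correction
    return estimate
```

With a noiseless synthetic field, a few dozen iterations recover a noticeably sharper estimate of the underlying fluence than the blurred input.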
These were then compared to EDR2 film measurements (Chapter 6.1), taken at SAD 100 cm, 5 cm depth in water, using the Gamma-Dose Index*, Γ (Chapter 6.2). For Plan 3 the majority of the field is uniform and unmodulated, and so it becomes difficult to determine whether the η, υ parameters used had any effect. To overcome this limitation, we redid the Gamma-Dose Index analysis for only a segment of the dose distribution, around the in-field MLC leaves, where we thought the effect of changing the parameters would be emphasized. The results are presented in Fig. 3.4. This second set of Gamma values indicates that (η, υ) = (30, 5) provided the greatest dose agreement†. A simulation of Plan 4 (see Fig. 3.1(d)) with these parameters indicates that the choice of η, υ is plan-specific (see Fig. 3.5). Hence, we are left with the initial dilemma of choosing values for these parameters unsatisfactorily resolved. We decided that until a more universal optimization scheme/objective function could be devised, the best approach would be to use no damping, as damping introduces more complex behaviour into the deconvolution algorithm than simply altering the total number of iterations for all pixels. Only the benefits of more iterations, i.e., the parameter η, would be incorporated. Hence, in this paper, for all instances in which

*The Gamma-Dose Index, Γ, is a type of distance measure applied to each pixel of a dose distribution, comparing its similarity to another distribution [21]. This is done in a way which accounts separately for regions of high dose gradient and regions of low dose gradient. A value of less than one at a point is taken to mean that the two dose distributions are identical there. The higher the percentage of pixels in a given distribution with this index less than one, i.e.
with a larger value for %[Γ < 1], the more certain we can be that the two dose distributions are identical.

†From the graph the function is locally monotonic, so the "optimum" values may be higher. However, this was not investigated further, as it leads beyond the scope of the current study.

the RL algorithm was required and unless otherwise indicated, (η, υ) = (20, 0) is to be assumed.

Figure 3.4: Here we present the normalized percentage of Gamma-Dose Indices < 1 as a function of υ for various η. The normalization is relative to the percentage of Gamma-Dose Indices < 1 when (η, υ) = (10, 0), i.e. the ordinate is %[Γ(η, υ) < 1] / %[Γ(10, 0) < 1]. In the legend "full" refers to the analysis over the full field, while "partial" refers to analysis over the region around the in-beam MLC leaves only. The inset shows the film measurement used, with the distribution influenced by the in-beam MLC leaves boxed in gray.

Figure 3.5: Representative curves through the dose distributions for Plan 4, using (η, υ) = (30, 5) in the simulation. The dose peak furthest to the right is very much reduced in the simulation, relative to the measured peak; we hesitate to postulate why this may be so. Comparing these profiles to those of Fig. 6.5(d), in which the simulation of Plan 4 uses (η, υ) = (20, 0), we see that the match between simulated and measured profiles is much better in the latter case. This indicates that the results presented in Fig. 3.4 regarding the choice of η, υ values are plan-specific.

Chapter 4

STEP 2 – Modulation of Particle Phase-Space Using EPID-based Fluence

4.1 Validating the Assumption of Geometric Divergence

The particle fluence matrix obtained through step 1 should contain all the relevant information regarding the beam geometry used for the patient plan.
Such information includes gross geometry due to primary/secondary/tertiary collimators, as well as effects such as head scatter, collimator scatter, interleaf leakage, and the tongue-and-groove effect. However, this information is relevant at the SSD (Source-to-Surface Distance) of the previous step, whereas the phase-space file to be modulated is situated at the SMLCD* (Source-to-MultiLeaf-Collimator Distance). Also, there is a limitation on the spatial precision of the information obtained: each pixel of the EPID is 0.0392 cm², and so signal variations on a length scale smaller than this will be lost. We did not consider this pixelation to be an issue, as previous Monte Carlo studies using larger voxels [6, 37] have been able to adequately capture important dose characteristics, such as the tongue-and-groove effect.

As previously noted, we have been doing non-transit dosimetry and, because of this, we need not be concerned with issues such as patient scatter and attenuation as we use F_EPID, the fluence matrix at the

*numerical information regarding linac construction provided in confidence by Varian Oncology Systems, Palo Alto, California, U.S.A.

height of the EPID, to create F_PHSP, the fluence matrix at the height of the scored phase-space. The photon beam has traveled through air only, and so the matrices are related in one of two ways, the choice between them depending upon the validity of the assumption of geometric divergence between the SMLCD and the SSD:

1. If the assumption is valid then there is a simple analytical relationship

F_PHSP(D·x, D·y) = F_EPID(x, y), ∀(x, y) ∈ D(FS)

D = SMLCD / SSD

FS is the Field-Size, defined for the given plan, projected to z_EPID. D(FS) is a subset of ℜ² interpreted as the set of Cartesian coordinates encompassed by the given FS and with some specified offset from the origin (0, 0).
From chapter 3.2 we know that this offset can be taken as (0, 0).

2. If the assumption is invalid, then the way to relate F_EPID and F_PHSP is to assume a straight trajectory for each particle, defined by the particle's direction cosines, from the height defined by the SMLCD (z_SMLCD) to the height defined by the SSD (z_SSD), and relate the elements of the fluence matrices upon the assumption that each particle travels such a path from z_SMLCD to z_SSD, i.e.,

F_PHSP(x_i, y_i) = F_EPID(x, y)

x_i = x − Δl · U

y_i = y − Δl · V

Δl = Δz / W

Δz = z_EPID − z_PHSP

W = √(1 − U² − V²)

U, V are the x, y direction cosines, respectively, and are uniquely provided for each particle in the phase-space created at z_PHSP by the Monte Carlo particle transport engine EGSnrc [17]. As with relation 1, (x, y) ∈ D(FS).

Relationship 1, that of simple geometric divergence of the beam, not only seems like a reasonable assumption, but is used elsewhere [43]. To provide justification that this assumption is valid within air, two beam images were captured via EPID, both defined for a field size of 10 cm at SAD 100 cm, with 200 MU delivered for each beam. Image one was captured at SSD 100 cm while image two was captured at SSD 160 cm. Next, image two was geometrically projected back to SSD 100 cm and, for comparison, both images at SSD 100 cm were renormalized by 80% of beam width. The close match of the penumbras (difference of < 0.2 pixels, or 0.0784 mm, at FWHM) in Fig. 4.1 indicates that the assumption that each particle's true path can be replaced with a fan-line (ray parallel to beam divergence) at that point is not unreasonable. As well, Fig. 4.2 shows a gamma-dose analysis for the two images at SSD 100 cm, with %[Γ < 1] = 95.2.

Figure 4.1: Comparison of crossline (along long axis) and inline (along short axis) for images taken at different SSDs
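Both candidate mappings between the two planes are simple enough to state in code. The sketch below uses our own naming; the actual SMLCD value is confidential, so the numbers in the example call are hypothetical.

```python
import math

def project_geometric(x, y, smlcd, ssd):
    """Relationship 1: pure geometric divergence.
    Maps an EPID-plane coordinate (x, y) to the phase-space plane by
    scaling with D = SMLCD / SSD."""
    d = smlcd / ssd
    return d * x, d * y

def project_along_ray(x, y, u, v, z_epid, z_phsp):
    """Relationship 2: straight-line back-projection along the particle's
    direction cosines (u, v) from the EPID plane to the phase-space plane."""
    w = math.sqrt(1.0 - u * u - v * v)   # z direction cosine
    dl = (z_epid - z_phsp) / w           # path-length scale between planes
    return x - dl * u, y - dl * v

# Hypothetical geometry: a point 10 cm off-axis at SSD 100 cm maps to
# 5 cm off-axis at an assumed SMLCD of 50 cm under relationship 1.
# project_geometric(10.0, 0.0, 50.0, 100.0) -> (5.0, 0.0)
```

For a particle travelling parallel to the central axis (u = v = 0), relationship 2 leaves the coordinate unchanged; the two mappings differ off-axis, which is exactly what the penumbra comparison above probes.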
Figure 4.2: Gamma-Dose Analysis for the same two images, with > 95% of pixels having Γ < 1

4.2 Modulating the Phase-Space Based on the Fluence at the MLC

Having created the matrix F_PHSP, we would like to use this information to run a Monte Carlo simulation through the phantom. Two steps are required for this process. First, we must simulate radiation transport through the Varian iX Clinac, from the source down to the position where the MLC would be located, and score the phase-space at this height, z_PHSP. Let us call this the original phase-space, PHSP_O. For this, and all other Monte Carlo calculations, we used the particle transport engine EGSnrc (version 0.9) [17]. In particular, the clinac was simulated via BEAMnrc, a user-code created to extend the functionality of EGSnrc [32]. BEAMnrc essentially simplifies the process of simulating a clinac into the ordering of predefined components and the assignment of appropriate dimensions, which in this case were provided, in confidence, by the manufacturer, Varian. EGSnrc and BEAMnrc are freely available, via the Internet, from the National Research Council of Canada.

Now that we have created PHSP_O, the phase-space file with the original linac configuration, we need to incorporate the information captured via the EPID, which indicates how the photon beam was modulated by the MLC. This will give us PHSP_M, the modulated phase-space file. This incorporation is done by adjusting the statistical weights (WT) of the particles in the phase-space file, because the statistical weights are a measure of the relative importance of each particle to the simulation [17]; a particle with a large WT will have more interactions than a particle with a smaller WT, and so its influence on the overall dose distribution will be greater. Let us present a slightly more detailed example.
Suppose we have a region of the EPID dose image with a large value. This will lead to the corresponding elements of the fluence matrix F_EPID, and thus also of F_PHSP, having large values. Using this information to appropriately adjust the statistical weights of the particles within the phase-space file will lead to variations in dose deposition at that location, allowing us to recreate the measured high-dose region via Monte Carlo techniques.

To adjust the statistical weights is to manipulate the phase-space file. Aside from an initial header, which contains information pertaining to the entire simulation, each 28-byte entry of the phase-space file (32 bytes if zlast is included) consists of the information required to uniquely specify a single particle of the simulation as it crossed the scoring plane (e.g., type of particle, energy, statistical weight, etc.) [32]. Having created this file we now proceed through it, 28 bytes at a time, extracting the spatial coordinates (x_i, y_i) for each particle. The spatial discretization of F_PHSP, which follows from the natural spatial discretization of 0.0392 cm² per element of F_EPID (chapter 4.1, Appendix A), means that there will be a one-to-many relationship between the elements of F_PHSP and the statistical weights of particles, i.e. the fluence value within a given pixel (i, j) of F_PHSP adjusts the statistical weight, WT, of all particles within a corresponding subset of ℜ². For photons, this can be written mathematically as

WT(x_i, y_i) = F_PHSP(x_i, y_i), ∀ WT ∈ PhaseSpace

However, the statistical weight of all particles other than photons, having much shorter path lengths [15], cannot be related to the elements of the fluence matrix at the EPID surface. Thus the statistical weights of these particles cannot be related to the elements of the projected fluence matrix.
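The per-particle re-weighting can be sketched as below. We assume a BEAMnrc MODE0-style record of seven little-endian 4-byte fields (LATCH, E, X, Y, U, V, WT) [32]; the field order shown and the `is_photon` predicate are illustrative simplifications, since in real files the particle type is encoded in the LATCH bits and the file carries a header (and possibly zlast) that this sketch ignores.

```python
import struct

# Assumed 28-byte MODE0-style record: LATCH, E, X, Y, U, V, WT.
RECORD = struct.Struct("<iffffff")

def modulate_record(raw, fluence_lookup, is_photon):
    """Return a re-weighted copy of one 28-byte phase-space record.

    fluence_lookup(x, y) -> the F_PHSP element covering (x, y)
    is_photon(latch)     -> True for photons (in real files this is
                            encoded in the LATCH bits)
    Photons take the corresponding fluence element as their new
    statistical weight; all other particles are zeroed out, as the
    text notes.
    """
    latch, e, x, y, u, v, wt = RECORD.unpack(raw)
    wt = fluence_lookup(x, y) if is_photon(latch) else 0.0
    return RECORD.pack(latch, e, x, y, u, v, wt)
```

A driver would read the file header, then stream through it one record at a time, writing the modulated records to PHSP_M.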
Thus, for these particles the statistical weight is set to 0, i.e., these particles will not be run through the Monte Carlo simulation. This must be remembered when analyzing MCEF-calculated dose distributions, because dose near the phantom surface will be missing electron/positron contributions.

Chapter 5

STEP 3 – Dose Calculation Based on Modulated Photon Particle Phase-Space

5.1 Monte Carlo Calculation with Modulated Phase-Space

Now that the photon particles have been modified to incorporate the geometric effects of traveling through the MLC, we can simulate particle transit from z_PHSP through whatever phantom is desired, so long as the phantom can be simulated via the Monte Carlo software. For this study we used DOSXYZnrc [51], an add-on to EGSnrc for the purpose of tracking energy deposition within various media. DOSXYZnrc-generated cubic water phantoms suspended in air were used. As well, the EGSnrc user-code CTCREATE generated a voxelated phantom based on Computed Tomography (CT) scans of the prostate region of an anthropomorphic phantom. For all phantoms the voxelation was 0.125 cm³.

Due to the renormalization of the EPID image mentioned in chapter 3.4, and depending upon the type of plan being implemented, the distribution of statistical weights amongst the particles is unlikely to be uniform. A uniform distribution for the statistical weights of those particles whose direction cosines direct them toward the FOI (Field of Interest – in the present context, the phantom region intersecting the beam) allows for more efficient Monte Carlo simulation [36]. To recover the efficiency we may have reduced, DOSXYZnrc's variance reduction techniques (VRTs) of Russian Roulette and Photon Splitting were implemented [17, 18, 36, 51]. All simulations were run so that the maximum error for each voxel in the FOI was < 4% of the dose therein.
Dose denoising, discussed next, reduced this error further by at least a factor of 2 [16].

5.2 Post-Processing of the Calculated Dose Distribution

While not a part of the MCEF procedure proper, in the sense that we have, by this point, calculated the required dose distribution, there is some post-processing that we apply to the data, and so we mention it here. First, the dose distribution is converted from dose-to-medium (D_medium) to dose-to-water (D_water) [16]. Secondly, the dose distribution is smoothed via a de-noising technique described by I. Kawrakow [16]. If required, the data is suitably interpolated at this stage. While the dose-conversion procedure may be necessary, depending on how much the anatomy in question differs from soft tissue, the smoothing is employed purely as a means of more quickly converging, albeit artificially, to the solution. The same result would be achieved via longer simulation time [16].

Given a phantom comprised of various materials and a phase-space file for the radiation beam, DOSXYZnrc will calculate the dose distribution within the phantom. However, in order to properly compare the Monte Carlo-derived dose distributions with film measurements it is necessary to convert this calculated distribution from D_medium to D_water. The dose-conversion method we used was developed and tested by Siebers et al [38] for 6 MV and 18 MV photon beams. Using Bragg-Gray cavity theory [15], the absorbed dose to water is related to the absorbed dose to medium by D_water = S_water,medium · D_medium, with S_water,medium the unrestricted water-to-medium mass collision stopping power ratio averaged over the energy spectra of the treatment beam's primary electrons (not including δ-rays). Monte Carlo techniques were employed to obtain the knowledge of electron fluence within various media which is needed for calculating the corresponding stopping power ratio.
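Applied voxel-by-voxel, a Siebers-style conversion is a single multiplication per medium. The sketch below is ours; the stopping-power ratios shown are placeholder numbers, not the values tabulated by Siebers et al.

```python
import numpy as np

# Placeholder unrestricted water-to-medium mass collision stopping-power
# ratios; real values come from Siebers et al [38], per beam energy.
S_WATER_MEDIUM = {"water": 1.000, "soft_tissue": 1.010, "bone": 1.110}

def dose_to_water(d_medium, medium_ids, id_to_name):
    """Convert a dose-to-medium grid to dose-to-water.

    d_medium   : dose grid (Gy), any shape
    medium_ids : integer array, same shape, labelling each voxel's medium
    id_to_name : maps integer label -> medium name in S_WATER_MEDIUM
    """
    d_water = np.empty_like(d_medium, dtype=float)
    for mid, name in id_to_name.items():
        mask = medium_ids == mid
        d_water[mask] = d_medium[mask] * S_WATER_MEDIUM[name]
    return d_water
```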
Based on their studies, Siebers et al concluded that for a photon beam of a given energy, a single correction factor can be used throughout the field for each particular medium. They also conclude that, at most, an error of ∼1.3% is introduced into the dose distribution by this conversion [9, 38, 39].

While Monte Carlo algorithms theoretically provide the most accurate method for calculating dose distributions, they converge to the solution probabilistically and, as such, statistical noise in the solution is an inherent issue with these techniques. One way to overcome this issue is to run the simulation for a longer period of time, the Central Limit Theorem [34] assuring us that for each datum the variance between the calculated and underlying "actual" value will decrease if we do so. However, the longer a Monte Carlo simulation must be run, the less practical it becomes to use this method clinically. Another way to minimize the issue of statistical noise, and the one which we have employed, is to denoise the data via smoothing.

Data smoothing is already routinely applied in a variety of disciplines that deal with noisy signals, such as imaging, and at its core consists of nothing more than replacing each datum with a weighted sum of the surrounding data. The problems with smoothing are (1) determining which weighting to use so as to more closely approximate the "actual" value, and (2) inadvertently introducing bias (i.e., a consistent difference between a set of smoothed and expected values), both of which may lead to an increase in the variance of each datum. As previously mentioned, the denoising method we used was developed by I. Kawrakow [16] and attempts to adjust the dose distribution so as to approach the local uncertainty minimum in each voxel, i.e., it aims to maximize the reduction of statistical noise while at the same time reducing the risk of introducing systematic bias.
Kawrakow's procedure can be considered the 3D extension of Savitzky and Golay's original work on smoothing filters [35, 40], applied specifically to denoising dose distributions. This method is also known as the locally adaptive Savitzky-Golay curve-fitting (LASG) technique. In LASG, the weighting for a given point is determined by fitting a second-order polynomial over 3 data points on each side of the given point and minimizing the chi-squared value per degree of freedom (χ²) for that point. If χ² < 1 then the dose of the central point is replaced with the value provided by the polynomial. Otherwise, the process is repeated for a second-order polynomial over 2 data points on each side of the given point. Should this also not satisfy the χ² < 1 condition then no de-noising is applied to that point.

Aside from LASG, various other denoising techniques have also been developed and tested. These include 3D wavelet threshold denoising with sub-band adaptive thresholding, content adaptive mean-median-hybrid (CAMH) filtering, anisotropic diffusion (AD), and an iterative reduction of noise (IRON) method formulated as an optimization problem. These techniques were compared by Naqa et al using various criteria and test geometries [25]. The LASG method had the best mean-square-error reduction for three of the four test geometries and performed best for the 'Van Dyk' criterion* of less than 2 mm in high dose-gradient regions [7]. Coupled with its ease of implementation, the LASG method was considered the best denoising technique to use. We created a Matlab script to apply this method to our Monte Carlo data.
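A one-dimensional sketch of the LASG acceptance rule (our simplification of Kawrakow's 3D method, in Python rather than our Matlab script): fit a quadratic over ±3 points, accept the fitted value if the reduced chi-squared is below 1, otherwise retry with ±2 points, otherwise leave the point unsmoothed.

```python
import numpy as np

def lasg_1d(dose, sigma):
    """Locally adaptive Savitzky-Golay denoising, 1D sketch.

    dose  : noisy dose samples (1D array)
    sigma : per-point statistical uncertainty (same shape)
    """
    out = dose.copy()
    n = len(dose)
    for i in range(n):
        for half in (3, 2):                       # try +/-3 points, then +/-2
            lo, hi = i - half, i + half + 1
            if lo < 0 or hi > n:
                continue                          # window falls off the array
            t = np.arange(lo, hi) - i             # local abscissa, centred at i
            coeffs = np.polyfit(t, dose[lo:hi], 2)
            fit = np.polyval(coeffs, t)
            dof = len(t) - 3                      # 3 fitted parameters
            chi2 = np.sum(((dose[lo:hi] - fit) / sigma[lo:hi]) ** 2) / dof
            if chi2 < 1.0:
                out[i] = np.polyval(coeffs, 0.0)  # accept the fitted value
                break                             # else: keep the raw dose
    return out
```

On noiseless quadratic data the fit is exact, so every accepted point is returned unchanged; on noisy data, accepted points are replaced by a local weighted average while points failing both χ² tests keep their raw values.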
*This is a pass/fail criterion of either agreement within 2% with the reference image in low dose-gradient regions, or a distance-to-agreement within 2 mm.

Chapter 6

Analysis

6.1 Dose Distributions Obtained via TPS and Film, for Comparison

The treatment planning system (TPS) was the Eclipse™ planning system, platform 8.1 (Varian Medical Systems, Inc., Palo Alto, California), which utilizes the Pencil-Beam Convolution (PBC) Algorithm [42, 43] and, more recently, the Analytic Anisotropic Algorithm (AAA) [43–46]. The PBC algorithm and AAA* calculate dose-to-water [43], and so, unlike the Monte Carlo-based dose distributions, they require no processing before comparison against film measurements.

Briefly, the PBC algorithm calculates the dose in water for a given plan via convolution of kernels, special functions which, in this instance, describe the dose distribution of very thin elementary beams entering water-equivalent material. These kernels are derived from measured beam data, namely central-axis depth doses, beam off-axis ratios, and phantom scatter factors. Correction techniques, such as contour correction via the modified Batho method, are applied afterwards to account for the curved patient surface and any inhomogeneities within [42, 43]. The Analytical Anisotropic Algorithm represents the beam using a multiple-source model, incorporating a primary source, extra-focal source, electron-contamination source, and wedge-scatter source, as needed, and models the beam using physical parameters which describe the phase-space of the particles comprising the treatment beam. These parameters provide a description of the beam above the

*There is some uncertainty regarding whether it is truly dose-to-water being calculated by this algorithm [41]. However, following Sterpin et al [41] we will assume AAA calculates D_water, not D_medium.
patient; a patient scatter model is incorporated when depositing dose inside the patient. AAA is faster than the PBC algorithm and provides more accurate dose in regions of inhomogeneity [43, 46].

Film measurements were conducted using Kodak EDR2® (Extended Dose Range) film (Eastman Kodak Co.; Rochester, NY, U.S.A.) as well as Gafchromic® EBT film (International Specialty Products; Wayne, NJ, U.S.A.). TPS-vs-film and Monte Carlo-vs-film comparisons were made using the Matlab® (The MathWorks; Natick, MA, U.S.A.) software package. In particular, we used DoseLab (version 4.11), a set of Matlab-based programs for quantitative comparison of measured and computed radiation dose distributions. DoseLab was started by Nathan Childress, PhD, and is freely available on the Internet*.

Unlike Plans 1-4, in which measurements were conducted with the film perpendicular to the beam, and so the Kodak EDR2® film could easily be used, for testing of Plan 5 the film must be placed into the anthropomorphic phantom parallel to the beam. Due to Kodak EDR2® film's energy dependence [55] we decided to use Gafchromic® EBT film, which does not exhibit strong energy dependence [28]. However, there is a downside to using Gafchromic film: calibration is more time consuming, since the optical-density response may not be linear [28]. Our calibration curve, for converting image intensity to dose, is presented in Fig. 6.2(a). This curve was obtained by irradiating pieces of gafchromic film with between 0 and 2050 cGy and measuring the associated pixel value of the film, averaged across the central ∼1 cm² region. Linear interpolation was used to obtain the curve fitting the measured values.
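Converting a scanned pixel value to dose via a piecewise-linear calibration curve amounts to table interpolation. The sketch below uses made-up calibration points, not our measured curve, and maps a net pixel value (monotonically increasing with dose, for simplicity) to dose in cGy.

```python
import numpy as np

# Hypothetical calibration points: net pixel value vs delivered dose (cGy).
# The real curve was measured from film pieces irradiated with 0-2050 cGy.
CAL_PIXEL = np.array([0.0, 5000.0, 12000.0, 20000.0, 26000.0])
CAL_DOSE = np.array([0.0, 150.0, 500.0, 1200.0, 2050.0])

def pixel_to_dose(pixel_values):
    """Piecewise-linear interpolation of the film calibration curve."""
    return np.interp(pixel_values, CAL_PIXEL, CAL_DOSE)
```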
Also, in the course of digitizing the film, flatbed scanners introduce an off-axis- and optical-density-dependent effect across the short axis, which must be corrected† [24]. We used the flatbed Epson Perfection v700 Photo™ scanner to digitize our gafchromic film, which required such a correction. This scanner has a light-bar running the length of the scanner's short axis,

*www.doselab.com
†Based on discussion with Stanislaw Szpala, Medical Physicist, Fraser Valley Cancer Centre

and scans along the length of the scanner's long axis. In brief, this correction consists of irradiating strips of gafchromic film with known dose, scanning the films (noting the position of the centre of the scanner's short axis relative to the length of the film strip) and fitting a parabola (or other appropriate polynomial) to the pixel values along the length of the film. This fit characterizes the scanner effect as a function of pixel value and distance from the centre of the scanner's short axis; the inverse of the parabola is used to correct that same central pixel value wherever it arises in the scanned measurement film. Matlab's polyfit function was used to fit the parabolas (see Fig. 6.2(b)) and spline was used to determine correction factors for pixel values not explicitly scanned.

Gafchromic film was cut to the contours of the phantom for two planes, the isoplane and the secondary plane, defined as the z = 0.0 cm plane and the z = 2.5 cm plane, respectively. Also, the statement above about not being able to use EDR2 film requires an addendum: we were able to follow the lead-filtration method of Yeo et al [55] to minimize the energy dependence of the EDR2 film. This was done because we wished further validation that our scanner-corrected gafchromic film measurement was accurate.
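The parabola-based scanner correction just described can be sketched as follows: fit a parabola to a uniformly irradiated strip's profile across the short axis, normalize it to its central value, and divide scanned pixel values by the resulting correction factor at their off-axis position. The function names and the synthetic response below are illustrative (our Matlab workflow used polyfit and spline).

```python
import numpy as np

def fit_scanner_response(positions, pixel_values):
    """Fit a parabola to the profile of a uniformly irradiated strip.

    positions : distances (cm) from the centre of the scanner's short axis
    Returns a callable response normalized to 1.0 at the centre."""
    coeffs = np.polyfit(positions, pixel_values, 2)
    centre = np.polyval(coeffs, 0.0)
    return lambda pos: np.polyval(coeffs, pos) / centre

def correct_pixel(pixel_value, position, response):
    """Invert the scanner effect at an off-axis position."""
    return pixel_value / response(position)
```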
The lead filtration works by predominantly filtering out the low-energy photons, which increase as a fraction of the total number of photons in the beam, and to which the EDR2 film is especially sensitive. This filtering technique requires sandwiching lead plates and water-equivalent phantom slabs/bolus around the EDR2 film, and this adds extra material to the phantom, changing the geometry of our "patient". Such a change can be justified in this case because the prostate region has little heterogeneity in the first place, so the geometry is not being altered as much as one might first think. Taking into account the difficulty of correctly setting up the anthropomorphic phantom, it was not felt worthwhile to employ the lead-filtering technique again for the z = 2.5 cm plane. The close agreement of the Gamma-Dose, for Monte Carlo-vs-Gafchromic and Monte Carlo-vs-EDR2, at z = 0.0 cm indicates that we can trust the accuracy of our scanner-corrected gafchromic measurements. Table 6.2 presents the results of the Gamma-Analysis for both planes, while Fig. 6.6 showcases the measured, calculated, and simulated dose distributions for the isoplane and Fig. 6.7 illustrates the associated Gamma-Dose distributions. It is to be noted that at this time during the project the Abbotsford and Fraser Valley Cancer Centres began commissioning the Analytic Anisotropic Algorithm for the Eclipse TPS; thus analysis using this algorithm is also presented.

6.2 Gamma-Dose Index

Analysis via the Gamma-Dose Index is a means of dose-distribution comparison which combines both the dose difference (DD) that can arise between spatially equivalent points of two plans, and the distance to agreement (DTA) required to numerically equate a spatial location in one plan with a location in another plan. In essence the index acts as a metric measuring, pixel by pixel, the 'nearness' of the dose in one plan relative to another plan.
The technique was created by Low et al. [21] because use of either the DD or DTA criterion alone was insufficient, particularly for IMRT plans, as these criteria are most useful in complementary regimes [7]. By this we mean that the DTA criterion provides information useful for comparative purposes in regions of high dose gradients, but not in low dose-gradient regions. The DD criterion, meanwhile, is most helpful in low dose-gradient regions; it is too sensitive to inter-distribution spatial discrepancies in high dose-gradient regions to be of much use in comparing such regions of a dose distribution against measured data, since unintended motion is generally unavoidable in a clinical setting.

The two distributions to be compared are known as the reference and evaluation distributions. Low et al. [21] suggest that the evaluation distribution have a spatial resolution at least as high as the reference distribution, so as to minimize evaluation error due to data discretization. Unless stated otherwise, our film measurements always provide the evaluation distribution, and the various calculated distributions are used for reference. To reiterate, and as illustrated in Fig. 6.2, the Gamma-Dose Index, Γ, can be thought of as a type of distance value for every dose point in the reference distribution from the evaluation distribution [3]. If we let D_R(y⃗) and D_E(x⃗)

Figure 6.1: Gafchromic® EBT film and Epson Perfection v700 Photo scanner related data. (a) Calibration curve for the Gafchromic® EBT film, fit using linear interpolation. (b) Measured profiles (blue) for gafchromic film and associated parabolic fits (red).
(c) Normalized parabolae for scanner correction.

denote the reference and evaluation dose distributions, respectively, with y⃗, x⃗ ∈ ℝⁿ and n a positive integer, then the measure assigned to each point in the reference distribution is

\[
\Gamma(\vec{y},\Delta D,\Delta d) \;=\; \min_{\vec{x}} \sqrt{\left(\frac{\lvert\vec{y}-\vec{x}\rvert}{\Delta d}\right)^{2} + \left(\frac{D_R(\vec{y})-D_E(\vec{x})}{\Delta D}\right)^{2}} \tag{6.1}
\]

where ΔD is the numerical value for the DD criterion and Δd is the numerical value for the DTA criterion. In radiation-therapy applications it is common to take a DTA criterion of 3 mm and a DD criterion of 3% of the maximal dose [21], and so we have used these values in our analysis. Dose equivalence at a given point y⃗ is assumed if Γ(y⃗, ΔD, Δd) < 1. The set of Gamma-Dose indices between two plans can be referred to simply as the Gamma-Dose. For the full, and in our case 2D (n = 2), distribution the final analysis consists of evaluating %[Γ < 1], the percentage of the Gamma-Dose with a value less than one. The higher the percentage, the more assurance we have that the two distributions are equivalent. In all figures presenting the Gamma-Dose between two plans, pixels shaded grey or white have Γ < 1, while pixels coloured orange to red have increasingly larger Γ values. For all relevant figures in this paper, we present Γ ∈ [1, 3] in this manner.

Figure 6.2: One-dimensional (n = 1) illustration of the gamma index as a metric from the reference dose distribution. The radius indicated by the green arrow is the gamma index of (y⃗, D) with respect to the evaluation distribution d = D_E(x⃗), i.e. the closest 'distance' of the dose point (y⃗, D) to any point defining the evaluation distribution d = D_E(x⃗) (represented as a continuous and smooth maroon-coloured curve).

6.3 Results

Rather than a Gamma-Dose analysis for the simple case of the original 10×10 cm² open field, it was felt that it would be more illustrative if calculated and measured dose profiles were presented.
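To make Eq. (6.1) concrete, a brute-force evaluation of the 2D Gamma-Dose Index and its %[Γ < 1] pass rate can be sketched as follows (the toy dose plane and pixel spacing are illustrative only, with both distributions on the same grid for simplicity; production implementations use faster searches, e.g. distance transforms [3]):

```python
import numpy as np

def gamma_pass_rate(ref, eval_, pixel_mm, dta_mm=3.0, dd_percent=3.0):
    """Brute-force 2D Gamma-Dose Index of Eq. (6.1): for each reference
    point, minimize the combined distance/dose-difference metric over all
    evaluation points, then report %[Gamma < 1]."""
    dd = dd_percent / 100.0 * ref.max()      # DD criterion as absolute dose
    ys, xs = np.indices(ref.shape)
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1) * pixel_mm
    eval_flat = eval_.ravel()
    gamma = np.empty(ref.size)
    for i, (point, d_ref) in enumerate(zip(coords, ref.ravel())):
        dist2 = ((coords - point) ** 2).sum(axis=1) / dta_mm**2
        dose2 = (d_ref - eval_flat) ** 2 / dd**2
        gamma[i] = np.sqrt((dist2 + dose2).min())
    return 100.0 * np.mean(gamma < 1.0)

# Identical distributions should pass everywhere.
ref = np.outer(np.hanning(20), np.hanning(20)) * 100.0  # toy dose plane
pass_rate = gamma_pass_rate(ref, ref.copy(), pixel_mm=1.0)
```

The min over x⃗ is what makes the index forgiving of small spatial shifts in high-gradient regions while remaining dose-sensitive in flat regions.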
For the measured profiles, an ion chamber was used, in water, for the same linac configuration as for EPID image capture. It should be noted that this measured data was previously obtained as part of regular QA and is for a different linac at the Abbotsford Cancer Centre. However, other QA checks∗ indicate that the outputs of the machine used for our simulations and of that used in the ion chamber measurements are within 2% of each other. These comparisons are presented in Fig. 6.3.

Next we have the results of the four test plans with MLC leaves in-beam for various configurations (Fig. 3.1). For these four cases the Monte Carlo calculation was run through a 30 cm³ water phantom, with 0.125 cm³ voxelation. The phantom surface was at SSD 95 cm, and the dose distribution analyzed was at 5 cm depth. Comparison for these plans was done against the TPS's PBC algorithm, except for Plan 4, for which the TPS's PBC algorithm failed to calculate a distribution, due to errors in extrapolating the necessary factors for such a highly interdigitated field. Film measurements were done using Kodak EDR2 film, with a similar setup as for the simulated phantom. The dose profiles (Fig. 6.4) were taken at 2.5 cm from the isocentre, crossing through the field where it would be modulated by the in-beam leaves. Visually, the TPS-based dose profile for Plan 2 (Fig. 6.4(b)) matches significantly better with film than that for Plan 3 (Fig. 6.4(c)). However, the change in agreement between the Monte Carlo-based and film-based profiles, as the beam modulation is made more complex via narrowing of the gap, is minimal. The Gamma-Dose results in Table 6.1, shown visually in Fig. 6.5, provide corroboration. The Gamma-Dose Index was applied to the full 10×10 cm² field, plus a 1 cm margin on all sides. The results indicate that our MCEF Procedure is able to provide a more accurate dose distribution for the MLC configuration.
∗As discussed with Bilal Shahine, Medical Physicist, Abbotsford Cancer Centre

Figure 6.3: Various dose comparisons between MCEF-simulated (blue) and ion-chamber-measured (red) dose distributions for a 10×10 cm² open field. (a) PDD comparison. (b) Crossline comparison. (c) Inline comparison.

For the tests in the water phantom, there are no inhomogeneities which would add errors to the dose calculated by the PBC algorithm. Yet the dose calculated with PBC from the TPS-based fluence matches more poorly with film than the dose calculated using PHSP_M. We conclude that the projected fluence matrix derived from the captured EPID image is a more accurate representation of the real fluence below the MLC.

For Plan 5 gafchromic film was used, as previously mentioned. The Gamma-Dose Index was applied over the entire field, beginning 1 cm below the surface of the phantom and going down to a depth of 13 cm. Again, a 1 cm margin on either side of the field, defined at the 13 cm depth, was also included. The numerical results of the application of the Gamma-Dose Index are presented in Table 6.2. The measured, calculated, and simulated dose distributions for the isoplane are shown in Fig. 6.6; the associated Gamma-Dose distributions are in Fig. 6.7.

For this setup it is important to note that comparison of MC results with PBC alone would be difficult, as the PBC algorithm does not handle inhomogeneities well, and there are some inhomogeneities present in the prostate region. However, the results of AAA are not significantly improved in comparison with the PBC and MC results, and so our conclusion regarding
the accuracy of the TPS-based fluence versus the EPID-based fluence is still valid.

Table 6.1: Percentage of the Gamma-Dose with a value less than one, for the water phantom. Gamma-Dose Index criteria: DTA 3 mm, DD 3%.

Figure 6.4: Comparison of dose profiles from measurement, TPS, and Monte Carlo simulation, for Plans 1-4. (a) Plan 1. (b) Plan 2. (c) Plan 3. (d) Plan 4.

Figure 6.5: The Gamma-Dose comparisons for the initial test plans. (a) TPS (PBC) vs. film, Plan 1. (b) MC vs. film, Plan 1. (c) TPS (PBC) vs. film, Plan 2. (d) MC vs. film, Plan 2. (e) TPS (PBC) vs. film, Plan 3. (f) MC vs. film, Plan 3. (g) MC vs. film, Plan 4.

Table 6.2: Percentage of the Gamma-Dose with a value less than one, for the prostate field (Plan 5) delivered on an anthropomorphic phantom. Gamma-Dose Index criteria: DTA 3 mm, DD 3%.

Figure 6.6: 2D dose distributions, attained various ways, at the isoplane, for the dynamic-MLC prostate field (Plan 5) delivered on an anthropomorphic phantom. (a) Gafchromic film. (b) Monte Carlo simulation. (c) Analytic Anisotropic Algorithm. (d) Pencil-Beam Convolution algorithm.

Figure 6.7: Gamma-Dose distributions, at the isoplane, for the prostate field (Plan 5) delivered on an anthropomorphic phantom. (a) MC vs. Gafchromic. (b) PBC vs. Gafchromic. (c) AAA vs. Gafchromic. (d) MC vs. EDR2.

Chapter 7

Conclusion

To summarize, our MCEF procedure for obtaining an accurate 3D, in-vivo dose distribution consists of four steps, all except the third of which we have looked at in detail. The four steps are:

1. Calculate fluence at EPID height
2. Modulate phase-space file at MLC height
3. Obtain Cone-Beam CT images of patient
4.
Run forward Monte Carlo calculation on the phantom (CT- or CBCT-obtained)

From the previous section we have seen that certain assumptions (positioning accuracy, dose and dose-rate independence) must be made regarding the electromechanical behaviour of the EPID, as well as regarding beam divergence. These assumptions have been verified to be valid. Following this, several static-MLC test cases were validated in a water phantom and compared with 2D film measurements via Gamma-Dose analysis. The results of these cases proved promising, especially as compared with the Gamma-Dose analysis for the Eclipse TPS output, via PBC, versus the same film measurements. Certainly, as Plan 4 shows, there are currently limitations on how "extreme"∗ the plans can be for this method to work effectively, but at the same time it is worth mentioning that, for Plan 4 at least, the PBC algorithm implemented in the Eclipse TPS had even more difficulty in calculating a distribution.

∗Extreme in the sense of 'to what degree does the plan exhibit traits commonly associated with IMRT plans', such as high spatial variation in the dose

Following the static-MLC scenario, we used a dynamic-MLC plan (Plan 5) to represent a simplified IMRT treatment. This treatment plan was run through an anthropomorphic prostate phantom. Film measurements were taken for two planes, using both Kodak EDR2 film and Gafchromic EBT film. Again, relative to the results of the TPS's PBC algorithm, as well as AAA, good agreement was obtained.

As mentioned before, due to their much shorter path lengths there is no clear relationship between the elements of the fluence matrix and the charged particles of the phase-space, and so these particles are ignored in the phase-space file during the simulation. As such, we expect dose deposition near the skin to be inaccurately calculated via the MCEF procedure.
While this issue was not investigated rigorously, and despite the close match between the calculated and measured distributions of Fig. 6.3 suggesting that the effect may not be important, use of the MCEF procedure near the skin should be avoided until a more rigorous investigation indicates otherwise.

7.0.1 Future Work

Despite the promising Gamma-Dose analysis, there is still work that needs to be done on developing the procedure. Three issues in particular come to mind:

∙ Adjusting the RL algorithm parameters
∙ Incorporating EPID spectral-response behaviour
∙ Incorporating patient Cone-Beam CT images

The first of these issues we have already discussed in some detail. At the moment there is no objective function, or any other means, which would allow us to determine which set of parameters gives optimal performance for the MCEF Procedure. It may be that it is not possible to have a universal set of parameters for use with all plans. Rather, it may be the case that plans are categorized into various levels of "extremeness"∗, each of which requires the use of a different set of parameters. Perhaps the parameters need to be adjusted on a pixel-by-pixel basis in some manner. This is currently speculation and requires attention.

∗Extremeness in the sense of 'to what degree does the plan exhibit traits commonly associated with IMRT plans', such as high spatial variation in the dose

The second issue, of spectral response, arises due to the EPID's heightened sensitivity to low-energy photons, which is a result of the high atomic number of the phosphor layer (Appendix A). For an IMRT plan, the proportion of low-energy photons within the treatment beam varies with each spatial location, and depends on the ratio of MUs which were transmitted through an MLC leaf to that location, relative to the total MUs delivered there (let us call this ratio T, the transmission factor).
This is important because the calibration method employed to obtain the EPID response kernel (Appendix A) is based on images with large uniform spatial extent, relative to many IMRT plans, and which were created via relatively uniform MLC leaf motion; the EPID response for this scenario, subsequently used to characterize the intra-EPID scatter, may not be indicative of the types of MLC-related attenuation which clinical IMRT beams may undergo, and so the fluence derived for these plans based on the current blur kernel may not be accurate. This could lead to an inaccurately calculated in-vivo dose. As an illustrative, albeit extreme, example, Li et al. [20] used Monte Carlo to show that using an EPID's response to an open-field beam (T = 0 throughout the beam) would result in up to ∼30% overestimation of the EPID response for an MLC-blocked beam (T = 1 throughout the beam).

While copper filters have been shown to reduce the difference in EPID sensitivity to open beams versus beams attenuated with physical compensators [19], such a filter does not appear to have a similarly significant effect on the difference in EPID sensitivity to open beams versus MLC-transmitted beams [12]. Vial et al. have investigated this matter [50] in the context of improving the match between the off-axis TPS-calculated portal dose image prediction (PDIP_TPS) and the actual EPID-captured image. We believe their method can be adapted to correct an EPID-captured image to match an ion chamber, film, or other measurement-based dose distribution. As for how we seem to have "gotten away" without such a correction in Plan 5, Vial et al. provide empirical evidence that for T in the range of 0.80-0.95 the EPID and PDIP_TPS profiles agree best (within 3%); we shall have to determine if Plan 5 has T within this range throughout most of the beam's spatial extent.
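The bookkeeping this check would involve can be sketched minimally as follows (the per-pixel MU maps are hypothetical, the band limits are those reported by Vial et al. [50], and the function names are our own):

```python
import numpy as np

def transmission_factor(mu_through_leaf, mu_total, eps=1e-9):
    """Per-pixel transmission factor T: the ratio of MUs transmitted
    through an MLC leaf to a pixel, to the total MUs delivered there."""
    return mu_through_leaf / np.maximum(mu_total, eps)

def fraction_in_band(T, lo=0.80, hi=0.95):
    """Fraction of beam pixels whose T lies in the band for which Vial
    et al. [50] report the best EPID/PDIP agreement."""
    return float(np.mean((T >= lo) & (T <= hi)))

# Hypothetical per-pixel MU maps for a small region of a beam.
mu_total = np.full((4, 4), 100.0)
mu_leaf = np.array([[85.0] * 4, [90.0] * 4, [92.0] * 4, [50.0] * 4])
T = transmission_factor(mu_leaf, mu_total)
in_band = fraction_in_band(T)
```

A beam-wide T map of this kind, derived from the plan's MLC control points, would let us verify directly whether Plan 5 spends most of its spatial extent in the favourable band.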
Finally, we would like to extend this procedure to incorporate Cone-Beam CT (CBCT) images of the patient, taken before treatment. This is similar, but not identical, to the work done by McDermott et al. [23], who also use CBCT images to reconstruct a phantom for in-vivo calculation. However, they use the EPID for transit dosimetry and then assume that the CBCT-based phantom is homogeneous and water-like. On the one hand this requires a more complicated model of dose deposition within the EPID, as McDermott et al. must account for patient-induced scatter and energy-spectrum alteration. On the other hand, their simplification via the assumption of phantom homogeneity can, and does, lead to dose miscalculations, as they ascertain. While there is clearly more work to be done, non-transit dosimetry coupled with a fully heterogeneous phantom, as we have discussed for our MCEF Procedure, should avoid such issues.

Bibliography

[1] P. S. Atwal, B. Shahine, S. Thomas, F. Cao, and R. Ramaseshan. Sci-Thurs PM: Planning-11: An EPID-based Monte Carlo approach to in-vivo dosimetry for intensity-modulated radiation therapy treatments. Med. Phys., 36(9):4320, 2009.

[2] D. S. C. Biggs. Acceleration of iterative image restoration algorithms. Applied Optics, 36(8):1766–1775, 1997.

[3] M. Chen, W. Lu, Q. Chen, K. Ruchala, and G. Olivera. Efficient gamma index calculation using fast Euclidean distance transform. Phys. Med. Biol., 54:2037–2047, 2009.

[4] C. S. Chui, T. LoSasso, and S. Spirou. Dose calculations for photon beams with intensity modulation generated by dynamic jaw or multileaf collimators. Med. Phys., 21:1237–1243, 1994.

[5] K. Chytyk and B. M. C. McCurdy. Comprehensive fluence model for absolute portal dose image prediction. Med. Phys., 36(4):1389–1398, 2009.

[6] J. Deng, T. Pawlicki, Y. Chen, J. Li, S. B. Jiang, and C.-M. Ma. The MLC tongue-and-groove effect on IMRT dose distributions. Phys. Med. Biol., 46:1039–1060, 2001.

[7] J. Van Dyk, R. B. Barnett, J. E. Cygler, and P.
C. Shragge. Commissioning and quality assurance of treatment planning computers. Int. J. Radiat. Oncol. Biol. Phys., 26:261–273, 1993.

[8] P. A. Jansson (Ed.). Deconvolution of Images and Spectra. Academic Press, San Diego, CA, U.S.A.; Chestnut Hill, MA, U.S.A., second edition, 1997.

[9] M. Fippel and F. Nüsslin. Comments on 'Converting absorbed dose to medium to absorbed dose to water for Monte Carlo based photon beam dose calculations'. Med. Phys., 45:L17–L18, 2000.

[10] B. A. Fraass, J. Smathers, and J. Deye. Summary and recommendations of a National Cancer Institute workshop on issues limiting the clinical use of Monte Carlo calculation algorithms for megavoltage external beam radiation therapy. Med. Phys., 30:3206–3216, 2003.

[11] J. M. Galvin, A. R. Smith, and B. Lally. Characterization of a multileaf collimator system. Int. J. Radiat. Oncol. Biol. Phys., 25:181–192, 1993.

[12] P. B. Greer, P. Vial, L. Oliver, and C. Baldock. Experimental investigation of the response of an amorphous silicon EPID to intensity modulated radiotherapy beams. Med. Phys., 34:4389–4398, 2007.

[13] E. J. Hall and A. J. Giaccia. Radiobiology for the Radiologist. Lippincott Williams & Wilkins, Philadelphia, PA, U.S.A., sixth edition, 2006.

[14] H. Jin, Z. Su, H. Chung, T.-S. Suh, J. Palta, and S. Kim. IMRT dose verification using the dose uncertainty prediction model. In World Congress on Medical Physics and Biomedical Engineering 2006, pages 1819–1822, 2006.

[15] H. E. Johns and J. R. Cunningham. The Physics of Radiology. Charles C. Thomas, Springfield, IL, U.S.A., fourth edition, 1983.

[16] I. Kawrakow. On the de-noising of Monte Carlo calculated dose distributions. Phys. Med. Biol., 47:3087–3103, 2002.

[17] I. Kawrakow and D. W. O. Rogers. The EGSnrc code system: Monte Carlo simulation of electron and photon transport. NRCC Report PIRS-701, 2006.

[18] I. Kawrakow and B. R. B. Walters.
Efficient photon beam dose calculations using DOSXYZnrc with BEAMnrc. Med. Phys., 33(8):3046–3056, 2006.

[19] C. Kirby and R. Sloboda. Consequences of the spectral response of an a-Si EPID and implications for dosimetric calibration. Med. Phys., 32:2649–2658, 2005.

[20] W. Li, J. V. Siebers, and J. A. Moore. Using fluence separation to account for energy spectra dependence in computing dosimetric a-Si EPID images for IMRT fields. Med. Phys., 33:4468–4480, 2006.

[21] D. A. Low, W. B. Harms, S. Mutic, and J. A. Purdy. A technique for the quantitative evaluation of dose distributions. Med. Phys., 25:656–661, 1998.

[22] L. B. Lucy. An iterative technique for the rectification of observed distributions. Astron. J., 79:745, 1974.

[23] L. N. McDermott, M. Wendling, J. Nijkamp, A. Mans, J.-J. Sonke, B. J. Mijnheer, and M. van Herk. 3D in vivo dose verification of entire hypo-fractionated IMRT treatments using an EPID and cone-beam CT. Radiotherapy and Oncology, 86:35–42, 2008.

[24] L. Menegotti, A. Delana, and A. Martignano. Radiochromic film dosimetry with flatbed scanners: A fast and accurate method for dose calibration and uniformity correction with a single film exposure. Med. Phys., 35(7):3078–3085, 2008.

[25] I. El Naqa, I. Kawrakow, M. Fippel, J. V. Siebers, P. E. Lindsay, M. V. Wickerhauser, M. Vicic, K. Zakarian, and N. Kauffmann. A comparison of Monte Carlo dose calculation denoising techniques. Phys. Med. Biol., 50:909–922, 2005.

[26] International Commission on Radiation Units and Measurements. Prescribing, recording, and reporting photon beam therapy. ICRU Report 50. Bethesda, Maryland, U.S.A., 1993.

[27] N. Papanikolaou, J. Battista, A. Boyer, C. Kappas, E. Klein, T. R. Mackie, M. Sharpe, and J. Van Dyk. Tissue inhomogeneity corrections for megavoltage photon beams. AAPM Report No. 85, Task Group No. 65 of the Radiation Therapy Committee of the American Association of Physicists in Medicine. Medical Physics Publishing, Madison, WI, 2004.

[28] International Specialty Products.
Gafchromic® EBT: Self-developing film for radiotherapy dosimetry. Product white paper, 2007.

[29] W. D. Renner, K. Norton, and T. Holmes. A method for deconvolution of integrated electronic portal images to obtain incident fluence for dose reconstruction. Journal of Applied Clinical Medical Physics, 6(4), 2005.

[30] N. Reynaert, S. C. van der Marck, D. R. Schaart, W. van der Zee, C. van Vliet-Vroegindeweij, M. Tomsej, J. Jansen, B. Heijmen, M. Coghe, and C. De Wagter. Review: Monte Carlo treatment planning for photon and electron beams. Radiation Physics and Chemistry, 76:643–686, 2007.

[31] W. H. Richardson. Bayesian-based iterative method of image restoration. J. Opt. Soc. Am., 62:55, 1972.

[32] D. W. O. Rogers, B. R. B. Walters, and I. Kawrakow. BEAMnrc users manual. NRCC Report PIRS-0509(A)revK, 2006.

[33] F. Rosca and P. Zygmanski. An EPID response calculation algorithm using spatial beam characteristics of primary, head scattered and MLC transmitted radiation. Med. Phys., 35(6):2224–2234, 2008.

[34] S. M. Ross. Introduction to Probability Models. Academic Press, San Diego, CA, U.S.A., eighth edition, 2003.

[35] A. Savitzky and M. J. E. Golay. Smoothing and differentiation of data by simplified least squares procedures. Anal. Chem., 36(8):1627–1639, 1964.

[36] D. Sheikh-Bagheri, I. Kawrakow, B. R. B. Walters, and D. W. O. Rogers. Monte Carlo simulations: Efficiency improvement techniques and statistical considerations. In Integrating New Technologies into the Clinic: Monte Carlo and Image-Guided Radiation Therapy - Proc. of 2006 AAPM Summer School, pages 71–91, 2006.

[37] J. V. Siebers, P. J. Keall, J. O. Kim, and R. Mohan. A method for photon beam Monte Carlo multileaf collimator particle transport. Phys. Med. Biol., 47:3225–3249, 2002.

[38] J. V. Siebers, P. J. Keall, A. E. Nahum, and R. Mohan. Converting absorbed dose to medium to absorbed dose to water for Monte Carlo based photon beam dose calculations. Phys. Med. Biol., 45:983–995, 2000.

[39] J. V. Siebers, P. J. Keall, A. E.
Nahum, and R. Mohan. Reply to 'Comments on "Converting absorbed dose to medium to absorbed dose to water for Monte Carlo based photon beam dose calculations"'. Med. Phys., 45:L18–L19, 2000.

[40] J. Steinier, Y. Termonia, and J. Deltour. Comments on smoothing and differentiation of data by simplified least square procedure. Anal. Chem., 44(11):1906–1909, 1972.

[41] E. Sterpin, M. Tomsej, B. De Smedt, N. Reynaert, and S. Vynckier. Monte Carlo evaluation of the AAA treatment planning algorithm in a heterogeneous multilayer phantom and IMRT clinical treatments for an Elekta SL25 linear accelerator. Med. Phys., 34(5):1665–1677, 2007.

[42] Storchi, van Battum, and Woudstra. Calculation of a pencil beam kernel from measured photon beam data. Phys. Med. Biol., 44:2917–2928, 1999.

[43] Varian Medical Systems. Eclipse algorithms reference guide: Eclipse. P/N B500298R01C, version 8.1.

[44] W. Ulmer and D. Harder. Applications of a triple Gaussian pencil beam model for photon beam treatment planning. Med. Phys., 6:68–74, 1996.

[45] W. Ulmer and D. Harder. A triple Gaussian pencil beam model for photon beam treatment planning. Med. Phys., 5:25–30, 1995.

[46] W. Ulmer and W. Kaissl. The inverse problem of a Gaussian convolution and its application to the finite size of the measurement chambers/detectors in photon and proton dosimetry. Phys. Med. Biol., 48:707–727, 2003.

[47] W. J. C. van Elmpt, S. M. J. J. G. Nijsten, R. F. H. Schiffeleers, A. L. A. J. Dekker, B. J. Mijnheer, P. Lambin, and A. W. H. Minken. A Monte Carlo based three-dimensional dose reconstruction method derived from portal dose images. Med. Phys., 33(7):2426–2434, 2006.

[48] A. van Esch, T. Depuydt, and D. P. Huyskens. The use of an aSi-based EPID for routine absolute dosimetric pre-treatment verification of dynamic IMRT fields. Radiotherapy and Oncology, 71:223–234, 2004.

[49] Varian Medical Systems, Inc. Portal dosimetry (revision 1.1.0) course manual.
Varian Medical Systems, Inc., U.S.A., 2006.

[50] P. Vial, P. B. Greer, P. Hunt, L. Oliver, and C. Baldock. The impact of MLC transmitted radiation on EPID dosimetry for dynamic MLC beams. Med. Phys., 35(4):1267–1277, 2008.

[51] B. R. B. Walters, I. Kawrakow, and D. W. O. Rogers. DOSXYZnrc users manual. NRCC Report PIRS-794revB, 2006.

[52] X. Wang, S. Spirou, T. LoSasso, J. Stein, C. Chui, and R. Mohan. Dosimetric verification of intensity modulated fields. Med. Phys., 23:317–328, 1996.

[53] B. Warkentin, S. Steciw, S. Rathee, and B. G. Fallone. Dosimetric IMRT verification with a flat-panel EPID. Med. Phys., 30(12):3143–3155, 2003.

[54] M. Wendling, R. J. W. Louwe, L. N. McDermott, J.-J. Sonke, M. van Herk, and B. J. Mijnheer. Accurate two-dimensional IMRT verification using a back-projection EPID dosimetry method. Med. Phys., 33(2):259–273, 2006.

[55] I. J. Yeo, C.-K. C. Wang, and S. E. Burch. A filtration method for improving film dosimetry in photon radiation therapy. Med. Phys., 24(12):1943–1953, 1997.

Appendix A

Electronic Portal Imaging Device

For this appendix, the majority of the work is referenced from the Varian Portal Dosimetry Manual (revision 1.1.0) [49]. In turn, this portal dosimetry manual extensively references the work of van Esch et al. [48]. Unless another source is explicitly referenced, these two works are the sources for all facts in this appendix.

A.1 Behaviour Characteristics

The EPID is an indirect imager; photons incident on the detector first interact with a 1 mm copper plate, used to provide build-up. Below the copper plate is a high atomic number scintillation screen (Gd₂O₂S:Tb, gadolinium oxysulfide phosphor). The energy deposited in this screen is converted to optical photons, which are then detected by an array of photodiodes directly below the screen. Conversion of the incident radiation into optical photons enhances the sensitivity of the detector more than tenfold.
After processing, which includes analog-to-digital conversion, this signal provides an intensity matrix below the EPID. We used the Varian aSi 1000 EPID, with Acquisition Unit IAS3. The aSi 1000 EPID has a 1024×768 element array of photodiodes with 0.392 mm² resolution per element. For this device, the manufacturer has specified the water-equivalent thickness of the detector material as 8 mm. Previous studies regarding the dose- and dose-rate-response of this device indicate that these characteristics make it suitable for use in IMRT treatment verification. Some of these results are summarized here.

The response of the detector is independent of dose-rate for all clinically relevant dose-rates; due to faster read-out than previous models, for SSD 105 cm the IAS3 acquisition unit does not exhibit saturation effects even at 600 MU/min. As well, the aSi 1000 imager response is linear with respect to the number of MUs delivered, within, at least, the range of 2–500 MU. Furthermore, the imager response follows the inverse square law with distance. However, we have to be careful with respect to the field size used. The imager output is field-size dependent, and the form of the dependence varies with energy. There is greater variation at higher energies, such as 18 MV, than at lower energies, in the neighbourhood of 6 MV.

All images were taken between 100 MU/min and 600 MU/min. For static delivery, 200 MU were delivered; during IMRT verification, much higher MUs were delivered, with MLC attenuation ensuring that the MUs delivered at any point on the EPID were below the 500 MU maximum mentioned. All tests were conducted using photon beams with a nominal energy of 6 MV, and a field size of 10×10 cm², defined at SAD 100 cm. The SDD used for image capture was 105 cm.
A.2 Portal Dose Image Prediction Algorithm

As necessary background for conversion of the intensity matrix to a fluence matrix, we briefly summarize van Esch's Portal Dose Image Prediction (PDIP) algorithm, as implemented by Varian to allow the aSi EPID to be used for pre-treatment IMRT verification. The form of the algorithm is as follows:

\[
P(x,y,X,Y) \;=\; \kappa(r) \otimes F(x,y) \cdot \left(\frac{SAD}{SDD}\right)^{2} \cdot \frac{OF(X,Y)}{PSF(X,Y)} \tag{A.1}
\]

Here ⊗ is the convolution operator, r is the distance from the origin, (x, y) are Cartesian coordinates, and (X, Y) indicate the total field size in each dimension. The meaning of the remaining terms will be discussed below.

The purpose of this algorithm is to provide the clinician with P(x,y,X,Y), a prediction of the EPID image based on the optimized fluence provided by the treatment planning system (TPS) for a given plan. This image is then compared prior to delivery with actual EPID images of the IMRT treatment, various metrics being employed to assess the accuracy of the plan setup. Before it can be used, though, the algorithm requires information characterizing the EPID behaviour. This characterization is captured in the blur kernel, κ, which, being theoretically unique to each device, requires that each linac with an EPID attached be individually calibrated. The calibration consists of iteratively altering κ so as to minimize the difference between the right side of equation A.1 and an EPID-captured dose image based on MLC motion calculated for the same optimal fluence matrix. This fluence matrix is provided by the manufacturer, and consists of a pyramidal shape emphasizing sharp gradients and varying field sizes, characteristics typical of IMRT plans.
Previous studies show that the blur kernel incorporates two effects [5, 29, 33, 47, 53, 54]: the dose deposition in the EPID's scintillator screen, and the spreading out, between the scintillator and the photodiodes, of the optical photons created at the scintillator screen. Various studies have used different functions to describe this behaviour, generally via a combination of Monte Carlo simulations of point-like input through the EPID and fitting to experimental data. This has led to the blur kernel being expressed, among other forms, as a sum of standard Gaussians, a sum of exponentials, and as a convolution of exponentials and piecewise-defined functions. Radial symmetry is generally assumed in all these formulations. Varian mathematically defines the blur kernel as

\[
\kappa(r) \;\equiv\; \sum_{i=1}^{10} a_i \cdot \frac{1}{\sqrt{2\pi}\,\sigma_i}\, e^{-\frac{r^{2}}{2\sigma_i^{2}}} \tag{A.2}
\]

with a_i the amplitude, and σ_i the standard deviation, of the i-th Gaussian. This is the form of the blur kernel used in this paper.

The term F(x,y) of equation A.1 refers to the fluence (particle number per unit area) striking the surface of the EPID. It consists of the TPS-derived fluence corrected via an empirically derived beam profile, which allows F(x,y) to incorporate the 'beam horns' caused by the flattening filter, and which were removed by the EPID image-acquisition algorithm. The beam profile is a measurement of beam intensity diagonally across a water phantom using an ion chamber, at a depth no greater than D_max. At our centre, this measurement was done during equipment acquisition/calibration within the past year, and there did not seem to be sufficient justification to validate it via a new measurement. The factor (SAD/SDD)² of equation A.1 shifts the fluence from the upper surface of the EPID to SAD 100 cm, a manufacturer-defined distance at which to compare the calculated EPID dose images with measured values.
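Equation (A.2), together with the PSF(X,Y) construction described below (the central pixel of a field rectangle convolved with the kernel), can be sketched numerically as follows; the amplitudes and widths used here are purely illustrative stand-ins, not a calibrated {a_i, σ_i} set:

```python
import numpy as np

def blur_kernel(r, amplitudes, sigmas):
    """Evaluate the radially symmetric blur kernel of Eq. (A.2):
    a sum of ten Gaussians with amplitudes a_i and widths sigma_i."""
    r = np.asarray(r, dtype=float)[..., None]
    return np.sum(amplitudes / (np.sqrt(2 * np.pi) * sigmas)
                  * np.exp(-r**2 / (2 * sigmas**2)), axis=-1)

def psf_central_pixel(field_x_mm, field_y_mm, amplitudes, sigmas, grid_mm=1.0):
    """PSF(X,Y) as described in the text: convolve a rectangular function
    (1 inside the field, 0 outside) with the blur kernel and take the
    central pixel, i.e. numerically integrate the kernel over the field."""
    xs = np.arange(-field_x_mm / 2, field_x_mm / 2, grid_mm) + grid_mm / 2
    ys = np.arange(-field_y_mm / 2, field_y_mm / 2, grid_mm) + grid_mm / 2
    xx, yy = np.meshgrid(xs, ys)
    rr = np.hypot(xx, yy)
    return blur_kernel(rr, amplitudes, sigmas).sum() * grid_mm**2

a = np.full(10, 0.1)              # hypothetical amplitudes
s = np.linspace(1.0, 50.0, 10)    # hypothetical widths (mm)
k0 = blur_kernel(0.0, a, s)       # kernel value on axis
k10 = blur_kernel(10.0, a, s)     # kernel value 10 mm off axis
psf_10x10 = psf_central_pixel(100.0, 100.0, a, s)
```

Because every Gaussian term decreases monotonically in r, the kernel itself is monotonically decreasing, and PSF(X,Y) grows with field size as the rectangle captures more of the kernel's tails.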
The default SDD for portal dose acquisition at our centre is 105 cm, and so all images were taken at this distance. However, by capturing a second portal dose image during calibration, the manufacturer claims, the kernel configuration is improved, allowing for precise calculations from images captured at SDDs other than the default. To facilitate future studies, a dose image was also captured at 140 cm.

\(OF(X, Y)\) is the imager output factor for field size \((X, Y)\) and accounts for the change in detector response for various field sizes. This output factor differs from the machine output factor in that it is imager-dependent; the difference between the imager OF and the machine OF becomes more pronounced at higher energies and larger field sizes. \(PSF(X, Y)\) is the phantom scatter factor for field size \((X, Y)\). This factor is the value of the central pixel obtained by convolving a rectangular function (1 inside the field, 0 outside) with the blur kernel.

As a slight aside, it is to be noted that despite the manufacturer's (Varian's) claim that \(\{a_i, \sigma_i\}_{i=1}^{10}\) are available in a system file after calibration, this information was nowhere to be found. Rather, the system provides a file containing \(\{r_i, \kappa(r_i)\}\), which was fit with a linear piece-wise spline to obtain an approximate blur kernel. As a further approximation, despite \(\kappa\) being a sum of Gaussians (and hence nonzero for all \(r\)), the system file (understandably) truncates it; in particular, \(\kappa(r) = 0\) for \(r > 200\) mm. However, as the blur kernel is a monotonically decreasing function of \(r\), and \(\kappa(200)/\kappa(0) \approx 10^{-5}\), truncation beyond this value of \(r\) leads to little loss of useful information regarding intra-EPID photon scatter.

Appendix B

Expectation Maximization: Richardson-Lucy Algorithm

For this appendix, all work is referenced from [2, 8, 22, 31]; [8] also extensively references [22, 31], amongst other sources.
It was previously mentioned that the EPID provides an intensity matrix, but what is required for correct phase-space modulation is the corresponding matrix of fluence values at the surface of the EPID. The reason for this requirement is that photon scattering through the EPID leads to a loss of information, particularly as pertains to sharp gradients and low-intensity regions; in signal-processing terms, the EPID acts as a low-pass filter. However, the so-called blur kernel obtained during calibration can be used to correct for the effects of passage through the aSi device: the need for conversion from an intensity matrix to a fluence matrix can be restated as an image-restoration problem, with the blur kernel treated as the point-spread function (PSF) of the system.

While there are various techniques for image restoration, in the case of a known PSF the iterative Richardson-Lucy (RL) algorithm, also known as the Expectation Maximization (EM) algorithm, is considered superior and is commonly employed. It became particularly prominent in the 1990s, when it was used to correct optical-image aberration in the then-new Hubble Space Telescope (HST). For this study we used the function deconvlucy, an implementation of the RL algorithm found in MATLAB's Image Processing Toolbox. To appreciate its usefulness and limitations, some background on this algorithm is provided next.

Let \(F(x, y)\) be the desired image, as a function of the Cartesian spatial coordinates \((x, y)\), and \(\kappa(x, y)\) be the value of the PSF of the image detector at the same location.
Ignoring noise and assuming infinitely small pixels, the image obtained via the detector would be

\[ P(x, y) = \kappa \otimes F \equiv \iint \kappa(x, y, \tilde{x}, \tilde{y}) \, F(\tilde{x}, \tilde{y}) \, d\tilde{x} \, d\tilde{y} \tag{B.1} \]

Noise is introduced naturally into the data as a consequence of counting statistics: what is being detected at each pixel of the detector is the number of photon strikes per given unit of time. As such, the observed data \(P(x, y)\) are distributed about the mean \(\langle P(x, y) \rangle\) with a Poisson distribution, i.e., the probability \(\mathcal{P}\) of having \(P(x, y)\) photons strike a given pixel, when the mean number of photon strikes per pixel is \(\langle P(x, y) \rangle\), is

\[ \mathcal{P}(P \mid \langle P \rangle) = \frac{\langle P \rangle^{P} \, e^{-\langle P \rangle}}{P!} \tag{B.2} \]

The Bayesian likelihood function \(\mathcal{L}\) for data collected over all pixels of the image detector is the product of the probability \(\mathcal{P}\) at each location. Thus, the (natural) log-likelihood for the captured image is

\[ \ln[\mathcal{L}] = \iint \Big( P(x, y) \ln[\langle P(x, y) \rangle] - \langle P(x, y) \rangle - \ln[P(x, y)!] \Big) \, dx \, dy \tag{B.3} \]

The value \(\hat{F}\) which maximizes \(\ln[\mathcal{L}]\), given B.2, is an estimate of \(F\). Setting the derivative of \(\ln[\mathcal{L}]\) with respect to \(F(\tilde{x}, \tilde{y})\) to zero gives

\[ \iint \left[ \frac{P(x, y)}{\langle P(x, y) \rangle} - 1 \right] \kappa(x, y, \tilde{x}, \tilde{y}) \, dx \, dy = 0 \tag{B.4} \]

The actual RL iteration, the crux of this method, is that we may set

\[ \hat{F}^{(i+1)}(\tilde{x}, \tilde{y}) = \hat{F}^{(i)}(\tilde{x}, \tilde{y}) \cdot \frac{\iint \left[ P(x, y) \, / \, \hat{P}^{(i)}(x, y) \right] \kappa(x, y, \tilde{x}, \tilde{y}) \, dx \, dy}{\iint \kappa(x, y, \tilde{x}, \tilde{y}) \, dx \, dy} \tag{B.5} \]

with \(\hat{F}^{(i)}\) the \(i^{th}\) iterative estimate of \(F\). Also, analogous to B.1,

\[ \hat{P}^{(i)}(x, y) = \iint \hat{F}^{(i)}(\tilde{x}, \tilde{y}) \, \kappa(x, y, \tilde{x}, \tilde{y}) \, d\tilde{x} \, d\tilde{y} \tag{B.6} \]

Up to this point, for the sake of simplicity, we have assumed that the pixels are infinitely small.
To correct for this obviously false assumption, and to recognize that computers cannot provide infinite precision, we recast equations B.5 and B.6 into their respective discrete forms:

\[ \hat{F}^{(i+1)}_{\tilde{m},\tilde{n}} = \hat{F}^{(i)}_{\tilde{m},\tilde{n}} \cdot \frac{\sum_{m,n} \left( \dfrac{P_{m,n}}{\hat{P}^{(i)}_{m,n}} \, \kappa_{m,n,\tilde{m},\tilde{n}} \right)}{\sum_{m,n} \kappa_{m,n,\tilde{m},\tilde{n}}} \tag{B.7} \]

\[ \hat{P}^{(i)}_{m,n} = \sum_{\tilde{m},\tilde{n}} \kappa_{m,n,\tilde{m},\tilde{n}} \, \hat{F}^{(i)}_{\tilde{m},\tilde{n}} \tag{B.8} \]

Here \(\kappa_{m,n,\tilde{m},\tilde{n}}\) is interpreted as the integral of \(\kappa(x, y, \tilde{x}, \tilde{y})\) over pixel \((m, n)\), with boundaries \((x_m, x_{m+1})\) and \((y_n, y_{n+1})\):

\[ \kappa_{m,n,\tilde{m},\tilde{n}} = \int_{y_n}^{y_{n+1}} \int_{x_m}^{x_{m+1}} \kappa(x, y, \tilde{x}_{\tilde{m}}, \tilde{y}_{\tilde{n}}) \, dx \, dy \tag{B.9} \]

There are issues pertaining to the existence and unbiased nature of \(\hat{F}\) which, while interesting in their own right, are not required knowledge for utilizing the RL algorithm. Suffice it to say that the RL algorithm converges to a solution in the sense that each iteration will not cause \(\mathcal{L}\) to decrease, and so, in this sense, \(\hat{F}\) can be considered the "best" estimate of \(F\). An important property worth mentioning is that so long as \(\hat{F}^{(i)}\) and \(\kappa\) are nonnegative, \(\hat{F}^{(i+1)}\) will also be nonnegative. This is important for correct physical interpretation: it makes no sense to have a negative number of photon strikes per pixel, nor a negative average count rate.

One final point that must be addressed regarding use of the RL algorithm is determining when to stop the iterations. If too few iterations are specified, we end up with an image in which the aberrations introduced by the PSF have not been completely removed. On the other hand, too many iterations lead to excessive noise amplification, because the algorithm, as with all maximum-likelihood methods, ends up fitting the noise in the data too closely. The only limit on the level of noise amplification is imposition of the requirement, mentioned before, that data values not become negative.
For the RL algorithm as presented, the non-negativity constraint also leads to a soft constraint on the upper bound of noise. This constraint arises mathematically from the fact that the correlation coefficient between neighbouring pixels is negative. Physically, it merely restates the reasonable assumption that photon flux should be conserved: when the value at a pixel decreases after an RL iteration, the value in neighbouring pixels increases, and once a given region has been pushed down to zero flux, the flux in surrounding regions cannot increase further.

The MATLAB implementation of this algorithm, deconvlucy, allows the user to set two parameters, \(\eta\) and \(\upsilon\), which address these issues of iterations and noise amplification. The parameter \(\eta\) sets an upper bound on the number of iterations each pixel may undergo. The parameter \(\upsilon\) provides a unique set of upper and lower bounds on the possible value of each pixel. This is done by assuming that the possible values of each pixel follow a Poisson distribution, with the initial value of the pixel taken as the mean of that pixel's distribution. \(\upsilon\) is the variance the user allows in the iteratively calculated value, relative to the original value, with the variance defined for the given Poisson distribution of the pixel. The purpose of \(\upsilon\) is to prevent noise amplification by stopping the iterative cycle for a pixel once its value has exceeded the bounds, setting the value to the last value within the bounds, and moving to the next pixel.

Practically speaking, noise amplification is a difficult issue to deal with. There is no universally agreed-upon number of iterations which should be used, and this is one of the algorithm's weaknesses. Lucy suggests stopping once the reduced \(\chi^2\) between \(P\) and the reblurred estimate \(\hat{P}\) is about one per degree of freedom (d.o.f.). However, the issue then arises as to how many d.o.f.
have been used to fit the data; especially as the number of iterations increases, one can no longer safely assume that the d.o.f. is simply the number of pixels of the image detector. The problem is further exacerbated by the fact that the optimal number of iterations may be a function of \((x, y)\): regions with smooth, extended structure can be fit well with fewer iterations than regions with sharp gradients and a high signal-to-noise ratio.

An alternative method of dealing with this issue of noise amplification is to apply smoothing algorithms to \(\hat{F}\). For optical images, such as those obtained by the HST, this method has been developed and mathematically justified. However, the application of these smoothing algorithms to the particle fluence at the EPID surface is not readily apparent, and it was felt that, while an interesting line of inquiry, delving into the use of such smoothing algorithms would take us outside the scope of this project. Nevertheless, as discussed in the main body of this thesis, the idea of utilizing smoothing algorithms to reduce noise was not ignored; where we could readily justify it, it was incorporated to good effect.
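As a closing illustration of Lucy's stopping rule (our own sketch, not part of deconvlucy), the diagnostic compares the data \(P\) with the reblurred estimate \(\hat{P}\), using the Poisson variance \(\mathrm{Var} \approx \langle P \rangle \approx \hat{P}\) at each pixel; iteration stops when the statistic falls to about one.

```python
import numpy as np

def reduced_chi2(P, P_hat, dof=None):
    """Reduced chi-squared between the measured image P and the reblurred
    estimate P_hat, with per-pixel Poisson variance Var ~ <P> ~ P_hat.
    dof defaults (optimistically) to the number of pixels; as discussed in
    the text, the effective d.o.f. shrinks as iterations accumulate."""
    if dof is None:
        dof = P.size
    return np.sum((P - P_hat) ** 2 / np.maximum(P_hat, 1e-12)) / dof

# with pure Poisson noise about a known mean, the statistic is close to one
rng = np.random.default_rng(0)
P_hat = np.full((64, 64), 100.0)          # mean counts per pixel
P = rng.poisson(P_hat).astype(float)      # simulated noisy "data"
chi2 = reduced_chi2(P, P_hat)
```

In practice one would evaluate this after each RL iteration and stop once it drops to roughly one; values well below one signal that the algorithm has begun fitting the noise.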
