PHOTON DOSE CALCULATIONS IN INHOMOGENEOUS MEDIA

by

BILAL HAIDAR SHAHINE
B.Sc., American University of Beirut, 1993
M.Sc., Carleton University, 1995

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF PHYSICS AND ASTRONOMY

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
June 2000
© BILAL HAIDAR SHAHINE, 2000

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Physics and Astronomy
The University of British Columbia
Vancouver, Canada

Abstract

A series of experiments were carried out to simulate air cavities in a polystyrene phantom. Results of experiments were compared to calculations done using three treatment planning systems employing Batho, modified Batho and the equivalent tissue-air-ratio (ETAR) methods for inhomogeneity corrections. The measured interface dose decreased by 55% for a 5 cm air gap, 5x5 cm^2 field size and 6 MV photons while only a 10% decrease was calculated by these methods. This points to the need for proper inclusion of electronic disequilibrium effects caused by the air cavities. In an attempt to account for electron transport, an inhomogeneity correction factor model is proposed based on separating the primary and scatter photon interaction effect with matter. The primary inhomogeneity correction factor was evaluated in phantoms containing air cavities following Klein-Nishina formalism and detailed electron transport. The Fermi-Eyges theory was adopted to transport recoil electrons based on multiple Coulomb scattering formalism. The scatter inhomogeneity correction factor is proposed as a semi-empirical model based on ratios of tissue-maximum-ratio with scaled effective beam radius. A total correction factor was derived by weighting the primary and scatter components and Monte Carlo simulation was used to verify these results. A calculation code was written and an optimization technique was devised reducing the computation time to a practical limit while maintaining accuracy. An additional topic on the subject of Monte Carlo treatment planning was discussed. The two Monte Carlo codes EGS4 and GEANT3 have been compared for calculations of photon and electron depth doses in water. Good agreement was seen for radiation beams relevant to radiation therapy. To conclude the comparison, a timing study was performed.
GEANT3 was seen to be two times slower than EGS4 in one of its electron transport modes and up to three times faster when using its energy straggling mode of electron transport.

Contents

Abstract
List of Tables
List of Figures

Chapter 1  Background
  1.1 Introduction
  1.2 Clinical Motivation
  1.3 Radiation Dosimetry
    1.3.1 Physical interactions
  1.4 Production and measurement of Radiation from Linear Accelerators
    1.4.1 Dose measurement
    1.4.2 Dose distribution
  1.5 Treatment planning
    1.5.1 Treatment planning algorithm
  1.6 Radiation dose calculation
    1.6.1 Implicit modelling of particle transport
    1.6.2 Explicit modelling of particle interaction and transport
  1.7 Thesis objectives and highlights

Chapter 2  Experiments and treatment planning calculations in phantoms with air cavities
  2.1 Introduction
  2.2 Experimental Procedure
  2.3 Treatment Planning Systems
  2.4 Results and Discussion
  2.5 Summary

Chapter 3  Inhomogeneity correction factor incorporating explicit electron transport
  3.1 Introduction
  3.2 Theory and Methods
    3.2.1 Theory
    3.2.2 Monte Carlo simulation
    3.2.3 Experiments
  3.3 Results and Discussion
    3.3.1 Calculated primary Compton dose compared to Monte Carlo simulation
    3.3.2 Scatter and total inhomogeneity correction factor and comparison with Monte Carlo simulation and experiments
    3.3.3 Computation time
  3.4 Summary

Chapter 4  Comparison of GEANT3 and EGS4 Monte Carlo codes for photon and electron beams
  4.1 Introduction
    4.1.1 Description of electron transport methods
    4.1.2 Parameters for particle tracking in a medium
  4.2 Materials and methods
  4.3 Results and discussion
    4.3.1 Photons
    4.3.2 Electrons
    4.3.3 Calculation time comparison
  4.4 Summary

Chapter 5  Conclusions and future directions
  5.1 Summary of work
  5.2 Future directions

Appendix A
Appendix B
Appendix C

List of Tables

3.1 The mass attenuation coefficient (μ/ρ) for polyenergetic photon beams is shown evaluated at the average photon energy of the spectra.
3.2 Cadplan TMR table for the 6 MV beam is shown as a function of depth and field size. The 0x0 cm^2 TMRs are extrapolations from higher field size values.
3.3 A is determined as a function of energy for both monoenergetic and polyenergetic beams.
3.4 Computation time for 3D dose distribution for high and low photon energies for different choices of dE. dz was chosen to be 0.2 cm, which corresponds to the voxel size dimension. The first column (NE) indicates the number of intervals into which the range of integration is divided. The % error is relative to "small" values of dE in which NE is set to 2000.
3.5 Best estimate of the parameters in Equations 3.31 and 3.32, derived using the least-squares fitting method.
4.1 Calculation time comparison between EGS4 and GEANT3 per event history. ESTEPE and DEEMAX were set to 0.01, ECUT and CUTELE were set to 10 keV, and AE was set to 521 keV while DCUTE is given below.
4.2 Calculation time comparison between EGS4 and GEANT3 per electron event history.
    STMIN is set as recommended in the GEANT3 manual.

List of Figures

1.1 Differential cross section per unit solid angle per electron as a function of ejected electron angle θ for 1, 10 and 20 MeV incident photons.
1.2 Relative importance of the three main photon interactions for different atomic numbers of the absorbing material and different initial photon energies. (Adapted from [34])
1.3 A linac head assembly in photon beam mode is shown. Also shown is the water phantom used for measurement. (Adapted from [60] with permission)
1.4 A typical x-ray spectrum for a nominal 18 MV linac.
1.5 The geometry used in the definition of the tissue air ratio (TAR) at a fixed source to axis distance (SAD) with a field size r is shown. (Adapted from [60] with permission)
1.6 The geometry used in the definition of the tissue maximum ratio (TMR) at a fixed source to axis distance (SAD) with a field size r is shown. dmax is the distance from the surface to the depth of maximum dose. (Adapted from [60] with permission)
2.1 The experimental setup used for the three experiments is shown. The black rectangle represents the position of the Markus chamber.
2.2 Comparison of calculated and measured relative dose (Di/Dh) for 6 MV photons is shown for (a) 10x10 cm^2 and (b) 5x5 cm^2 field sizes. Calculations of relative dose using the Batho, modified Batho and ETAR correction methods as implemented on Cadplan lie between 1 and 0.9 while measurements with the Markus chamber show a decrease down to 0.45. Results are normalized to unity at depth 3 cm in the homogeneous polystyrene phantom.
2.3 Relative dose calculation is shown as a function of air gap thickness using the Batho correction method as implemented on the Cadplan, GE Target 2 and Xplan treatment planning systems. A 6 MV beam and two field sizes, (a) 10x10 cm^2 and (b) 5x5 cm^2, were used.
2.4 Measurements of the relative dose as a function of air gap side for a 5x5 cm^2 field size are shown. Sides were varied from 0x0 cm^2 (no gap) to 20x20 cm^2 for two different air gap thicknesses (3 and 5 cm): (a) 6 MV photons and (b) 18 MV photons. The dose was normalized to the dose in the homogeneous phantom with 6 cm (t = 3 cm) and 8 cm (t = 5 cm) of polystyrene over the detector.
2.5 Measurements of the relative dose as a function of depth in a phantom with a 5 cm air gap thickness for 6 MV photons are shown, (a) for a 10x10 cm^2 field size and (b) for a 5x5 cm^2 field size. The dose was normalized to the dose in the homogeneous phantom at 8 cm depth. The position of the air gap is indicated by the bordered region.
2.6 Measurements of the relative dose as a function of depth in a phantom with a 5 cm air gap thickness for 18 MV photons are shown, (a) for a 10x10 cm^2 field size and (b) for a 5x5 cm^2 field size. The dose was normalized to the dose in the homogeneous phantom at 8 cm depth. The position of the air gap is indicated by the bordered region.
3.1 The multilayered geometry used in our calculations is shown.
3.2 The Klein-Nishina differential energy cross sections are shown for 1, 6 and 10 MeV monoenergetic photons, and the derived cross sections for the 6 and 10 MV photon spectra are also included.
3.3 6 MeV photon primary dose calculated with forward directed electrons is compared to dose calculated including electron recoil in all directions.
3.4 Examining the delta ray production effect using Monte Carlo simulation compared with analytical calculations.
    The upper two curves show that δ-ray production can be accounted for in our analytical calculations by using unrestricted stopping powers.
3.5 A comparison of primary Compton dose derived from theory and Monte Carlo simulations for a 5x5 cm^2 parallel beam of 6 and 20 MeV monoenergetic photons incident on homogeneous and inhomogeneous phantoms: (a, c) variation of absolute dose with depth, (b, d) lateral absolute dose profiles at 4 cm depth. The absolute dose shown is per million photons.
3.6 A comparison of primary Compton dose derived from theory and Monte Carlo simulations for a 5x5 cm^2 point source (100 cm SSD) of 6 and 18 MV photon spectra incident on homogeneous and inhomogeneous phantoms: (a, c) variation of absolute dose with depth, (b, d) lateral absolute dose profiles at 4 cm depth. The absolute dose shown is per million photons.
3.7 Total dose and primary Compton dose as determined by Monte Carlo simulation are shown for 6 MeV photons and 2x2 cm^2 and 10x10 cm^2 irradiation field sizes in homogeneous and heterogeneous phantoms: (a, c) variation of absolute dose with depth, (b, d) inhomogeneity correction factor derived from the two sets of curves of parts (a) and (c) respectively.
3.8 Inhomogeneity correction factor as determined by Monte Carlo simulation is shown for 6 MV and 18 MV polyenergetic photons: (a, b, c) 6 MV photon beam, and (d, e, f) 18 MV photon beam, using field sizes as indicated in the legend.
3.9 Cadplan TMR curves for 6 MV and 18 MV beams. Three field sizes, 0x0 cm^2, 3x3 cm^2 and 10x10 cm^2, are shown as a function of depth. The 0x0 cm^2 TMRs are extrapolations from higher field size curves.
3.10 Scatter inhomogeneity correction factor (ICFS) and the corresponding effective beam radius derived for 6 MV and 18 MV beams using 5x5 cm^2 and 10x10 cm^2 irradiation field sizes in inhomogeneous phantoms: (a, c) variation of ICFS with depth, (b, d) effective beam radius used for calculating the two sets of curves of parts (a) and (c) respectively.
3.11 Ratio of primary dose to total dose as derived from Monte Carlo simulation using 5x5 cm^2 and 10x10 cm^2 irradiation field sizes in a homogeneous water phantom: (a) 6 MV, (b) 18 MV.
3.12 Inhomogeneity correction factor (ICF) in the presence of a 5 cm air cavity derived from analytical calculation, Monte Carlo simulation and experiment as a function of depth: (a, b) 6 MV beam, 5x5 cm^2 and 10x10 cm^2 irradiation field sizes, (c, d) 18 MV beam, 5x5 cm^2 and 10x10 cm^2 irradiation field sizes.
3.13 The effect of the size of the integration parameters (dE and dz) on the primary dose and ICFp distributions is shown: (a) absolute primary dose for the homogeneous and inhomogeneous cases plotted for "small" and "large" values of dE and dz, (b) the deduced ICFp for both sets of values.
3.14 Variation of the primary Compton dose value at (0,0,4) is shown as a function of integration interval size: (a) dose versus dE for a constant dz, (b) dose versus dz for a constant dE.
3.15 Primary Compton dose as calculated at 4 cm depth versus depth interval length for 20 MeV photons. dE was chosen following Equation 3.34.
4.1 Comparison of the EGS4 and GEANT3 Monte Carlo codes for depth dose calculations in water phantoms irradiated with monoenergetic 6 MeV and 20 MeV photons is shown in (a) and (b) respectively.
    Broad parallel beams were used.
4.2 Comparison of the EGS4 and GEANT3 Monte Carlo codes, and experiment, for depth doses in water phantoms irradiated with 6 MV and 18 MV photon beams is shown in (a) and (b) respectively. A point source was used and the SSD was set to 100 cm.
4.3 Comparison of the EGS4 and GEANT3 Monte Carlo codes for depth dose calculations in water phantoms irradiated with a range of monoenergetic electrons. The electron transport option used in GEANT3 was the "no fluctuations" option. Dose was normalized to incident electron fluence.
4.4 Comparison of the three electron transport modes implemented in the GEANT3 Monte Carlo code for depth dose calculations in water phantoms. Depth doses for 5 MeV and 20 MeV monoenergetic electrons are shown in (a) and (b) respectively. Dose was normalized to incident electron fluence.

Acknowledgements

In the name of God ... I am very grateful for having had the opportunity to work under the supervision of Dr. Ellen El-Khatib at UBC. Her guidance and assistance in this entire project have been invaluable. I also thank her for her review of research papers and this thesis. Thanks are also due to Dr. Matthew Al-Ghazi for his co-supervision in the first half of this project.

I wish to express my appreciation to Dr. Dave Axen for the knowledge that he offered me. He taught me the basics of the Monte Carlo method. The patience and time he gave me every time I dropped by are gratefully acknowledged. Many thanks are due to the rest of the committee members for their constructive input.

I also wish to thank my fellow student Shevonne Ozard for many interesting discussions and helpful suggestions. Her interest in scatter dose and my interest in primary dose brought us together. Her modification of the DOSXYZ usercode for the calculation of primary Compton dose is warmly acknowledged.

Finally, I thank my parents for their patience and continuous encouragement throughout my life. Their prayers are always guiding me. Their faith is what keeps me going.

BILAL HAIDAR SHAHINE
The University of British Columbia
June 2000

Dedication

To very special people in my life: my wife Samar and my daughter Batoul

Chapter 1  Background

1.1 Introduction

In radiation therapy, the aim is to eradicate cancer and preserve normal functioning of human organs. It is therefore important to have a method of accurate dose calculation and delivery [70]. One recommendation is to attain an accuracy for dose delivery of better than 5% [21], but this is difficult to achieve. Modern 3D planning systems operate with an estimated uncertainty in homogeneous material of 3%-4% [30]. However, larger systematic errors (10%-50%) may occur in the vicinity of tissue inhomogeneities, for example in the head and neck region [58, 106]. These errors result mainly from the approximations of empirical dose calculation algorithms [38]. The goals of the work described in this thesis are twofold. The first is to quantitatively assess the accuracy of dose calculations with present treatment planning algorithms with relevance to head and neck radiotherapy. The second is to formulate an analytical model for inhomogeneity correction, providing improvement over existing models. Additionally, the ultimate goal of radiation treatment planning using the Monte Carlo method will be discussed, and two different codes will be compared for radiation transport calculations.

1.2 Clinical Motivation

Head and neck cancers account for about 10 percent of all malignancies.
They have a rather good cure rate if they are found early [29]. All three main treatment methods, surgery, radiotherapy and chemotherapy, are used. Tumors can grow in several areas including the lip and oral cavity, paranasal sinuses and nasal cavity, salivary glands and the larynx. About a third are in the larynx, and often directly affect speech and other functions. Although most early lesions can be cured by radiotherapy or surgery, radiotherapy is usually the option of choice, because it makes it possible to preserve a better voice [37]. Our motivation is therefore to develop a more accurate dose calculation algorithm and to test it in phantoms containing air cavities.

1.3 Radiation Dosimetry

The basic concepts in radiation dosimetry are well described in the literature [4, 34, 50, 56]. A brief description of some pertinent aspects will be presented in this section.

1.3.1 Physical interactions

When an x-ray beam passes through a medium, interactions between photons and matter can take place with the result that energy is transferred to the medium. The initial step in the energy transfer involves the ejection of electrons from the atoms of the medium by the photoelectric or Compton effect. Another method of energy transfer is the production of an electron-positron pair via the interaction of photons with the Coulomb force field of a nucleus or that of an atomic electron (triplet production). These high-speed electrons or positrons transfer their energy by producing ionization (hard collisions) and excitation (soft collisions) of the atoms along their paths. They may also lose energy by radiating bremsstrahlung photons in the Coulomb field of the atom.

A. Photon interactions

The interactions of photons in matter are governed by five major effects: Rayleigh (coherent) scattering, the photoelectric effect, the Compton effect, pair production and photonuclear interactions. The Rayleigh scattering interaction consists of an electromagnetic wave passing near a bound electron and setting it into oscillation. The oscillating electron reradiates the energy at the same frequency. Rayleigh scattering is elastic; there is no energy loss by the photon, which is merely redirected through a small angle. Photonuclear interactions occur when a proton or neutron is set in motion by a photon entering and exciting a nucleus. These photonuclear interactions are only significant for photon energies greater than about 10 MeV. The other three, most important, interactions lead to energy deposition at the energies used in radiotherapy and are therefore discussed in detail.

i) Photoelectric effect

The photoelectric effect is the most important interaction for low-energy photons. In this process, the entire energy of the photon is transferred to the atomic electron. The ejected electrons (photoelectrons) acquire a kinetic energy given by

$$KE = h\nu - E_B, \qquad (1.1)$$

where hν is the energy of the incident photon and E_B is the binding energy of the electron. Photoelectrons are ejected predominantly sideways relative to the incident photon direction for low photon energies, because they tend to be emitted in the direction of the photon's electric field. With increasing photon energy they favor the initial direction of the incident photon but are still not perfectly aligned with it. Electron scattering at zero angle to the incident photon direction is forbidden because that direction is perpendicular to the electric vector [4].
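As a small worked example of Equation 1.1, the C snippet below computes the kinetic energy carried away by a photoelectron. The K-shell binding energy used is an approximate illustrative value (roughly that of oxygen) and is not taken from this thesis.

```c
/* Photoelectron kinetic energy, Equation 1.1: KE = h*nu - E_B.
 * The binding energy below is an illustrative, approximate value. */
#include <stdio.h>

/* returns the kinetic energy in keV (negative means the shell
 * cannot be ionized by this photon) */
static double photoelectron_ke(double hnu_keV, double eb_keV)
{
    return hnu_keV - eb_keV;
}

int main(void)
{
    double eb_K = 0.543;   /* ~0.543 keV, approximate K-shell of oxygen */
    printf("KE = %.3f keV\n", photoelectron_ke(100.0, eb_K));
    return 0;
}
```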
ii) Compton effect

In the Compton effect the incident photon interacts with an outer electron; the electron is ejected and a photon is scattered. The description of the Compton effect can be most conveniently presented in two aspects: kinematics and cross section. The first relates to the energies and angles of the emerging particles after the interaction occurs, and the second predicts the probability that such an interaction will occur.

By applying the laws of conservation of energy and momentum, kinematic relationships between the energy and momentum of the electron and photon can be derived as

$$E = h\nu_0\,\frac{\alpha(1 - \cos\phi)}{1 + \alpha(1 - \cos\phi)}, \qquad (1.2)$$

$$h\nu' = h\nu_0\,\frac{1}{1 + \alpha(1 - \cos\phi)}, \qquad (1.3)$$

$$\cot\theta = (1 + \alpha)\tan(\phi/2), \qquad (1.4)$$

where hν_0, hν' and E are the energies of the incident photon, scattered photon and electron, φ is the angle of photon scattering relative to the incident photon direction, θ is the recoil angle of the electron, and α = hν_0/m_0c^2, where m_0c^2 is the rest mass energy of the electron [56].

In 1928 Klein and Nishina (KN) applied Dirac's relativistic theory of the electron to the Compton effect to obtain interaction cross sections. The differential cross section for photon scattering at angle φ, per unit solid angle and per electron, may be written in the form

$$\frac{d_e\sigma}{d\Omega_\phi} = \frac{r_0^2}{2}\left(\frac{h\nu'}{h\nu_0}\right)^2\left(\frac{h\nu_0}{h\nu'} + \frac{h\nu'}{h\nu_0} - \sin^2\phi\right). \qquad (1.5)$$

Another useful form of the K-N cross section is dσ_e/dΩ_θ, the differential cross section for electron scattering at angle θ, per unit solid angle and per electron. The relationship between the two differential cross sections is

$$\frac{d_e\sigma}{d\Omega_\theta} = \frac{d_e\sigma}{d\Omega_\phi}\,\frac{(1 + \alpha)^2(1 - \cos\phi)^2}{\cos^3\theta} \qquad (1.6)$$

and is shown in Figure 1.1.

Figure 1.1: Differential cross section per unit solid angle per electron as a function of ejected electron angle θ for 1, 10 and 20 MeV incident photons.

A third form of the K-N cross section is dσ_e/dE. This is the probability that a single photon will have a Compton interaction in traversing a layer containing one electron/cm^2, transferring to that electron a kinetic energy between E and E + dE. Thus dσ_e/dE is the energy distribution of the electrons, averaged over all scattering angles θ. This relationship is discussed in Chapter 3, where it will form the basis for the calculation method being developed.

iii) Pair production

Pair production is an interaction in which a photon disappears and gives rise to an electron and a positron pair. It can occur in a Coulomb force field near an atomic nucleus when the photon has an energy greater than the sum of the rest mass energies of an electron and a positron, i.e., 1.022 MeV. The electron-positron pair is created in a conversion of energy to mass. The excess incident photon energy above the sum of the rest mass energies of the two particles appears as kinetic energy of the positron and electron. The positron eventually annihilates with an electron, producing a pair of photons each with an energy of 0.511 MeV [4].

The relative importance of the photoelectric, Compton and pair production interactions depends on both the photon energy and the atomic number Z of the medium. Figure 1.2 indicates the regions of Z and hν in which each interaction is dominant. The curves are isoeffect curves along which two kinds of interactions are equally probable. As shown in this figure, the photoelectric effect is dominant at the lower photon energies, the Compton effect at medium energies, and pair production at the higher energies.
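The following C sketch evaluates the Compton kinematics of Equations 1.2-1.4 and the Klein-Nishina cross section of Equation 1.5 for a single incident energy and scattering angle. It is a minimal illustration; the energy and angle chosen in main() are arbitrary, and r_0 denotes the classical electron radius.

```c
/* Compton kinematics (Eqs. 1.2-1.4) and the Klein-Nishina differential
 * cross section per electron (Eq. 1.5). Compile with -lm. */
#include <math.h>
#include <stdio.h>

#define PI     3.14159265358979323846
#define M0C2   0.511      /* electron rest mass energy, MeV */
#define R0_CM  2.818e-13  /* classical electron radius, cm  */

/* scattered photon energy hnu' for incident hnu0 (MeV), photon angle phi */
static double hnu_scattered(double hnu0, double phi)
{
    double a = hnu0 / M0C2;
    return hnu0 / (1.0 + a * (1.0 - cos(phi)));
}

/* recoil electron energy E (MeV), equivalent to Eq. 1.2 */
static double electron_energy(double hnu0, double phi)
{
    return hnu0 - hnu_scattered(hnu0, phi);
}

/* recoil electron angle theta from Eq. 1.4: cot(theta) = (1+a) tan(phi/2) */
static double electron_angle(double hnu0, double phi)
{
    double a = hnu0 / M0C2;
    return atan(1.0 / ((1.0 + a) * tan(phi / 2.0)));
}

/* Klein-Nishina d(sigma)/d(Omega_phi) per electron, cm^2/sr (Eq. 1.5) */
static double kn_dsigma_domega(double hnu0, double phi)
{
    double k = hnu_scattered(hnu0, phi) / hnu0;   /* hnu'/hnu0 */
    double s = sin(phi);
    return 0.5 * R0_CM * R0_CM * k * k * (1.0 / k + k - s * s);
}

int main(void)
{
    double hnu0 = 6.0, phi = PI / 3.0;            /* 6 MeV photon, 60 degrees */
    printf("hnu' = %.3f MeV, E = %.3f MeV, theta = %.3f rad\n",
           hnu_scattered(hnu0, phi), electron_energy(hnu0, phi),
           electron_angle(hnu0, phi));
    printf("dsigma/dOmega = %.3e cm^2/sr\n", kn_dsigma_domega(hnu0, phi));
    return 0;
}
```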
B. Electron interactions

In matter, electrons lose energy predominantly by ionization and excitation. This results in deposition of energy, or absorbed dose, in the medium. Ionization occurs if the energy transferred to orbital electrons is greater than their binding energy; otherwise the electrons are displaced from their stable position and then return to it, causing excitation. In the process of ionization, some of the "freed" electrons receive sufficient energy to produce ionization tracks of their own. These are often called δ-rays. The electron may also interact with the Coulomb field of a nucleus and be decelerated so rapidly that a part of its energy is lost as bremsstrahlung (braking radiation). The rate of energy loss due to bremsstrahlung increases with electron energy and the atomic number of the medium.

Figure 1.2: Relative importance of the three main photon interactions for different atomic numbers of the absorbing material and different initial photon energies. (Adapted from [34])

An important concept which gives the rate of energy loss per unit path length by a charged particle in a medium is the stopping power of the medium. Based on the main energy loss mechanisms, the stopping power may be subdivided into an unrestricted "collision stopping power" and a "radiative stopping power". The total stopping power is the sum of the collision and radiative contributions and is given as

$$\left(\frac{dE}{dx}\right)_{tot} = \left(\frac{dE}{dx}\right)_{col} + \left(\frac{dE}{dx}\right)_{rad}, \qquad (1.7)$$

where E is the electron energy and x is the pathlength. In the description of the electron-electron collision, the outgoing electron with the smaller kinetic energy is defined as a delta ray (δ-ray). Therefore, the energy of the δ-ray is always smaller than half the energy of the incident electron. In many situations only those energy losses which lead to δ-rays with energy below a cut-off energy, Δ, are of interest. The restricted stopping power, L_Δ, includes only such energy losses. Therefore, the restricted stopping power is smaller than the unrestricted collision stopping power, and their ratio as a function of electron energy is given in ICRU 35 [79].

1.4 Production and measurement of Radiation from Linear Accelerators

A linear accelerator (linac) is a device that uses high-frequency electromagnetic waves to accelerate charged particles (usually electrons) through evacuated waveguides. The high-energy electron beam itself can be used for treating superficial tumours, or it can be made to strike a target to produce x-rays which are used for treating deep-seated tumours [51]. To produce a uniform radiation beam in the plane of the patient, several devices are required, as illustrated in Figure 1.3 for the photon treatment mode. The x-ray beam emerging from the target is collimated by a fixed primary collimator. A typical x-ray spectrum is shown in Figure 1.4. Since for megavoltage energies the x-ray intensity emerging from the target is peaked in the forward direction, a flattening filter is used to make the beam intensity uniform across the field. Transmission ion chambers monitor beam output, flatness and symmetry. A light and mirror produce a visible field corresponding to the position and size of the radiation beam, and the secondary upper and lower orthogonal movable collimators shape the radiation beam into various rectangular sizes.

Figure 1.3: A linac head assembly in photon beam mode is shown.
Also shown is the water phantom used for measurement. (Adapted from [60] with permission)

Figure 1.4: A typical x-ray spectrum for a nominal 18 MV linac.

1.4.1 Dose measurement

Ionization chambers are one of the devices used for measuring radiation dose. There are many varieties of ion chambers used in clinical practice, but generally they are all composed of a gas-filled chamber containing electrodes that collect the ions produced in the gas by the ionizing radiation. These electrodes are connected to an electrometer which can be used to measure the charge or current. For air-filled chambers, the absorbed dose (energy deposited per unit mass) to air is given by

$$D_{air} = \frac{Q}{m}\left(\frac{\overline{W}}{e}\right), \qquad (1.8)$$

where Q is the total charge of ions of one sign, m is the mass of air from which the ions are collected and W/e is the average energy required in air to produce a unit charge. For dry air, W/e is equal to 33.97 J/C.

The dose to water can then be calculated from the dose to air using cavity theory. For a given spectrum of photon energies the generated electron fluence can be determined as shown in Section 1.3. When the electron fluence spectrum is known at the wall surrounding the gas, and the cavity is assumed to be so small that it does not affect the electron spectrum, the gas cavity will witness the same electron fluence as does the wall. Then, the ratio of the dose absorbed by water to that absorbed by the air in the ion chamber is equivalent to the ratio of the stopping powers for the two media. This statement summarizes the Bragg-Gray cavity theory, which can be represented as

$$\frac{D_{water}}{D_{air}} = \left(\frac{\overline{S}}{\rho}\right)^{water}_{air}. \qquad (1.9)$$

In practice, when the ion chamber is introduced into a phantom for dose measurements it perturbs the dose deposition. Correction factors are applied to account for three effects: 1) the ion collection efficiency of the chamber, 2) the change in photon fluence resulting from the replacement of the medium by the ion chamber wall and cavity, and 3) the differences in the composition of the medium and the chamber wall.
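As a small worked example of Equations 1.8 and 1.9, the C snippet below converts a collected charge to dose to air and then to dose to water. The charge, air mass and water/air stopping-power ratio are illustrative, hypothetical inputs rather than values measured in this work.

```c
/* Dose to air from collected charge (Eq. 1.8) and conversion to dose
 * to water via the Bragg-Gray stopping-power ratio (Eq. 1.9).
 * All numerical inputs in main() are illustrative only. */
#include <stdio.h>

#define W_OVER_E 33.97   /* J/C, average energy per unit charge in dry air */

static double dose_to_air(double charge_C, double mass_kg)
{
    return (charge_C / mass_kg) * W_OVER_E;      /* Gy = J/kg */
}

static double dose_to_water(double d_air, double spr_water_air)
{
    return d_air * spr_water_air;                /* Bragg-Gray relation */
}

int main(void)
{
    double q   = 4.0e-9;   /* 4 nC collected (hypothetical reading)        */
    double m   = 6.0e-8;   /* ~0.05 cm^3 of air at ~1.2 kg/m^3, in kg      */
    double spr = 1.13;     /* water/air stopping-power ratio, illustrative */
    double d_air = dose_to_air(q, m);
    printf("D_air = %.3f Gy, D_water = %.3f Gy\n",
           d_air, dose_to_water(d_air, spr));
    return 0;
}
```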
1.4.2 Dose distribution

Basic dose distribution data are usually measured in a water phantom. Water closely approximates the radiation absorption and scattering properties of the human body at megavoltage energies. An essential step in the data preparation is to establish the depth dose variation along the central axis of the beam. A number of quantities have been defined for this purpose, mainly the percentage depth dose (PDD), tissue-air ratio (TAR) and tissue-maximum ratio (TMR). These quantities are measured in a water phantom using an ion chamber.

A. Percentage depth dose

The PDD is the central axis dose distribution normalized to the maximum dose, with the distance from the source to the phantom surface kept constant at 100 cm. An important feature of the depth dose curve is the initial buildup, which becomes more pronounced as the energy is increased. The region between the surface and the point of maximum dose is called the buildup region. The dose buildup may be explained as follows: as photons are incident on the phantom, electrons are ejected from the surface and the subsequent layers. These electrons may deposit a substantial portion of their energy some distance away from the point of their creation. This results in an increase of electron fluence with depth up to a maximum level. This level of electron fluence would be maintained if the photon fluence were constant. At this point charged particle equilibrium is achieved for a small volume v, where each charged particle of a given type and energy leaving v is replaced by an identical particle of the same energy entering.

Beyond the depth of dose maximum the photon energy fluence continuously decreases with depth due to removal of photons from the beam, and hence the production of electrons also decreases. The net effect is that beyond a certain depth the dose decreases with depth, and only transient charged particle equilibrium exists in the descending portion of the depth dose curve.

B. Tissue-air ratio and tissue-maximum ratio

The TAR is defined as the ratio of the dose D(d, r) at a given point in the phantom to the dose in free space D_air(r) at the same point (Figure 1.5). TAR depends on the depth d and the field size r at that depth. The distance from the radiation source to the point of measurement (SAD) is kept constant and the field size r is defined at the SAD. Hence,

$$TAR(d, r) = \frac{D(d, r)}{D_{air}(r)}, \qquad (1.10)$$

which is independent of SAD. Another similar quantity is the tissue-maximum ratio, which is defined as the ratio of the dose at a given point in the phantom, D(d, r), to the maximum dose D(dmax, r) in the phantom, keeping the distance from the source to the measurement point constant and varying the water level above the measurement point. The geometry adopted for measuring TMR is shown in Figure 1.6. For megavoltage beam energies, measurements of TARs are not practical since they involve measurements of dose in air. This is mainly because it would be necessary to supply the dosimeter with such a large buildup cap that it would not be fully irradiated by small area beams [50].

Figure 1.5: The geometry used in the definition of the tissue air ratio (TAR) at a fixed source to axis distance (SAD) with a field size r is shown. (Adapted from [60] with permission)

1.5 Treatment planning

Computerized treatment planning is a numerical calculation procedure in which the absorbed dose distribution within a patient is predicted from models based on phantom measurements. These models incorporate modifications of dose distributions measured in a water phantom to account for beam modifiers (irregularly shaped fields), surface contours and tissue heterogeneities [56].

1.5.1 Treatment planning algorithm

The treatment planning algorithms investigated in this thesis include an in-house algorithm and a commercial one that is available in many centres.

Figure 1.6: The geometry used in the definition of the tissue maximum ratio (TMR) at a fixed source to axis distance (SAD) with a field size r is shown. dmax is the distance from the surface to the depth of maximum dose. (Adapted from [60] with permission)

A. An in-house treatment planning algorithm

Our in-house treatment planning program is based on the work of Kornelsen and Young for the representation of depth dose data [59]. The absorbed dose is derived from the tissue maximum ratio (TMR), which is represented by a parametrized exponential function. The values of the parameters are determined empirically for a number of radiation beams and field sizes. These data curves are stored in the system, and a numerical interpolation is employed for finding values in between. For tissue inhomogeneity correction, the Batho correction method is implemented [6].
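The sketch below illustrates the general idea of a stored, parametrized TMR representation with interpolation between field sizes, as described for the in-house system. The exponential form, the build-up ramp and every parameter value are hypothetical placeholders for illustration only; they are not the Kornelsen-Young parametrization or the actual Xplan data.

```c
/* A generic parametrized TMR lookup with linear interpolation in field
 * size. Functional form and parameters are hypothetical placeholders. */
#include <math.h>
#include <stdio.h>

typedef struct {
    double side_cm;   /* square field side                         */
    double mu_eff;    /* effective attenuation, 1/cm (placeholder) */
    double dmax_cm;   /* depth of maximum dose                     */
} TmrCurve;

/* hypothetical stored curves for a 6 MV beam */
static const TmrCurve curves[] = {
    {  5.0, 0.038, 1.5 },
    { 10.0, 0.035, 1.5 },
    { 20.0, 0.032, 1.5 },
};

static double tmr_curve(const TmrCurve *c, double depth_cm)
{
    if (depth_cm <= c->dmax_cm)
        return depth_cm / c->dmax_cm;                  /* crude build-up ramp */
    return exp(-c->mu_eff * (depth_cm - c->dmax_cm));  /* exponential fall-off */
}

/* interpolate linearly between the two stored curves that bracket the field */
static double tmr(double depth_cm, double side_cm)
{
    int n = (int)(sizeof curves / sizeof curves[0]);
    if (side_cm <= curves[0].side_cm)     return tmr_curve(&curves[0], depth_cm);
    if (side_cm >= curves[n - 1].side_cm) return tmr_curve(&curves[n - 1], depth_cm);
    for (int i = 0; i < n - 1; i++) {
        if (side_cm <= curves[i + 1].side_cm) {
            double t = (side_cm - curves[i].side_cm) /
                       (curves[i + 1].side_cm - curves[i].side_cm);
            return (1.0 - t) * tmr_curve(&curves[i], depth_cm) +
                   t * tmr_curve(&curves[i + 1], depth_cm);
        }
    }
    return 0.0;   /* not reached */
}

int main(void)
{
    printf("TMR(10 cm depth, 7x7 field) ~ %.3f\n", tmr(10.0, 7.0));
    return 0;
}
```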
B. Commercial treatment planning algorithm

The Cadplan treatment planning system (Varian Oncology Systems, Varian/Dosetek, Zug, Switzerland) uses two calculation models to calculate the dose distribution in a patient: 1) a photon beam reconstruction model, which uses measured depth dose and profile curves, and 2) an interpolation procedure, which is engaged to calculate the dose distribution in a water-equivalent material. Then, to apply the dose distribution to human anatomy, the patient model handles the patient outline and inhomogeneities. The inhomogeneity correction factors are calculated by using either the generalized Batho [6, 94, 102], the modified Batho [97] or the "equivalent tissue-air ratio" (ETAR) method. The dose value calculated in a water-equivalent material is then multiplied by the inhomogeneity correction factor.

1.6 Radiation dose calculation

Most clinical photon dose calculation algorithms assume electron equilibrium in the patient. This approximation neglects electron transport, and does not take into account the range travelled by secondary electrons set in motion by photons. In fact, the path of the electrons is greatly altered, which results in appreciable errors in areas near the beam edges or near inhomogeneities in high-energy photon beams [110].

Inhomogeneity corrections are more complex than other patient corrections. The presence of tissue inhomogeneities may lead to two general effects: 1) changes in the absorption of the primary beam and the associated pattern of scattered photons, and 2) changes in the secondary electron fluence [56]. To deal with these perturbations several methods have been devised and adopted, which can be grouped into two categories: (I) implicit modelling of particle transport and (II) explicit modelling of particle interaction and transport.

1.6.1 Implicit modelling of particle transport

Implicit modelling of particle transport through scaling operations stems from two important theorems first presented by Fano [36] and O'Connor [78]. These theorems formulate density scaling of data for water to "water-like media" with arbitrary densities.

Fano's theorem states that when an object of varying density but constant atomic composition is present in a radiation field having a constant fluence of primary particles (photons), then the fluence of secondary particles (electrons) is also constant and independent of the density variations [36]. The main assumption in Fano's theorem is that the interaction cross sections per unit mass are independent of the density of a medium of identical atomic composition. Hence, in order to apply Fano's theorem to external photon beams, one must assume that primary photon attenuation, density variations, and the production of scattered photons can be neglected [2]. Fano's theorem applies to situations of charged particle equilibrium.

The density scaling theorem presented by O'Connor provides a bridge for the dose in two media of different density but equal atomic composition, both irradiated by the same external beam. This theorem states that the ratio of the scattered photon fluence to that of the primary photon fluence is constant in the two media provided all geometric distances, including field sizes, are scaled inversely with density [78].
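A minimal sketch of the density-scaling idea behind O'Connor's theorem is given below: a geometric depth through a slab geometry is replaced by its density-scaled (radiological) equivalent. The slab thicknesses and relative densities are illustrative assumptions only (they loosely mirror the air-gap phantoms used later in this thesis), and the function is not part of any of the treatment planning systems discussed.

```c
/* Density scaling in the spirit of O'Connor's theorem: geometric
 * distances are replaced by density-scaled (radiological) distances.
 * Densities are relative electron densities (water = 1). */
#include <stdio.h>

/* water-equivalent depth of a point lying below n slabs */
static double radiological_depth(const double *thick_cm,
                                 const double *rel_density, int n)
{
    double d_eff = 0.0;
    for (int i = 0; i < n; i++)
        d_eff += thick_cm[i] * rel_density[i];
    return d_eff;
}

int main(void)
{
    /* 3 cm polystyrene-like slab, 5 cm air gap, 2 cm below the gap */
    double t[]   = { 3.0, 5.0,    2.0 };
    double rho[] = { 1.0, 0.0012, 1.0 };
    double z  = t[0] + t[1] + t[2];               /* geometric depth     */
    double zp = radiological_depth(t, rho, 3);    /* scaled depth z'     */
    printf("geometric depth %.1f cm, radiological depth %.3f cm\n", z, zp);
    return 0;
}
```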
It is often practical to describe the influence of a tissue heterogeneity as a perturbation of the dose in a homogeneous phantom exposed to an identical irradiation. Commonly, a correction factor is defined from the ratio of the dose measured for the heterogeneous geometry to that for the homogeneous one,

$$CF = \frac{D_{heterogeneous}}{D_{homogeneous}}. \qquad (1.11)$$

The two main approaches of photon inhomogeneity correction will be discussed and their limitations will be presented.

A. Correction-based approach

i) Power-law (Batho) method

Batho [6] suggested this method as an empirical correction to account for both primary beam attenuation and scatter changes within water and below a single slab of inhomogeneity. It was extended later to include dose calculations within the inhomogeneity [94]. The correction factor, which accounts for the density as well as the distance of the inhomogeneity from the point of calculation, is given as

$$CF = \frac{TAR(z_1)^{\rho_1 - \rho_2}}{TAR(z_2)^{1 - \rho_2}}, \qquad (1.12)$$

where z_1 is the distance from the point of calculation to the anterior surface of the layer of density ρ_1, and z_2 is the distance from the calculation point to the anterior surface of the next overlying layer of density ρ_2.

Later, this method was generalized to handle arbitrary densities and non-water-like materials [94]. Multiple regions of slab-like material were further accounted for [24, 102]. The most recent developments of this method were presented by El-Khatib and Battista [32] and Thomas [97], who showed that the correction factor should be based on build-up depth-shifted TMRs instead of the initially proposed TARs, to yield

$$CF(z) = \left(\frac{\mu_{en}}{\rho}\right)_N \prod_{m=1}^{N} \left[TMR(z - z_m + z_{max})\right]^{(\mu_m - \mu_{m-1})/\mu_w}, \qquad (1.13)$$

where μ_m and μ_w are the linear attenuation coefficients of the material in layer m and of water respectively, (μ_en/ρ)_N is the mass energy absorption coefficient of the material in layer N, z_max is the buildup depth or depth of maximum dose, z_m is the distance along the beam from the surface to layer m in the phantom, and z is the depth of the point of calculation [2].

This method is based on theoretical considerations assuming Compton interactions only. Its original form does not apply to points inside the inhomogeneity or in the build-up regions [6]. The assumption of semi-infinite horizontal geometry renders the application of this method unsuitable in regions with inhomogeneities of finite extent, such as near the edges of lungs [105].

ii) The equivalent tissue-air ratio method (ETAR)

The ETAR method [95] is modeled after O'Connor's density scaling theorem and applies rigorously for contributions from Compton scattering. It attempts to correct for the scattered radiation component by using multiple-slice CT pixel density information. This method is represented by a ratio of TARs given as

$$CF = \frac{TAR(z', r')}{TAR(z, r)}, \qquad (1.14)$$

where z' is the radiological depth (scaled depth) of the calculation point and r' is the effective beam radius defined by

$$r' = r\,\tilde\rho, \qquad (1.15)$$

$$\tilde\rho = \sum_{i,j,k} W_{ijk}\,\rho_{ijk} \Big/ \sum_{i,j,k} W_{ijk}. \qquad (1.16)$$

The weighting factors W_ijk are calculated using Compton scatter cross sections and integrating scatter over the entire irradiated volume for each point of dose calculation. They provide an approximate description of the relative contribution of each volume element having ρ_ijk as an electron density relative to water [50, 56]. In practical cases the choice of weighting factors is empirical. There does not exist a unique set of weighting factors that can be applied to all inhomogeneities [105].
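To make the two correction factors concrete, the C sketch below implements the power-law factor of Equation 1.12 and the ETAR ratio of Equations 1.14-1.16. The tar() lookup, the scatter weights and the geometry in main() are hypothetical placeholders standing in for measured TAR tables and CT-derived densities; this is an illustrative sketch, not the Cadplan or Xplan implementation.

```c
/* Inhomogeneity correction factors sketched from Eqs. 1.12 and 1.14-1.16.
 * tar() is a placeholder for a measured TAR table lookup. */
#include <math.h>
#include <stdio.h>

/* placeholder TAR for depth d (cm) and equivalent beam radius r (cm) */
static double tar(double d_cm, double r_cm)
{
    double mu = 0.05 - 0.001 * r_cm;              /* crude field-size dependence */
    return exp(-mu * (d_cm > 1.5 ? d_cm - 1.5 : 0.0));
}

/* Batho power-law factor, Eq. 1.12:
 * z1 = distance from point to anterior surface of the layer of density rho1,
 * z2 = distance from point to anterior surface of the overlying layer (rho2) */
static double batho_cf(double z1, double z2, double rho1, double rho2, double r)
{
    return pow(tar(z1, r), rho1 - rho2) / pow(tar(z2, r), 1.0 - rho2);
}

/* ETAR factor, Eqs. 1.14-1.16: ratio of TARs at the scaled depth z' and the
 * effective radius r' = r * rho_tilde (weighted average relative density) */
static double etar_cf(double z, double z_scaled, double r,
                      const double *w, const double *rho, int n)
{
    double num = 0.0, den = 0.0;
    for (int i = 0; i < n; i++) { num += w[i] * rho[i]; den += w[i]; }
    return tar(z_scaled, r * num / den) / tar(z, r);
}

int main(void)
{
    /* point 2 cm below a 5 cm air gap, 3 cm of polystyrene on top */
    double w[]   = { 1.0, 1.0,    1.0 };          /* illustrative scatter weights */
    double rho[] = { 1.0, 0.0012, 1.0 };
    printf("Batho CF ~ %.3f\n", batho_cf(2.0, 7.0, 1.0, 0.0012, 2.8));
    printf("ETAR  CF ~ %.3f\n", etar_cf(10.0, 5.006, 2.8, w, rho, 3));
    return 0;
}
```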
In general, when comparing the above two methods, it is noted that the generalized Batho method is more accurate in the high-energy range (> 10 MV), and the ETAR method is better suited to lower energy beams (< 6 MV) [56]; however, its accuracy depends entirely on the correct choice of weighting factors. Thus the accuracy of the different methods depends on the irradiation conditions, including energy, field size, location and extent of the inhomogeneity, and location of the point of calculation.

B. Model-based approach

The major operational difference between correction-based and model-based calculations is that the first corrects parametrized dose distributions obtained in a water phantom while the second computes dose from first principles. Model-based methods such as the convolution/superposition method [1, 19, 20, 66, 72] still require precalculated parameters like the incident photon energy spectra and the primary and scatter dose deposition kernels, which are all derived from Monte Carlo simulation [66, 72]. The convolution method first computes the distribution of TERMA (total energy released per unit mass) in the patient and then convolves it with a kernel that accounts for the transport of charged particles as well as scattered photons [69]. The two best-known methods in this subcategory are the dose spread array [67] and the differential pencil beam [72] methods.

i) The dose spread array

This method uses a direct summation of density-scaled kernels in the dose calculation. The dose at a voxel labeled ijk is given as

$$D_{ijk} = \sum_{\Delta i,\Delta j,\Delta k} \Phi_0(i-\Delta i,\, j-\Delta j,\, k-\Delta k)\,\rho(i-\Delta i,\, j-\Delta j,\, k-\Delta k)\left[\frac{A_{0,1}(\rho_l, \Delta i, \Delta j, \Delta k)}{\rho_l} + \frac{A_m(\rho_m, \Delta i, \Delta j, \Delta k)}{\rho_m}\right],$$

where Φ_0 is the primary photon fluence incident at a given voxel (i - Δi, j - Δj, k - Δk), A_{0,1} is the combined dose spread array due to primary and proximal first-scatter photon interactions, A_m is the dose spread array due to all other photon interactions, ρ_l is the average density along the ray path l connecting the primary interaction site to the dose point, and ρ_m is the empirical average density of the medium [67, 105]. Dose spread arrays were generated for a range of densities to facilitate data lookup using O'Connor's density scaling method.

ii) The differential pencil beam

A differential pencil beam is an infinitesimal segment of a pencil beam directed along a ray line from the beam source, within which primary photons interact and give rise to a differential pencil beam dose distribution. The differential pencil beam dose distribution is equivalent to a convolution kernel except that it is the dose deposited in water per number of primary photons interacting in a unit volume, instead of per unit energy imparted by primary photons [69, 72]. The dose at a point in the medium is given as

$$D = \sum \Phi_0\,\rho_c\,DPB(r, \theta)\,\Delta V, \qquad (1.19)$$

where DPB(r, θ) is the differential pencil beam dose distribution in water as generated by Monte Carlo simulation and expressed in spherical coordinates (r, θ); r is the distance from the primary interaction site to the point of calculation and θ is the angle with respect to the beam direction. ρ_c is the photon collision density in the volume ΔV defined above, and Φ_0 is the primary photon fluence [72, 105].
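The C sketch below mimics the superposition of Equation 1.19 over a small voxel grid. The kernel, the attenuation model and the grid are hypothetical placeholders (real DPB kernels are Monte Carlo generated tables), and ρ_c is read here, as an interpretation assumption, as the collision density per unit primary fluence; the sketch only illustrates the structure of the sum.

```c
/* Superposition of differential pencil beam kernels in the spirit of
 * Eq. 1.19: D = sum over voxels of Phi0 * rho_c * DPB(r, theta) * dV. */
#include <math.h>
#include <stdio.h>

#define N  11            /* voxels per axis */
#define DX 0.5           /* voxel size, cm  */

/* placeholder kernel: forward-peaked, falling off with distance (arb. units) */
static double dpb_kernel(double r_cm, double theta_rad)
{
    if (r_cm < 0.05) r_cm = 0.05;                  /* avoid the singularity */
    return exp(-0.5 * r_cm) * (1.0 + cos(theta_rad)) / (r_cm * r_cm);
}

int main(void)
{
    const double dV = DX * DX * DX;
    const double px = 0.0, py = 0.0, pz = 4.1;     /* calculation point, cm */
    double dose = 0.0;

    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            for (int k = 0; k < N; k++) {
                double x = (i - N / 2) * DX, y = (j - N / 2) * DX, z = k * DX;
                double phi0  = exp(-0.03 * z);     /* attenuated primary fluence */
                double rho_c = 0.03;               /* collisions per unit fluence per cm */
                double dx = px - x, dy = py - y, dz = pz - z;
                double r = sqrt(dx * dx + dy * dy + dz * dz);
                double theta = atan2(sqrt(dx * dx + dy * dy), dz);
                dose += phi0 * rho_c * dpb_kernel(r, theta) * dV;
            }
    printf("dose at (0, 0, 4.1 cm) = %.4e (arbitrary units)\n", dose);
    return 0;
}
```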
In principle, the input data of the dose spread array and the differential pencil beam methods are identical. For dose calculation in a heterogeneous medium, the density information between the primary interaction site and the dose point is sampled using first-order scatter ray tracing. Density scaling using the O'Connor theorem is employed, and a dose value in an equivalent homogeneous geometry is adopted.

1.6.2 Explicit modelling of particle interaction and transport

None of the current photon dose algorithms include the effects of atomic number changes in a heterogeneous phantom, which produce changes in electron multiple scattering and pair production. Moreover, the presence of inhomogeneities with different scattering and stopping powers perturbs the transport of the secondary electrons differently compared to inhomogeneities with only density variations [110]. This leads to two important methods that are potentially able to handle these limitations.

A. Random methods: Monte Carlo

Of all the possible methods of predicting a dose distribution due to an incident photon beam in a heterogeneous phantom, the one that is potentially the most accurate is Monte Carlo modeling. This modelling provides an alternative to the correction methods by calculating the distribution in an inhomogeneous medium from first principles, using the physics of photon and electron transport [69].

The transport of an incident particle, and of the particles that are subsequently set in motion, is referred to as a particle history. Photons have a limited number of interactions before their histories terminate, which makes it easy to simulate all interactions directly. In electron transport, however, the number of Coulomb interactions with atomic nuclei is so large that a direct Monte Carlo simulation is impractical. Instead, multiple scattering theories are used to sample angular deflections and energy losses in successive short track segments. The lengths of these segments (step sizes) are chosen such that: 1) a sufficient number of collisions occurs within each segment, so that the application of multiple scattering theories is justified; and 2) the cumulative deflections and energy losses in each segment are small enough that the "condensed" history model provides a sufficiently accurate simulation of electron-track generation, boundary crossings and the scoring of energy deposition. In this way "condensed" history techniques have been developed in which "microscopic" interactions are utilized to provide a "macroscopic" representation of the particle transport in heterogeneous media [3, 10, 15, 53, 85].

A dose distribution can be calculated by summing the energy deposition in each particle history. However, a large number of histories (depending on the beam field size) are required before the uncertainty in the distribution is small enough for it to be used in treatment planning [42, 64, 69]. Furthermore, long calculation times are a pronounced problem in conformal therapy. In such settings, iterative algorithms are utilized for dose distribution optimization, requiring the dose to be recomputed many times during the planning procedure. Monte Carlo simulation is currently used for beam characterization, benchmarking and other special studies. At present, a few centers [42, 64, 101] adopt Monte Carlo techniques for treatment planning; however, it is unlikely that this practice will become routine in the next decade [2].
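The cartoon below shows the skeleton of a photon "history" in C: a free path is sampled from the exponential attenuation law and the depth of the first interaction is scored. It is only a schematic illustration of the history concept, with an arbitrary attenuation coefficient; it is not the EGS4 or GEANT3 transport and ignores scattering, secondary electrons and energy dependence.

```c
/* Schematic photon histories: sample the depth of first interaction from
 * the exponential attenuation law and score it in depth bins. Real codes
 * also sample the interaction type and transport all secondaries. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define NBIN 30
#define DZ   1.0        /* bin width, cm */
#define MU   0.03       /* illustrative attenuation coefficient, 1/cm */

int main(void)
{
    long coll[NBIN] = { 0 };
    long nhist = 1000000;

    srand(12345);
    for (long h = 0; h < nhist; h++) {
        /* uniform random number in (0,1) */
        double xi = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double z  = -log(xi) / MU;            /* sampled free path */
        if (z < NBIN * DZ)
            coll[(int)(z / DZ)]++;            /* score first collision depth */
    }
    for (int i = 0; i < NBIN; i++)
        printf("%5.1f cm  %8ld collisions\n", (i + 0.5) * DZ, coll[i]);
    return 0;
}
```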
The phase space of radiation beam particles can be completely represented by the vectorial energy fluence differential in energy and directions for photons, $^n> for charged particles (electrons and positrons), The dose at a point r can be derived from these fluences as [89] D(r) = - - L f V . S g n f r ) + V.$f , n - q{r)) (1.20) where p is the mass density and q is the rest mass energy change per unit 25 volume. The main problem with this method lies in the evaluation of the resulting fluence distribution. i) Boltzmann transport equation The Boltzmann transport equation can be the starting point for de-scribing radiation transport [9, 14]. It ensures conservation of energy and can govern the production and loss of particles. The linear time-independent integral form of the Boltzmann equation is given as where phase space is described by r, a position vector, and v, velocity vector. $ is the energy fluence and Q is the source distribution of primary parti-cles. The interaction kernel I(v' —> v;r') takes into account the interactions that change the particle velocity, whether due to energy loss (absorption or inelastic scattering) or direction changes (elastic scattering) and production of secondary particles. The transport kernel T(r' —> r; v) changes the pri-mary and secondary particle positions by determining the path taken between collisions [52]. The scattering and absorption processes make the Boltzmann equation difficult to solve except for very simplified cases [9]. Making use of such a complete deterministic approach would probably exceed the complexity and computational burden of the Monte Carlo method [2, 18]. ii) Fermi-Eyges transport equation In his search for a cosmic-ray theory, Fermi proposed a model for par-ticles diffusing in a medium in which the main interaction is due to multiple v;r')dv' + Q(r',v)\T(r' r;v)dr' (1.21) 26 Coulomb scattering [88]. A charged particle initially traveling along the z axis undergoes angular deflection 9 with an azimuthal angle <f> (direction coordi-nates) after interacting with the Coulomb field of nuclei in the medium. Due to cylindrical symmetry around the z axis, calculations can be simplified by working in a two dimensional plane (x — z plane) and later generalizing the results to three dimensional space. The charged particle projected angular deflection onto the x — z plane is given by 9X where tan 9X = tan 9 cos <j>, (1-22) and its position coordinates are x and z. A distribution function Px(x, 9X, z) is defined such that Px(x, 9X, z)AxA9x is the probability that the electron will be located between x and x + Ax and have direction between 9X and 9X + A9X, when it is at depth z. It was demon-strated that Px must satisfy the following equation: dPx „ dPx k d2Px „ rtoN in which a small angle approximation was invoked by setting sin# « 9. Fermi provided a solution for the above partial differential equation for a special case of contant k. Eyges [35] solved the above equation for a depth dependent k (k(z)). k(z) is interpreted as the linear scattering power of a medium and is given as [79] m = Tz. d . 2 4 ) The depth dependence of the scattering power of a medium arises be-cause some particles lose an appreciable amount of energy through hard and 27 soft collisions while traversing that medium. The degree of this dependence is related to the type of incident particle and the nature of the medium. An example are electrons entering a water phantom where approximately 2 MeV of their kinetic energy is lost per 1 cm of penetration depth. 
The solution to Equation 1.23 can be expressed as T>( a \ 1 -(AQx2-2A1z6x+A292x) Px(x, 9X, z) = exp — , (1.25) •K^A0A2 - Af {A-0A.2 - J±i) where the parameters An are called the "scattering moments" of the medium and are given as An{z) = f* k(z')(z - z')ndz'. (1.26) Jo Then, according to Equation 1.20, finding the dose at a given depth z for electrons passing between x and x + Ax requires finding the fluence or the number of electrons irrespective of direction. Thus the location probability density is defined as [47, 48] /+00 Px(x,9x,z)d9x, (1.27) -oo which after evaluation yields a Gaussian of the from: 1 x2 Gx(x,z) = - . e x p - — T T . (1.28) y/nA2(z) A2{z) The solution to the Fermi-Eyges equation in y and 6y is identical to the above, and the three dimensional form of the location probability (Equation 1.28) is finally given as G(x, y, z) = , exp ——j--. (1.29) y/nA2(z) M[z) 28 Three important limitations of the Fermi-Eyges theory are: first the use of small angle approximation in the scattering of particles, second the neglect of bremss-trahlung energy losses which are important in high-z materials and most importantly third is the absence of delta ray (r5 — ray) production. The third limitation has restricted the use of this theory in electron dose calculation as in its original form. However, the incorporation of measured electron beam data made the Fermi-Eyges theory a successful approach for electron beam treatment planning [27, 44, 83, 104]. 1.7 Thesis objectives and highlights Two main objectives are pursued in this thesis. The first is to quantitatively study the extent of the discrepancy be-tween measurements and current dose calculation algorithms when applied to extreme inhomogeneities like air gaps. The implemented versions of these algorithms on commercial CT-based treatment planning systems were used. The second is to devise an analytical model for inhomogeneity correction incorporating electron transport. We will introduce the concept of separating the inhomogeneity correction factor into primary and scatter factors. For primary calculation, a detailed electron transport is done using the Fermi-Eyges theory. For the scatter correction, a semi-empirical model based on ratios of TMRs is discussed. Important features of our approach are: - Present models require Monte Carlo precalculated data in water and 29 density scaling operations to account for inhomogeneities. The aim of our method is to use basic physics to come up with an analytical method that is practicle when applied to radiation therapy treatment planning. - Our approach is based on a knowledge of the physics of photon and electron interactions with matter. Calculations for primary Compton dose distribution are made from first principles and a comparison with EGS4 Monte Carlo simulation is conducted. Scatter correction is of less importance and it is presented along with the primary correction as a total correction factor model. - Special attention is given to computation time. An optimization proce-dure is introduced to make the application of this method feasible in a clinical setting. As an additional topic, the clinical applicattion of GEANT3, a Monte Carlo code based on CERN library, will be investigated. The main goals are to explore the different electron transport modes implemented in GEANT3 and make a direct comparison with EGS4 and phantom experiments. Therefore, this thesis is composed of three main chapters, an introduc-tion and conclusions. 
1.7 Thesis objectives and highlights

Two main objectives are pursued in this thesis.

The first is to quantitatively study the extent of the discrepancy between measurements and current dose calculation algorithms when applied to extreme inhomogeneities like air gaps. The implemented versions of these algorithms on commercial CT-based treatment planning systems were used.

The second is to devise an analytical model for inhomogeneity correction incorporating electron transport. We will introduce the concept of separating the inhomogeneity correction factor into primary and scatter factors. For the primary calculation, detailed electron transport is done using the Fermi-Eyges theory. For the scatter correction, a semi-empirical model based on ratios of TMRs is discussed. Important features of our approach are:

- Present models require Monte Carlo precalculated data in water and density scaling operations to account for inhomogeneities. The aim of our method is to use basic physics to arrive at an analytical method that is practical when applied to radiation therapy treatment planning.

- Our approach is based on a knowledge of the physics of photon and electron interactions with matter. Calculations of the primary Compton dose distribution are made from first principles, and a comparison with EGS4 Monte Carlo simulation is conducted. The scatter correction is of less importance, and it is presented along with the primary correction as a total correction factor model.

- Special attention is given to computation time. An optimization procedure is introduced to make the application of this method feasible in a clinical setting.

As an additional topic, the clinical application of GEANT3, a Monte Carlo code based on the CERN library, will be investigated. The main goals are to explore the different electron transport modes implemented in GEANT3 and to make a direct comparison with EGS4 and phantom experiments.

Therefore, this thesis is composed of three main chapters, an introduction and conclusions. Some analytical derivations related to the third chapter are provided in Appendix A. For reference, the computer code BATOUL.c, written in the C language, which was used to implement our method, is provided in Appendix B. Appendix C contains the computer routine DOSE.f, written in FORTRAN, which was the usercode driver for the GEANT3 simulations.

Chapter 2  Experiments and treatment planning calculations in phantoms with air cavities

2.1 Introduction

Simplistic radiation dose calculations at a point usually presuppose that the point is located in a region where electronic equilibrium exists. This assumption breaks down at interfaces of materials of different density and atomic number. A loss of both longitudinal and lateral electronic equilibrium occurs, the extent of which depends on the energy, the radiation field size and the range of the charged particles set in motion. As the x-ray energy is increased, the range of the charged particles set in motion also increases. Therefore, as the beam energy is increased the region of electronic disequilibrium becomes significant, particularly for low density tissues. These effects are clinically important in the thoracic region, where the lung, a low density organ, occupies a large volume. Accurate dosimetry is critical to reduce toxicity in radiation therapy.

Dose measurements and calculations both within and beyond inhomogeneities have been extensively reported [7, 26, 31]. The most commonly used dose correction methods for lung inhomogeneity correct the dose for changes in photon fluence but do not account for changes in charged particle transport, and therefore may not accurately predict the dose in interface regions [7, 66, 68]. These methods are: the ratio of TARs, the Batho or power-law correction [6], the equivalent TAR (ETAR) method [94, 95], or variations thereof [105].

The introduction of a low density material alters the radiation transport in a complex manner. The primary transmitted radiation is increased because the lower density material attenuates the beam less. However, per mm^3, there are fewer photons scattered by the lower density material. This produces a decrease in dose. There are also fewer charged particles generated per mm^3 of low density material, but their ranges are larger. The result is that inhomogeneity corrections will be greater for smaller field sizes and lower x-ray energies [66, 68].

Besides the lung, other regions of low density are the air cavities situated in the upper respiratory tract. These may be larger than 1x1 cm^2 in cross section. The dose received by tissues situated at the interface of the air channel or beyond is of clinical significance and may influence the outcome of treatments with curative intent. Several studies have reported underdosing at interfaces due to loss of electronic equilibrium [33, 80]. The degree of underdosing depends on the energy of the radiation beam, the field size and the geometry of the air cavity. Directly at the interface the dose is lower for higher energy and smaller field size [8, 57, 109]. Beyond the interface the dose becomes greater than its corresponding value in homogeneous tissue [57]. In an investigation of air cavity interface dose in a humanoid phantom, the variation in dose with energy was found to be clinically insignificant; however, a strong dependence on the shape and geometry of the air cavity was reported [77].
The present study was initiated to quantify the extent of the discrepancy between measurements and calculations using three treatment planning systems and to assess the dose in heterogeneous phantoms, at the air/polystyrene interface and at depth, for varying air gap geometries.

2.2 Experimental Procedure

Irradiations were performed with 6 MV and 18 MV x-rays produced by a Philips SL20 linear accelerator (Elekta Oncology Systems/Philips Medical Systems, Sweden and U.K.). Field sizes of 5x5 cm2 and 10x10 cm2 at a source to chamber distance (SCD) of 100 cm were used. The experimental setup is depicted in Figure 2.1. A parallel plate air ionization chamber (Markus PTW model 30-329, Victoreen Nuclear Associates, Carle Place, NY) was used for the measurements. This chamber has a polyethylene window of thickness 2.7 mg/cm2, a 5.4 mm diameter circular collector, a nominal charge collection volume of 0.05 cm3 and a 2 mm electrode separation. The chamber is located at the air/polystyrene interface at a constant distance of 100 cm from the source (SCD setup). In Figure 2.1(c), a constant distance of 100 cm between the source and the distal air/polystyrene interface was maintained (SAD setup) while the chamber was moved down the phantom. The effective point of measurement is the proximal surface of the chamber. The polarity effect was investigated and found to be less than 1%.

Figure 2.1: The experimental setup used for the three experiments is shown. The black rectangle represents the position of the Markus chamber. The air gap depth was varied from 1 cm to 5 cm while keeping a 3 cm polystyrene buildup slab above it. The dose D_i was measured at the air/polystyrene interface and the dose D_h was measured at a depth of 3 cm of homogeneous phantom. The relative dose was then determined from D_i/D_h.

In the first experiment, the lateral dimension of the air gap was made larger than the radiation field size (Figure 2.1a). In the second experiment the lateral dimension of the air gap was varied from 0x0 cm2 (no air gap) to 20x20 cm2 (Figure 2.1b). The width of the air cavity was altered to investigate the contribution of scattered radiation to the interface dose for the 5x5 cm2 field size. The depth (t) of the air gap used was 3 and 5 cm for the second experiment. The dose normalization point was situated in a homogeneous polystyrene phantom at depths 3 cm + t for the 0x0 cm2 air gap. The effect of primary photon attenuation was investigated by adding polystyrene layers on top of the point of measurement, keeping the SAD (source to interface distance) constant at 100 cm. Measurements were carried out for air gaps of 5 cm depth. Polystyrene sheets were added to increase the depth of measurement up to 4 cm beyond the air/polystyrene interface (Figure 2.1c). Again dose normalization was at 8 cm deep in the homogeneous phantom.

In these experiments, relative ionization measurements read by the chamber are not exactly the same as relative dose measurements, but should be quite close because the mass stopping power ratio (Equation 1.9) changes little as a function of depth [66]. Therefore, the relative dose at a given depth was equated to the relative ionization. Furthermore, several ionization readings were taken per point and the results were averaged. Repeated measurement series taken on different days indicate that dose measurements at a given depth have a statistical uncertainty of the order of 1%. Hence, relative dose values have an uncertainty of the order of 2%.
2.3 Treatment Planning Systems

Three treatment planning systems were used for the relative dose calculation. These are: Cadplan (Varian Oncology Systems, Varian/Dosetek, Zug, Switzerland), General Electric Target 2 (General Electric, Milwaukee, WI), and an in-house developed treatment planning system (Xplan, Vancouver Cancer Centre, B.C., Canada). The Cadplan system employs three methods of inhomogeneity correction which are user selectable: the Batho, modified Batho and the equivalent TAR (ETAR) methods. The modified Batho method uses only the descending portion of the TMR curve. This was suggested after imposing continuity of the correction factor at the interface between dissimilar media [97]. This calculation does not reflect build-down and re-buildup at interfaces.

Mathematically, the correction factor for the Batho method is given by Equation 1.13 evaluated at the depth z − z_m as defined in Chapter 1. In the modified Batho method the depth z_max is added to the depth of calculation as given by Equation 1.13. Therefore the Batho and the modified Batho models differ only in the depth definition for the TAR/TMR value. The second system studied is the GE Target 2. It uses the Batho method with the capability of doing 3D dose calculation. Finally, an in-house system (Xplan) implemented at the Vancouver Cancer Centre was also assessed. This system uses the same Batho method as Cadplan with only 2D dose calculation capabilities for 6 MV photons.

2.4 Results and Discussion

Dose perturbations produced by air gaps are due to alterations in both photon and electron transport. As the air gap thickness increases, electrons from the upper polystyrene layer no longer reach the point of measurement, giving rise to a second build up region. Thus the point of measurement effectively represents another surface.

Figure 2.2: Comparison of calculated and measured relative dose (D_i/D_h) for 6 MV photons is shown for (a) 10x10 cm2 and (b) 5x5 cm2 field sizes. Calculations of relative dose using the Batho, modified Batho and ETAR correction methods as implemented on Cadplan lie between 1 and 0.9 while measurements with the Markus chamber show a decrease down to 0.45. Results are normalized to unity at depth 3 cm in the homogeneous polystyrene phantom.

Figure 2.2 shows results of experimental measurements and calculations of the relative dose (D_i/D_h) for 6 MV photons using the Batho, modified Batho and equivalent TAR (ETAR) inhomogeneity correction methods on the Cadplan system for a 10x10 cm2 (Figure 2.2a) and a 5x5 cm2 (Figure 2.2b) field size. In this experiment the 3 cm polystyrene slab is moved away from the point of measurement and the air gap increases from 1 to 5 cm. The decrease in the measured relative dose is attributed to changes in electron transport and loss of electronic equilibrium. As the air gap increases fewer secondary electrons are seen by the detector since they are dispersed over a greater range. However, for larger field sizes an increase in the scatter component predominates, producing greater relative dose. Calculation methods only account for changes in photon transport while assuming electronic equilibrium. The decrease in calculated relative dose is much less than that measured. In our experimental setup the primary beam attenuation is unaffected by the size of the air gap.
Therefore, the decrease in relative dose is attributed to less scattered radiation reaching the point of calculation as the air gap increases up to 3 cm, after which point the relative dose remains constant. However, the dose is significantly overpredicted in the calculation. For instance, the relative dose calculated using the Batho method for a 5x5 cm2 field irradiated with a 6 MV beam behind a 5 cm air gap is 0.9 whereas the measured relative dose is 0.45. While air gaps in the head and neck region are typically less than 3 cm and larger air gaps are not encountered clinically, our experiments are designed to illustrate the limitations of the treatment planning algorithms. A similar trend of the relative dose as a function of air gap thickness was observed using the three methods, with the Batho method differing by 5 percent from the modified Batho and ETAR methods.

Figure 2.3: Relative dose calculation is shown as a function of air gap thickness using the Batho correction method as implemented on the Cadplan, GE Target 2 and Xplan treatment planning systems. A 6 MV beam and two field sizes, (a) 10x10 cm2 and (b) 5x5 cm2, were used.

To illustrate differences that may occur with the implementation of the Batho method on different treatment planning systems, results based on three treatment planning systems are presented in Figure 2.3. Relative dose calculations for the 6 MV photon energy and two field sizes (10x10 cm2 and 5x5 cm2) are shown. The field size dependence is insignificant in the case of the GE Target 2 system. All the results for the 3 treatment planning systems lie within a 5 percent range.

In the second experiment, where the side walls are smaller than the radiation field size (Figure 2.1b), photons scattered from the walls of the cavity, as well as electron transport, contribute to the overall behaviour of the absolute dose with change in the air gap thickness. Variation in the lateral dimensions of the cavity will eventually influence this behaviour. As the lateral dimensions of the cavity were increased from 0x0 cm2 (no cavity) to 2x2 cm2 the relative dose increased since the primary photon fluence is no longer attenuated. For both 6 MV (Figure 2.4a) and 18 MV (Figure 2.4b) the dose reaches a maximum at lateral dimensions of 2x2 cm2. It decreases afterwards due to a smaller number of photons scattered by the walls reaching the detector. Furthermore, the increase in the air gap thickness resulted at first in a higher dose because of greater scatter from the thicker walls; but as the air gap dimension approaches the radiation field size dimension, thicker air gaps result in a lower dose. The latter is consistent with the results of the first experiment (Figure 2.2) where the phenomenon is attributed to a reduction in electron transport. Calculations are not presented at this point since their limitations at the interface were already shown in Figures 2.2 and 2.3.

Figure 2.4: Measurements of the relative dose as a function of air gap side for a 5x5 cm2 field size are shown. Sides were varied from 0x0 cm2 (no gap) to 20x20 cm2 for two different air gap thicknesses (3 and 5 cm): (a) 6 MV photons and (b) 18 MV photons.
The dose was normalized to the dose in the homogeneous phantom with 6 cm (t = 3 cm) and 8 cm (t = 5 cm) of polystyrene over the detector.

Figure 2.5: Measurements of the relative dose as a function of depth in a phantom with a 5 cm air gap thickness for 6 MV photons are shown, (a) for a 10x10 cm2 field size and (b) for a 5x5 cm2 field size. The dose was normalized to the dose in the homogeneous phantom at 8 cm deep. The position of the air gap is indicated by the bordered region.

Figure 2.6: Measurements of the relative dose as a function of depth in a phantom with a 5 cm air gap thickness for 18 MV photons are shown, (a) for a 10x10 cm2 field size and (b) for a 5x5 cm2 field size. The dose was normalized to the dose in the homogeneous phantom at 8 cm deep. The position of the air gap is indicated by the bordered region.

Figure 2.5 (a, b) shows the variation of dose as a function of depth for (a) 10x10 cm2 and (b) 5x5 cm2 fields irradiated with 6 MV photons using an air gap thickness of 5 cm. Figure 2.6 (a, b) shows the results of the same irradiation conditions for 18 MV photons. The position of the air gap is indicated by the bordered area. There was a build up in the dose after the air/polystyrene interface which was more significant for the smaller field size. The maximum dose occurs when the thickness of polystyrene approaches the maximum range of primary electrons for the respective energies investigated (6 and 18 MV) in this second buildup region. Furthermore, the application of the ETAR method as implemented in Cadplan did not predict this second buildup. There was good agreement at greater depths, beyond the depth of maximum dose (1.5 cm for 6 MV and 2.5 cm for 18 MV), where electronic equilibrium was re-established.

2.5 Summary

A series of experiments were carried out to simulate air cavities in a polystyrene phantom. Dose was measured at an air/polystyrene interface and as a function of depth. Results of experiments were compared to calculations done using three treatment planning systems. These systems employ the Batho, modified Batho and the equivalent tissue-air-ratio (ETAR) methods for inhomogeneity corrections. The measured interface dose decreased by 55% for a 5 cm air gap, 5x5 cm2 field size and 6 MV photons. This has been attributed to lack of electronic equilibrium and dispersion of secondary particles transported through the air gap. These results are at variance with predictions of calculations using the three treatment planning systems, for which only a 10% decrease was calculated. This is because the calculation algorithms employed do not incorporate electron transport. Further experiments were conducted to study the contribution of scatter from the sides of the walls of the cavities. Dose measurements as a function of depth were also performed to investigate the effect of primary fluence attenuation. The Batho algorithm did not show any sensitivity to the position of the air gap side walls. This points to the need for proper inclusion of disequilibrium effects and the shape of the inhomogeneity.
Chapter 3
Inhomogeneity correction factor incorporating explicit electron transport

3.1 Introduction

Although most of the radiation treatment planning algorithms accurately predict dose in homogeneous tissues, there are differences in accuracy for dose calculations within and beyond heterogeneous tissues, especially in the interface region of dissimilar media [2, 26, 31, 66, 68, 93]. In inhomogeneous media the radiation dose to homogeneous water equivalent tissue is calculated and modified by an inhomogeneity correction factor (ICF). The ICF is defined as the ratio of the dose beyond or within inhomogeneous tissue to that in homogeneous tissue at the same geometric point [2, 6, 94].

The most accurate calculation methods currently available for radiation dose calculations in inhomogeneous tissues are Monte Carlo based techniques [28, 42, 43, 64]. However, these techniques are too computationally intensive to use for routine clinical treatment planning with currently available computer hardware. The next most physically rigorous x-ray dose calculation methods are the convolution/superposition algorithms [1, 19, 67].

The convolution/superposition method was developed with particular intent to handle the condition of electronic disequilibrium in high-energy photon beams. For these beams the electrons set in motion have long tracks, of the order of a few centimeters. In this method, the Monte Carlo method is used to calculate the dose distribution resulting from photon interaction at a point, referred to as the photon kernel. This kernel can be divided into two components [67, 107]: the primary kernel, defined as the spatial distribution of energy deposited by electrons and positrons emerging from the site of the primary photon interaction; and the scatter kernel, which represents the distribution of deposited energy resulting from the interaction of scattered photons. For the scatter kernel, the electrons generated are generally lower in energy than for the primary kernel and their energy can be assumed to be deposited locally.

In an inhomogeneous phantom, the convolution/superposition method attempts to correct for the effects of density variations by linearly scaling the primary and scatter kernels. The rectilinear scaling method is based on O'Connor's theorem [78] to account for secondary electron and scattered photon transport in inhomogeneous media [54, 67]. Woo and Cunningham [106] showed that rectilinear scaling leads to an overestimation in dose in a layer beyond a high to low-density interface due to primary interactions before the interface. This overestimation (up to 50% in the case of an air gap) occurs because the largest contribution to lateral electron spread at a plane is from scattering events furthest from the plane, and this effect is not accounted for by the rectilinear scaling. This discrepancy increases with larger air gap thicknesses but decreases with larger field sizes [93].

Yu et al [110] calculated the primary photon dose inclusive of explicit electron transport, and also the scatter photon dose using the convolution/superposition method. In their work, Fermi-Eyges electron scattering theory is used to transport the electrons set in motion at the primary photon interaction site, from which the dose deposited was subsequently calculated. The scatter photon component, which contributes a portion of the total dose, is calculated using the dose spread array method of Mackie et al [67]. Only aluminum inhomogeneities were considered in their study.
However, the main emphasis of their model for predicting dose accurately in the vicinity of inhomogeneities is on the transport of the secondary electrons generated at the primary interaction site. Yu et al's approach [110] has three major drawbacks: 1) the need to derive Monte Carlo simulated parameters (initial scattering moments), which makes it similar to convolution algorithms; 2) the use of certain assumptions and parameter values in the particle transport integrals (e.g., that all secondary electrons have the same mean angle), which need to be quantified; 3) high computational cost (i.e. routine clinical dose calculation using this method is impractical). The first drawback renders this model a Monte Carlo dependent method requiring precalculation of the parameters for all energies and materials encountered in radiation therapy. The second, the use of arbitrary numbers for parameters, makes it subject to systematic errors. The third drawback remains an outstanding problem that needs to be addressed.

In this work, we present a method to calculate the inhomogeneity correction factor incorporating explicit electron transport. Up to now, the concept of primary and scatter dose separation has been applied to the calculation of total dose. In our method, we use this concept of separation to calculate a primary inhomogeneity correction factor (ICF_p) and a scatter inhomogeneity correction factor (ICF_s). An ICF is eventually calculated as a weighted sum of the primary and scatter ICF. This method will be described and tested in inhomogeneous phantoms containing air cavities in tissue. For the ICF_p calculation, secondary electrons emerging from primary Compton interactions are transported. The explicit electron transport will be based on the "photon-electron cascade model" approach [110] and a Gaussian multiple-scattering theory [48, 100]. For the scatter ICF calculation (ICF_s), we present a semi-empirical method that is similar to the "equivalent tissue-air ratio" method and corrects for scatter dose in inhomogeneous phantoms. The result is a correction method that accounts for both photon and electron transport and can be calculated in a time appropriate for routine clinical implementation.

3.2 Theory and Methods

The theoretical development of our approach is presented, which includes two subsections: primary dose correction and scatter dose correction. Furthermore, the descriptions of the Monte Carlo simulation and experiments used to verify our results are discussed in later sections.

Figure 3.1: The multilayered geometry used in our calculations is shown.

3.2.1 Theory

The inhomogeneity correction factor (ICF) is defined as the ratio of dose in the inhomogeneous phantom to the dose in the homogeneous phantom at the same geometric point. Using the concept of primary and scatter separation, ICF will be given as

ICF = \frac{D_{p,inh} + D_{s,inh}}{D_{p,hom} + D_{s,hom}}, (3.1)

where D_p and D_s are the corresponding primary and scatter components of dose in the inhomogeneous and homogeneous phantoms. In fact, letting our correction factor for the primary dose be:

ICF_p = \frac{D_{p,inh}}{D_{p,hom}}, (3.2)

and the correction factor for the scatter dose be:

ICF_s = \frac{D_{s,inh}}{D_{s,hom}}, (3.3)

we can combine the two to come up with the total correction factor as follows:

ICF = ICF_p \cdot R + ICF_s \cdot (1 - R), (3.4)

where R = D_{p,hom}/D_{total,hom} is the ratio of the primary dose over the total dose in the homogeneous phantom. The next two sections will be devoted to the derivation of ICF_p and ICF_s.
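As a simple illustration of Equation 3.4, the following minimal C sketch combines a primary and a scatter correction factor into a total ICF. It is not an excerpt from the thesis code BATOUL.c; the function name and the numerical values are hypothetical.

#include <stdio.h>

/* Equation 3.4: ICF = ICF_p * R + ICF_s * (1 - R), where R is the ratio of
   primary dose to total dose in the homogeneous phantom. */
double total_icf(double icf_p, double icf_s, double R)
{
    return icf_p * R + icf_s * (1.0 - R);
}

int main(void)
{
    /* Hypothetical values at one depth for a 10x10 cm2, 6 MV beam. */
    double icf_p = 0.80;  /* primary inhomogeneity correction factor */
    double icf_s = 0.95;  /* scatter inhomogeneity correction factor */
    double R     = 0.85;  /* primary-to-total dose ratio in water    */

    printf("ICF = %.3f\n", total_icf(icf_p, icf_s, R));
    return 0;
}

In practice R is obtained once, from Monte Carlo simulation in homogeneous water for the field size of interest, as discussed in Section 3.3.2.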
I. Primary inhomogeneity correction factor

Two main aspects are considered in our ICF_p derivation: electron production and electron transport. As was shown in Figure 1.2, the Compton process is dominant in the therapeutic energy range. Only Compton interactions will be considered in this study and hence we assume that

ICF_p \approx ICF_{p,Comp}, (3.5)

and the primary Compton ICF will be the representative of the primary ICF. However, R given in Equation 3.4 remains the ratio of the primary dose to the total dose resulting from all physical interactions.

A. Electron production

i) Initial secondary electron fluence produced by monoenergetic photons

Primary dose at a given point can be calculated by first estimating the fluence of secondary electrons at that point. Secondary electrons are the ones that emerge from the photon interaction site and their fluence is defined as the number of particles passing through a unit area perpendicular to the z direction (Figure 3.1). The energies of these electrons consist of a continuous spectrum and they can be grouped into discrete energy intervals. The product of the electron fluence in a given energy interval with the corresponding stopping power yields the radiation dose deposited in that interval [56, 110, 100]. The primary dose is then the summation over all intervals and may be expressed as

D_p(z) = \int_{E_{cut}}^{E_{max}} \frac{S_c(E)}{\rho} \frac{d\phi_e(E)}{dE}\, dE, (3.6)

where S_c/\rho is the collisional mass stopping power and d\phi_e(E)/dE is the differential energy secondary electron fluence between E − dE/2 and E + dE/2, where E is a value in the range between E_cut and E_max (minimum kinetic energy cut-off value and maximum kinetic energy of secondary electrons).

Considering Compton interactions only, the amount of energy transferred to secondary electrons set in motion can be obtained from the Klein-Nishina differential energy cross sections [5, 50]. Given a photon fluence (\phi_p) interacting in a layer of thickness dz, the initial differential electron fluence set in motion with energies between E − dE/2 and E + dE/2 (at the interaction site) is calculated as

\frac{d\phi_{e,0}(z, E)}{dE} = \phi_p(z)\, \rho N_e\, \frac{d\sigma(h\nu, E)}{dE}\, dz, (3.7)

where \rho is the density of the medium and N_e is the corresponding number of electrons per unit mass. For primary dose calculation, the photon fluence
This approach is time consuming since accurate linac spectra contain a large number of photon energy intervals [71]. In our model, we propose to calculate a mean differential energy cross section for a photon spectrum. Given a photon distribution f(hv) and a differ-53 ential energy cross section , the mean differential energy cross section for photon interaction resulting in electrons set into motion with energies be-tween E - *f and E + 4P is given by -h . (3.10) 2 da[E) = Itmax f(hu)d-^^dhu dE ft"™* }(hu)dhu For discrete photon spectra, the integration in Equation 3.9 is replaced by a weighted sum over all photon energy intervals of the spectrum. On the other hand the attenuation of the polyenergetic photon fluence is still assumed exponential as in Equation 3.7. The linear attenuation coefficient resulting from Compton interactions of polyenergetic photons (pNea(hu)) is calculated at the mean energy of the photon spectrum (hu). These mean energies (along with the spectra) can be obtained from the literature [71] and a is obtained by integrating Equation 3.9 over all electron energies to obtain ln(l + 2a) l+3a 1 2a~ (l + 2a)'/' 1 j which is used in Equation 3.7. If in any given calculation primary Compton dose is to be evaluated while allowing all types of photon interactions, a(hu) should be replaced by n(hu). Tables for fx(hu) are found in the litterature [50], and values for 6 MV and 18 MV spectra are listed in Table 3.1. B . Electron transport i) Pencil beam The Fermi-Eyges equation [35] describes the lateral and angular distri-bution of charged particles undergoing multiple small angle elastic scattering 54 Table 3.1: The mass attenuation coefficient for polyenergetic photon beams (p/p) is shown evaluated at the average photon energy of spectra. hv (MeV) U/p(hy) (cm /e) Water Air 6 M V 1.92 0.051 0.046 18 M V 5.82 0.028 0.026 in passing through a layer of matter. The Fermi-Eyges theory is based on the assumption that the particles have not scattered to large angles, however, variations in the scattering properties of the medium with depth are accounted for. First consider secondary electrons emerging from a photon interaction site with an angle 9 with respect to the incident photons. Then at the point (x, y, z) the location probability density is described as [48] „ ,. 1 (x — tan9xz)2 + (y — tan9vz)2 . , G ^ = W f e i ) -ia^J) ( 3 ' 1 2 ) where cr2(z, 9) is given by a\z) = l-A2{z) = i jT k(z')(z - z'fdz' (3.13) in which A2(z) is the second moment of the linear scattering power and k(z) is the linear angular scattering power at depth z in the medium, and 9X and 9y are the projections of the polar angle 9 onto the X-Z and Y-Z planes. More 55 specifically these projections can be written as [48] tandx = tanOcoscj) (3-14) tandy = tan9sin<j>, where <p is the azimuthal angle. The analytical derivation of the scattering moments in homogeneous and inhomogeneous phantoms is given in Appendix A. Yu et al [110] have derived a set of recursion relations for the zeroeth, first, and second moments of the linear scattering power for a two-layer phantom. Appendix A gives a generalization of the ith moment in a n-layer phantom. The spatial distribution at (x, y, z) due to electrons recoiling with an angle 9 over all <f> is found as G{x, y, z; 9) = . (3.15) Jo # The solution to this integral was given by Wang et al [100]. 
Using the present notation, the spatial probability will be

G(x, y, z; \theta) = \frac{1}{2\pi\sigma^2(z, \theta)} \exp\left(-\frac{x^2 + y^2 + z^2\tan^2\theta}{2\sigma^2(z, \theta)}\right) I_0\left(\frac{\sqrt{x^2 + y^2}\, z\tan\theta}{\sigma^2(z, \theta)}\right), (3.16)

where I_0 is the modified Bessel function of the first kind, of zero order. To reduce the number of variables, the energy-angle relationship for the Compton interaction (Equation 1.2) is employed. Therefore, the secondary electron fluence at depth z for electrons originating between −dx/2 and dx/2, and −dy/2 and dy/2, having energies between E − dE/2 and E + dE/2, will be the convolution of the initial electron fluence with the Fermi-Eyges Gaussian location density:

\frac{d\phi_e(x, y, z, E)}{dE} = \int_0^z \phi_p(z - z')\, \rho N_e\, \frac{\overline{d\sigma}(E)}{dE}\, G(x, y, z - z'; E)\, dx\, dy\, dz'. (3.17)

ii) Broad beam

First, we consider the case of a broad parallel photon beam with a field size [(−a, +a), (−b, +b)] (Figure 3.1). To account for photons interacting across the field, we extend the above convolution to the x and y dimensions. Mathematically the secondary electron fluence will be given as

\frac{d\phi_e(x, y, z, E)}{dE} = \int_{-a}^{+a}\int_{-b}^{+b}\int_0^z \phi_p(z - z')\, \rho N_e\, \frac{\overline{d\sigma}(E)}{dE}\, G(x - x', y - y', z - z'; E)\, dz'\, dy'\, dx', (3.18)

and hence the primary Compton dose can be written as

D_p(x, y, z) = \int_{E_{cut}}^{E_{max}}\int_{-a}^{+a}\int_{-b}^{+b}\int_0^z \frac{S_c(E)}{\rho}\, \phi_p(z - z')\, \rho N_e\, \frac{\overline{d\sigma}(E)}{dE}\, G(x - x', y - y', z - z'; E)\, dz'\, dy'\, dx'\, dE. (3.19)

The evaluation of the above four-dimensional integral in Equation 3.19 is a formidable task, and we have to resort to making some approximations. First, based on Figure 1.1, it is safe to assume that secondary electrons emerge in the forward direction from the interaction site. This makes the angle \theta equal to zero in Equation 3.16 and the location density purely Gaussian. This approximation will be quantified in the Results section. Second, if we only consider a layered phantom as in Figure 3.1, the initial secondary electron fluence at a depth z will be spatially invariant. Thus, the above integral can be evaluated explicitly over the lateral dimensions x' and y', and this yields an error function (erf). The resulting form of the dose function is a two-dimensional integral over the interaction depth z' and the initial secondary electron energy E:

D_p(x, y, z) = \int_{E_{cut}}^{E_{max}}\int_0^z \lambda\, \frac{S_c(E)}{\rho}\, \phi_p(z')\, \rho N_e\, \frac{\overline{d\sigma}(E)}{dE}\, \frac{1}{4}\left[\mathrm{erf}\left(\frac{x + a}{\sqrt{A_2(z - z')}}\right) - \mathrm{erf}\left(\frac{x - a}{\sqrt{A_2(z - z')}}\right)\right]\left[\mathrm{erf}\left(\frac{y + b}{\sqrt{A_2(z - z')}}\right) - \mathrm{erf}\left(\frac{y - b}{\sqrt{A_2(z - z')}}\right)\right] dz'\, dE. (3.20)

Following the "photon-electron cascade model" [110] in dealing with energy straggling, a parameter (\lambda) was used in Equation 3.20 for electrons crossing an interface between dissimilar media. \lambda accounts for the loss of electrons and represents the ratio of the number of electrons exiting the layer of water to the number entering the air gap. Equation 3.20 may be expressed in the form of a numerical summation over all energy intervals and over all depths to calculate the dose deposited at that point. Only secondary electrons which have kinetic energies greater than a cut-off energy (E_cut = 10 keV) are transported, and below E_cut their energy is considered to be deposited on the spot.

The above derivation of Compton primary dose is specific to a parallel incident photon beam (infinite source to surface distance, SSD). To obtain the absorbed dose due to a diverging photon beam (finite source to surface distance), a converging factor involving the inverse square law is applied [104]. This conversion can be stated as

D(x, y, z) = \left(\frac{SSD}{z + SSD}\right)^2 D_\infty\left(\frac{x \cdot SSD}{z + SSD}, \frac{y \cdot SSD}{z + SSD}, z\right), (3.21)

where D_\infty is the depth dose for the infinite SSD or broad parallel photon beam setup evaluated at the arguments inside the brackets. In our method we are not calculating the total dose but rather a correction factor to be applied to the dose in the equivalent homogeneous phantom. Hence, the primary Compton dose in the homogeneous and inhomogeneous cases are calculated simultaneously to do the proper normalization.
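The lateral error-function factor of Equation 3.20 and the divergence correction of Equation 3.21 are inexpensive to evaluate. The following C sketch shows both pieces; it is illustrative only, the names are not taken from BATOUL.c, and A2 here simply stands for the second scattering moment accumulated between the interaction depth and the depth of interest.

#include <math.h>
#include <stdio.h>

/* One-dimensional lateral factor of Equation 3.20: the Gaussian pencil
   kernel integrated across a field of half-width a,
   0.5 * [ erf((x + a)/sqrt(A2)) - erf((x - a)/sqrt(A2)) ].              */
static double lateral_factor(double x, double half_width, double A2)
{
    double s = sqrt(A2);
    return 0.5 * (erf((x + half_width) / s) - erf((x - half_width) / s));
}

/* Inverse-square mapping of Equation 3.21: dose for a diverging beam of
   finite SSD obtained from the parallel-beam (infinite SSD) dose d_inf.  */
static double diverge(double d_inf, double z, double ssd)
{
    double f = ssd / (z + ssd);
    return f * f * d_inf;
}

int main(void)
{
    double A2 = 1.2;                           /* example moment (cm^2)   */
    double fx = lateral_factor(0.0, 2.5, A2);  /* central axis, 5x5 cm2   */
    double fy = lateral_factor(0.0, 2.5, A2);

    printf("lateral weight = %.4f\n", fx * fy);
    printf("diverged dose  = %.4f\n", diverge(1.0, 10.0, 100.0));
    return 0;
}

The product of the two lateral factors multiplies the pencil-kernel term inside the double sum over z' and E.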
Hence, primary Compton dose in the homogeneous and inhomogeneous cases are calculated simultaneously to do the proper normalization. 58 II. Scatter inhomogeneity correction factor Because of the long range of secondary electrons emerging from the pri-mary interaction site (several centimeters for megavoltage energies), a detailed electron transport model was included, as discussed in the previous section, to account for perturbations resulting from phantom inhomogeneities. Fur-ther interactions of scattered radiation result in secondary electrons of much smaller range which can be considered to deposit their energy at the point of creation. This allows the use of approximate methods to correct for the scatter dose distribution. The "equivalent" tissue-air ratio method [95] attempts to correct for primary dose in a inhomogeneous medium through the use scaling to a water equivalent depth and the scatter dose through the scaling of the field size (Equation 1.14). For a scatter inhomogeneity correction factor, we propose the use of tissue-air ratios evaluated at the same depth in both homogeneous and inhomogeneous phantoms but using a scaled field size for the inhomogeneous phantom as follows: I C F s M ~ TAR{z,rY (3"22) where r' is the effective beam radius defined by r' = rp' (3.23) pJ='£Wmpijk/YlWi3k, (3-24) i,j,k ijjjfc and the beam radius r of a circular field is equivalent to the side of a square field s by [56] r = -j=. (3.25) 59 Instead of implementing Equations 3.22 and 3.23 for the calculation of the effective beam radius at every point, we used results given by the "equiv-alent" tissue-air ratio method (ETAR) to extract these values. Denoting the ETAR correction method as ICFETAR and following its definition in Section 1.4.1 Equation 1.14, we can state that TAR(z', r') = ICFETAR(z, r).TAR(z, r). (3.26) ICFETAR(Z, r) is calculated from the treatment planning system (Figures 2.5 and 2.6). Then the effective beam radius (r1) corresponding to TAR(z',r') at water equivalent depth (z') is found from the TAR tables. The use of tissue-maximum ratios (TMRs) instead of tissue-air ratios (TARs) is recommended in Equations 3.27 (see sections 1.4.2 and 1.6.1). TMR tables are generated by the treatment planning system based on measured precentage depth doses (PDDs) in water. The TMRs for our 6 MV beam are given in Table 3.2. For example, for a water phantom containing an air cavity between 3 cm and 8 cm if one needs to calculate r' at 11 cm depth for a 6 MV beam and 10x10 cm2 field size, the following procedure is recommended. First, the value of TMR in the inhomogeneous phantom at 11 cm depth is found from Equation 3.26 (0.867). The water equivalent depth (z1) at 11 cm is ~ 6 cm (3 + 5x0.001+ 3). In order to extract the effective beam radius from the TMR table a search algorithm is invoked and interpolation between the data values is used. So, the search will be in the 6 cm depth row and a value for the equivalent beam radius is found for a TMR of 0.867. In this example, the effective beam radius is 2.42 cm or 4.3x4.3 cm2 square field. This value is then used in Equation 3.22 and the scatter inhomogeneity correction factor (ICFS) is derived at that depth. 60 Table 3.2: Cadplan TMR table for 6 MV beam is shown as a function of depth and field size. The 0x0 cm2 TMR are extrapolation from higher field size values. 
Table 3.2: Cadplan TMR table for the 6 MV beam as a function of depth and field size. The 0x0 cm2 TMRs are extrapolated from higher field size values.

Depth (cm)   Field size (cm2)
             0x0      3x3      5x5      10x10    15x15    20x20
0            0.111    0.146    0.169    0.225    0.279    0.324
1            0.963    0.960    0.959    0.962    0.968    0.979
1.5          1.000    1.000    1.000    1.000    1.000    1.000
2            0.996    0.999    1.000    1.000    0.997    0.999
3            0.950    0.965    0.972    0.978    0.978    0.982
4            0.930    0.933    0.939    0.952    0.957    0.962
5            0.872    0.894    0.907    0.925    0.933    0.939
6            0.837    0.858    0.873    0.898    0.911    0.918
8            0.754    0.785    0.806    0.842    0.860    0.871
10           0.675    0.717    0.743    0.786    0.808    0.822
12           0.615    0.657    0.684    0.729    0.756    0.773
14           0.551    0.598    0.627    0.676    0.706    0.726
16           0.513    0.550    0.575    0.625    0.659    0.679
18           0.462    0.503    0.530    0.577    0.611    0.633
20           0.422    0.461    0.486    0.534    0.567    0.590

As can be seen from the above formulation, the proposed scatter correction method resembles the ETAR method. In fact, a mathematical relationship can be drawn between the two, which makes this method easily implemented on treatment planning systems that provide the ETAR correction. Thus, the proposed ICF_s can be stated as

ICF_s(z, r) = ICF_{ETAR}(z, r) \cdot \frac{TMR(z, r')}{TMR(z', r')}, (3.27)

where z' is the water equivalent depth (radiological depth) and r' is given as above (Equation 3.23).

3.2.2 Monte Carlo simulation

Monte Carlo simulation was performed using the EGS4 [76] Monte Carlo code with default values for the Parameter Reduced Electron Step Algorithm (PRESTA) [15]. Primary Compton dose was scored after determining the interaction type and terminating the photon history [81]. Total dose was also simulated using the code DOSXYZ [65]. AE was set to 0.521 MeV, AP to 0.01 MeV, ECUT to 0.521 MeV and PCUT to 0.01 MeV. Restricted stopping powers were employed in these calculations, and delta-rays (\delta-rays) were allowed to be produced above a 10 keV threshold of kinetic energy. The use of unrestricted stopping powers with no account of \delta-rays in our calculation, versus restricted stopping powers with \delta-ray production in the Monte Carlo simulation, needed to be tested. Monodirectional incident photons with energies ranging from 1 to 20 MeV were transported through the water and the water-air-water phantoms, and the subsequent energy deposited was recorded in a Cartesian array of 0.2x0.2x0.2 cm3 voxels. For photon beams, published spectra of 6 and 18 MV were used [71, 99]. Radiation beam sizes considered were 5x5 cm2 and 10x10 cm2. At least 100 million incident photons were used for simulation, giving a relative error of less than 2% on the scored dose.

3.2.3 Experiments

Irradiations were performed with 6 MV and 18 MV x-rays as described in Chapter 2. Radiation field sizes of 5x5 cm2 and 10x10 cm2 at a source to axis distance (SAD) of 100 cm were used. The air gap was set to 5 cm with 3 cm of polystyrene, supported by side walls (much larger than the field size), above the air gap [93]. The ICF at a given depth was calculated as the ratio of the measured dose in the inhomogeneous phantom to the measured dose in the homogeneous phantom at the same corresponding depth.

3.3 Results and Discussion

3.3.1 Calculated primary Compton dose compared to Monte Carlo simulation

In this section we present the effects of the different assumptions made in the development of our primary inhomogeneity correction factor method. Furthermore, we present a comparison of three-dimensional dose distributions calculated with our analytical method to those calculated with Monte Carlo simulation. In Figure 3.2 is shown the differential energy cross sections used in the calculation of monoenergetic primary Compton dose distributions.
The differential energy cross sections for the 6 and 10 MV spectra [71] that are also shown are based on Equation 3.10. As Figure 3.2 demonstrates, for polyenergetic photon beams the differential electron fluence decreases with increasing electron energy owing to the lower weight of high energy photons in these linac spectra [71, 99], and therefore fewer high energy electrons. Our method is very promising since the computation time for a given photon beam is equivalent to the one for monoenergetic photons having an energy of h\nu_max (the maximum energy in the photon spectrum). For instance, the calculation of the primary dose distribution due to 10 MeV photons takes exactly the same computation time as for a 10 MV photon beam.

Figure 3.2: The Klein-Nishina differential energy cross sections are shown for 1, 6 and 10 MeV monoenergetic photons; the derived cross sections for the 6 and 10 MV photon spectra are also included.

Absolute primary dose derived according to Equation 3.18, in which secondary electrons recoil in all directions based on the Compton effect, is compared to absolute primary dose calculated using the forward directed electron assumption. The results are shown in Figure 3.3 for 6 MeV photons. The dose distribution for the forward directed electron assumption is obtained by setting \theta to zero in Equation 3.16. An agreement to within 2% is seen between the two dose distribution curves. The all-direction curve is slightly higher in the buildup region and slightly lower beyond d_max, which is expected due to the smaller depth attained by secondary electrons travelling initially with an angle \theta with respect to the z direction. It is helpful to point out that the effect of nonzero \theta on the dose distribution is less pronounced at higher photon energies, as
This equation, which gives a linear relationship between the mean energy of electrons and depth of penetration, is only accurate in the two third range of electrons [79]. Beyond this depth this linear relationship is known to under-estimate the mean energy [40, 79], and hence a lower dose is expected to be seen near the end of the electron range. It is worth to point out that the 20 MeV monoenergetic photon case chosen to illustrate the effect of 6 — rays is an extreme case; however, the validity of using the unrestricted stopping powers was well demonstrated. Making the above assumptions in our calculations, analytically calcu-lated primary Compton dose distribution are compared to Monte Carlo sim-ulation for mono- and polyenergetic photon beams. Homogeneous and inho-mogeneous phantoms were considered with a 5 cm air gap situated between 3 cm and 8 cm depth. The primary Compton dose was calculated according to Equations 3.6 and 3.19. Figures 3.5(a) and (c) show good agreement between the analytically calculated and the Monte Carlo simulated homogeneous and inhomogeneous absolute primary Compton dose for both 6 MeV and 20 MeV monoenergetic photon beams with a 5x5 cm2 field size as a function of depth. The 20 MeV curve deserves special attention; it shows the primary dose dis-tribution in the presence of an air cavity right in the middle of the build up 66 0.80 0 2 4 6 8 10 12 14 16 18 Depth (cm) Figure 3.4: Examining delta ray production effect using Monte Carlo simula-tion compared with analytical calculations. The upper two curves show that 5-ray production can be accounted for in our analytical calculations by using unrestricted stopping powers. region. The usefulness of our model can be seen in this difficult case. No nor-malization has been made in this case and the values are per million photons entering the medium. Accurate estimation of the interface dose requires such an agreement especially at the distal end of the air gap. At the distal end of the air gap the second interface acts as another surface and the buildup curve repeats itself starting from a background level that is dependent on the air gap depth and beam energy. Lateral dose profiles taken at a depth of 4 cm (inside the air gap in the inhomogeneous phantom) are compared with Monte Carlo simulations and 67 6 MeV 20 MeV 0.40 H MC (Homo) MC (Heter) Theory (Homo) Theory (Heter) 6 9 12 Depth (cm) 15 18 0.80 0.70 0.60 O J i 0.50 O ) CR O 0.40 •£ •f 0.30 w < 0.20 0.10 J ^•-^ .'V jy •«. h MC(Homo) MC (Heter) Theory (Homo) Air Theory (Heter) (b) -2 0 2 Off Axis Distance (cm) w -2 0 2 Off Axis Distance (cm) Figure 3.5: A comparison of primary Compton dose derived from theory and Monte Carlo simulations for a 5x5 cm2 parallel beam of 6 and 20 MeV mo-noenergetic photons incident on homogeneous and inhomogeneous phantoms: (a, c) variation of absolute dose with depth, (b, d) lateral absolute dose profiles at 4 cm depth. The absolute dose shown is per million photons. 68 excellent agreement was seen even in the penumbral regions (Figures 3.5(&) and (rf)). The Fermi-Eyges theory is derived from the Boltzmann-Fokker-Plank equation using a small angle scattering approximation. It is known to underestimate scattering of electrons in the forward region and overestimate it at large angles [79]. The results presented in Figures 3.5 support the use of this simple theory to describe the secondary electron scattering distribution. 
A similarly good agreement between primary Compton depth dose cal-culations and Monte Carlo simulation is demonstrated in Figures 3.6(a) and (c) for polyenergetic 6 MV and 18 MV photon beams. Profiles taken at 4 cm in both homogeneous and inhomogeneous phantoms were also compared and agree well as shown in Figures 3.6(&) and (rf). This good agreement validates our approaches of weighting the differential energy spectra of electrons by the corresponding polyenergetic photon spectrum (Equation (3.10)). For polyenergetic photon beams, a broad maximum dose region with a steep rise in the buildup region was as observed in the calculated depth dose. Furthermore, the descending portion of the primary Compton depth dose was well reproduced by our calculation showing that the use of a linear attenuation coefficient (*^p) evaluated at the mean energy of the photon spectrum (hv) and used in Equation 3.7 is a good approximation. In our calculations, A representing the loss of electrons at interfaces had to be gradually changed empirically from 1 to 0.7 with increasing photon energies (1 MeV to 20 MeV). Table 3.2 shows the values of A adopted for monoenergetic and polyenergetic cases in which good agreement with Monte Carlo simulation was observed. Using these values, we were able to explain the sudden drop in the dose value at the first dose bin inside the air cavity. 69 Figure 3.6: A comparison of primary Compton dose derived from theory and Monte Carlo simulations for a 5x5 cm2 point source (100 cm SSD) of 6 and 18 MV photon spectra incident on homogeneous and inhomogeneous phantoms: (a, c) variation of absolute dose with depth, (b, d) lateral absolute dose profiles at 4 cm depth. The absolute dose shown is per million photons. 70 Moreover, these values were seen to be field size independent. Table 3.3: A is determined as a function of energy for both monoenergetic and polyenergetic beams. Monoenergetic Polyenergetic 6 MeV 20 MeV 6 MV 18 MV X 0.80 0.90 0.75 0.85 Backscatter of electrons was not taken into account in our calculations although it can be characterized by a simple monoexponential function [110]. In the case of air gaps, backscatter is almost negligible as can be seen from the regions in the front and the distal end of gaps in the different Monte Carlo simulations. However, this effect may become more prominent for the case of high Z inhomogeneities which was discussed in the "photon-electron cascade model" [110] in which case more consideration is needed. 3.3.2 Scatter and total inhomogeneity correction factor and comparison with Monte Carlo simulation and experiments It is well known that the magnitude of scatter dose is depth and field size dependent. For small field size (< 5x5 cm2), the scatter component is negli-71 gible and it is safe to calculate the ICF based on primary dose only. In this Section, the effect of field size on the determination of ICF based on primary component will be studied, and solution to all field sizes encountered clinically will be presented. A. Comparison of primary and total ICF derived from Monte Carlo simulation In Figure 3.7 is shown a comparison of the total dose deposited due to all types of interactions compared to the primary Compton dose of monoenergetic 6 MeV photon beams calculated by Monte Carlo simulation. The effect of varying the radiation field size from 2x2 cm2 to 10x10 cm2 is demonstrated. 
Also shown in this figure is the corresponding inhomogeneity correction factor (ICF) derived from total dose and primary Compton dose scoring ((b) and As can be seen from these figures the primary Compton dose constitutes a major component (up to 90 percent) of the total dose for small field size. The lower two panels in Figure 3.7 show that ICF and ICFP are equivalent when the scatter dose over the primary dose ratio is negligible. In a sense, increasing the field size to 10x10 cm2 however causes a maximum discrepancy in the order of 6.6% at 10.5 cm deep (Figure 3.7(d)). This results from the increase in magnitude of the scatter dose with respect to the 2x2 cm2 irradiation field size. For polyenergetic photon beams, ICF derived from primary Compton dose and total dose are presented in Figure 3.8 for the 6 MV and 18 MV photon beams. Figure 3.8 shows the effect of using 2x2 cm2, 4x4 cm2, and 72 2X2 cm 10X10 cm 3 6 9 12 Depth (cm) 1.50 -I 1.25 • 1.00 • 0.75 • 0.50 • 0.25 • J Total 0.00 • Air ' Primary Compton (b) 3 6 9 12 15 U Depth (cm) a 0, (c) 1.50 0.25 0.00 Total (Homo) Total (Heter) Compton (HDFTIO) Compton (Heter) 9 12 Depth (cm) 18 Total Primary Compter Air 1 1 (d) 6 9 12 15 Depth (cm) Figure 3.7: Total dose and primary Compton dose as determined by Monte Carlo simulation are shown for 6 MeV 2x2 cm2 and 10x10 cm2 irradiation field size in homogeneous and heterogeneous phantoms: (a, c) variation of absolute dose with depth, (b, d) Inhomogeneity Correction Factor derived from the 2 sets of curves of part (a) and (c) respectively. 73 6 MV beam 18 MV beam Figure 3.8: Inhomogeneity Correction Factor as determined by Monte Carlo simulation are shown for 6 MV and 18 MV polyenergetic photons: (a, b, c) 6 MV photon beam, and (d, e, f) 18 MV photon beam using field sizes as indicated in the legend. 74 10x10 cm2 field sizes. This figure further demonstrates that primary Comp-ton dose distribution gives a good approximation for the total dose distribu-tion under certain conditions dictated by photon energy and field size (Figure 3.8(a,b,d,e)). Discrepancies of the order of 10% were seen between I C F P and I C F for field size larger that 5x5 cm2. This points out to the need of including a scatter correction factor. B . Total inhomogeneity correction factor To implement our scatter inhomogeneity correction factor method based on Equation 3.27, TMR curves as a function of depth were used for a number of field sizes and are shown in Figure 3.9. Curves of TMRs for field size smaller than 3x3 cm2 are extrapolated by the treatment planning system. Figure 3.10 shows the I C F S for 6 MV and 18 MV beams in the presence of a 5 cm air cavity. Two field sizes were investigated ie 5x5 cm2 and 10x10 cm2. The four curves in Figure 3.10 (a,c) predict a decrease in the scatter dose inside the air cavity and a buildup beyond the air/water interface as in the primary dose case. The lower two pannels (Figure 3.10 (b,d)) depict the effective field radius r' as calculated by the treatment planning system, and extracted from the ICFETAR curve according to Equations 3.26. Based on Equation 3.25, the equivalent radius of a 5x5 cm2 square field is 2.8 cm, and the equivalent radius for a 10x10 cm2 square field is 5.6 cm. Considering the air cavity as being the inhomogeneity, we expect the effective field radius (Equation 3.23) to be smaller than the actual field radius. Indeed, this was seen in all cases studied in Figure 3.10. 
Inside the air cavity, the effective field radius was almost zero indicating a minimal contribution of scattered radiation to the dose in that region. Hence, a zero effective field radius is the limit of the current 75 120 y-100 -80 -1 60 -40 - | 20 -[ 0 - -0 2 4 6 8 10 12 14 16 18 20 Depth (cm) Figure 3.9: Cadplan TMR curves for 6 MV and 18 MV beams. Three field sizes, 0x0 cm2, 3x3 cm2 and 10x10 cm2 are shown as a function of depth. The 0x0 cm2 TMRs are extrapolation from higher field size curves. scatter inhomogeneity correction factor method. Further improvement on the results requires an extensive scatter ray tracing calculations which lengthens considerably the computation time [105]. In combining the primary and scatter correction methods, it is required to know the relative dose contribution of the primary and scatter components (Equation 3.4). This ratio (R) needs to be known in homogeneous water only and for each field size of interest. This made Monte Carlo simulation a perfect candidate since these simulations are to be done once and no further scaling is required. The ratio of primary dose (Compton and Pair production) over total dose (R) is shown in Figure 3.9 for 6 MV and 18 MV using 5x5 cm2 10x10 cm 18 MV 6 MV 76 Figure 3.10: Scatter inhomogeneity correction factor (ICFS) and the corre-sponding effective beam radius derived for 6 MV and 18 MV beams using 5x5 cm2 and 10x10 cm2 irradiation field size in inhomogeneous phantoms: (a, c) variation of ICFS with depth, (b, d) effective beam radius used for calculating the 2 sets of curves of part (a) and (c) respectively. 77 and 10x10 cm2 field sizes. As it is expected, this ratio is higher in the smaller field cases. An alternate method by which this ratio can be calculated is by using ratio of TMRs. TMR curve for 0x0 cm2 has long been thought to represent the relative primary dose in water. However, obtaining this curve usually involves extrapolation which makes the 0x0 cm2 field size TMR curve uncertain. Woo et al [107] has shown that for an 18 MV beam the extrapolated zero field TMR corresponds to the Monte Carlo calculated primary dose TMR for a field radius greater than 2 cm. Nevertheless, this validation remains to be conducted for each treatment planning system which might employ a different extrapolation technique. Thus, using Monte Carlo simulation to derive the ratio R is practical and sufficient. In Figure 3.12 is shown the results of combining the primary and scatter correction factors based on Equation 3.4. The ICF is plotted as a function of depth for 6 MV and 18 MV beams using 5x5 cm2 and 10x10 cm2 field sizes. Measured ICF are plotted in the region beyond the air cavity. Monte Carlo results and their error bars are also shown which reflect the statistical fluctuation of these simulations. As stated above, the error achieved on the Monte Carlo dose distribution curve was within 2%. Hence, the error on the Monte Carlo derived ICF was within 4% (resulting from a ratio). ICF. Therefore, our analytical calculations can predict the build up in the ICF curve seen by Monte Carlo simulation and experiments beyond a 5 cm air cavity. The agreement is within the statistical error or 4% of the Monte carlo results. This was shown for low and high energies (6 MV and 18 MV) and small and large field sizes (5x5 cm2 and 10x10 cm2). Inside the air cavity 78 2 0.9 o Q « o I— 0.8 ra E •= 0.7 o o 00 DC 0.6 03 rr 0.5 (a) S 0 9 o Q ~2 o 1- 0.8 CO E •= 0.7 a. 
Figure 3.11: Ratio of primary dose to total dose as derived from Monte Carlo simulation using 5x5 cm2 and 10x10 cm2 irradiation field size in a homogeneous water phantom: (a) 6 MV, (b) 18 MV.

Inside the air cavity there was good agreement seen in the case of the 6 MV beam; however, for the 18 MV beam the theoretical ICF was constantly higher than the Monte Carlo results. Our primary dose comparisons (Section 3.3.1) indicated good agreement with Monte Carlo simulations. Therefore, the discrepancies inside the air cavity in Figure 3.12 (c and d) are due to the limitations of the proposed scatter inhomogeneity correction factor method. Hence, there is a need for modelling the transport of the secondary electrons set in motion by photons that have scattered at least once. Again, the region of interest lies beyond the air cavity, and the buildup effect shown by the Monte Carlo simulations and experiments was well reproduced by our analytical calculations. Although the calculations of the ICF inside the air cavity represent extreme cases of clinical relevance, the physics behind our model and its limitations were demonstrated.

Figure 3.12: Inhomogeneity correction factor (ICF) in the presence of a 5 cm air cavity is derived from analytical calculation, Monte Carlo simulation and experiment as a function of depth: (a, b) 6 MV beam, 5x5 cm2 and 10x10 cm2 irradiation field size; (c, d) 18 MV beam, 5x5 cm2 and 10x10 cm2 irradiation field size.
Figure 3.13: The effect of the size of the integration parameters (dE and dz) on the primary dose and ICFp distributions. (a) The absolute primary dose for the homogeneous and inhomogeneous cases is plotted for "small" (dz = 0.001 cm, dE = 0.01 MeV) and "large" (dz = 0.1 cm, dE = 1 MeV) values of dE and dz. (b) The deduced ICFp for both sets of values.

Therefore, choosing the largest possible values for dE and dz gives the best time performance for the primary dose calculation. In principle, the largest value for dE is Emax and for dz is z, meaning that the range of integration is divided into one interval for both variables. However, inhomogeneities can exist between the limits of integration of the z variable, and their smallest dimension is limited by the size of a voxel. Hence, the limiting value of dz is the z dimension of the voxel, considered to be 0.2 cm in our Monte Carlo simulation. For example, calculating the dose at a depth of 4 cm requires dividing the range of integration into 20 (4 cm / 0.2 cm) intervals. The size of the intervals for the energy dimension (dE) is not limited by the geometry of the voxels, and the largest value that can be chosen is Emax, indicating that the range of integration for the energy variable can be divided into a single interval.

To find the error incurred from using "large" values of dz and dE, an absolute percentage error is computed. The comparison is made relative to ICFp with "small" dz and dE (0.001 cm and 0.01 MeV). An average value of the percentage error over all points is calculated as

% error = (100/N) \sum |ICFp,large - ICFp,small| / ICFp,small,

where the sum runs over all N calculation points.

Table 3.4 summarizes the calculation time needed to produce the homogeneous and inhomogeneous three-dimensional dose grids based on the choice of dz and dE. As discussed earlier, the calculation time needed for a polyenergetic photon beam is the same as for a monoenergetic beam having an energy equal to the maximum photon energy in the spectrum. As shown, the calculation times for the higher energy are considered to be practical in the clinic. It is recommended that the number of intervals of dE between the limits of integration be kept equal to or larger than 5 so that the resulting error on ICFp (due to the evaluation of the integrals) is within a few percent.

Table 3.4: Computation time for the 3D dose distribution for high and low photon energies for different choices of dE. dz was chosen to be 0.2 cm, which corresponds to the voxel size dimension. The first column (NE) indicates the number of intervals into which the range of integration is divided. The % error is relative to "small" values of dE, for which NE is set to 2000.

            6 MeV or 6 MV                    20 MeV or 20 MV
  NE    dE (MeV)  Time (min)  % error    dE (MeV)  Time (min)  % error
  200   0.03      90.0        0.2        0.1       250.0       0.2
  10    0.6       5.0         1.0        2.0       12.0        1.8
  5     1.2       3.0         3.0        3.9       5.0         2.2

The above calculations are based on a 5x5 cm2 field size incident on an 8x8x20 cm3 phantom with 0.2x0.2x0.2 cm3 voxels. Technically, the primary Compton dose was calculated at all voxel centres (0.2x0.2x0.2 cm3), as considered in the Monte Carlo simulation, inside both phantoms. To cut the computation time by a factor of two, calculations can be done at every other voxel and interpolation can be employed between data points. Another approach is to increase the frequency of calculation points in sensitive regions and decrease it elsewhere.
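The every-other-voxel strategy mentioned above can be sketched as follows for one grid axis; dose_at() stands for the expensive primary dose evaluation and is an illustrative placeholder.

/* Halve the number of explicit dose calculations along one grid axis:
 * evaluate the (expensive) dose routine at every other voxel centre and
 * fill the skipped voxels by linear interpolation. */
static void fill_every_other(double (*dose_at)(int k), double *dose, int nz)
{
    int k;
    for (k = 0; k < nz; k += 2)
        dose[k] = dose_at(k);
    for (k = 1; k < nz; k += 2)
        dose[k] = (k + 1 < nz) ? 0.5 * (dose[k - 1] + dose[k + 1])
                               : dose[k - 1];   /* edge: copy the neighbour */
}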
It is worthwhile to point out that these timing studies were done on a 266 MHz Pentium PC running the Linux operating system. A faster CPU clock yields improved timing, and at least a factor of three (with a 700 MHz clock) can be achieved at the time of writing.

Yu et al [110] used a different approach in implementing their model. The calculation time on a VAX 8600 was estimated to be 12 h just for the primary dose component, calculated in a phantom similar to the one above. On the other hand, a further advantage of the current model comes from the semi-empirical method proposed for calculating the scatter inhomogeneity correction factor (ICFs). The computation time for the scatter method is on the order of seconds on a UNIX platform. This outmatches the convolution/superposition method used by Yu et al for the calculation of the scatter component, which requires considerably more execution time.

B. Absolute primary Compton dose computation

In cases where there is an interest in calculating the absolute primary Compton dose with good accuracy, rather than a correction factor, an optimization technique is proposed for the evaluation of the two-dimensional integral (Equation 3.20). Using again the trapezoidal rule, Equation 3.20 can be evaluated to within 2% accuracy when compared with the Monte Carlo method. This evaluation, as presented in Figure 3.5(c) and Figure 3.11(a) considering 20 MeV photons for example, was achieved using "small" values for dz and dE. This led to long computation times (on the order of hours), as can be extrapolated from Table 3.4.

To see the effect of changing the value of dz or dE on the magnitude of the dose, the absolute primary Compton dose was evaluated at (0,0,4) (4 cm deep along the central axis) as a function of different combinations of values of dz and dE. Figure 3.14 shows the effect of varying dz while keeping dE "small", and of varying dE while keeping dz "small". The behaviour of these curves can be anticipated, since the trapezoidal rule gives an error of the order of 1/N^2, where N is the number of intervals considered between the limits of integration [84]. Since the length of an interval is inversely proportional to the number of intervals, for instance

dE = E_max / N_E,   (3.29)

it is clear that the error involved in increasing dz and dE will be of the order of dz^2 and dE^2 respectively. Hence, the absolute primary Compton dose varies as a second order polynomial function of dE when dz is "small", or of dz when dE is "small". Thus,

D(dz|dE) = a_1 dz^2 + b_1 dz + c_1,   (3.31)

D(dE|dz) = a_2 dE^2 + b_2 dE + c_2.   (3.32)

Figure 3.14 also shows Equations 3.31 and 3.32 fitted to the dose curves. The estimates of the parameters a, b and c are given in Table 3.5. One thing to note is that the error originating from increasing dz or dE is simply D(dz|dE) - c_1 or D(dE|dz) - c_2. The advantage of these errors is that they are of opposite signs and follow similar trends. A fortunate situation is achieved when both errors cancel out, so that an accurate dose value can be found that is independent of the choice of intervals (dz and dE).
In fact, rearranging Equations 3.31 and 3.32 and setting the sum of the errors to zero, a relationship between dz and dE can be obtained:

a_1 dz^2 + b_1 dz + a_2 dE^2 + b_2 dE = 0,   (3.33)

which is a quadratic equation whose dE solution can be written as

dE = [ -b_2 + \sqrt{ b_2^2 - 4 a_2 (a_1 dz^2 + b_1 dz) } ] / (2 a_2).   (3.34)

Figure 3.14: Variation of the primary Compton dose value at (0,0,4) as a function of the integration interval size. (a) Dose versus dE for a constant dz = 0.001 cm; (b) dose versus dz for a constant dE = 0.01 MeV. The fitted second order polynomials (Equations 3.31 and 3.32) are also shown.

Based on this optimized choice of dE (Equation 3.34), the primary Compton dose at (0,0,4) was evaluated as a function of dz. Figure 3.15 shows an almost flat curve that is independent of dz as long as dE follows Equation 3.34. This technique results in a saving in computation time, as was seen in the previous section; however, accuracy is maintained not only for the ICFp distribution but also for the absolute primary Compton dose distribution.

Figure 3.15: Primary Compton dose calculated at 4 cm depth versus depth interval length for 20 MeV photons. dE was chosen following Equation 3.34.

Table 3.5: Best estimates of the parameters in Equations 3.31 and 3.32, derived using the least-squares fitting method.

            a               b              c
  D(dz)    -2.71x10^-10     1.10x10^-7     4.56x10^-7
  D(dE)     1.70x10^-9     -3.61x10^-8     4.64x10^-7
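A direct transcription of Equation 3.34 is given below as a small illustrative routine (it is not part of the code of Appendix B); it returns the larger of the two roots of the quadratic, as written in Equation 3.34, so that the energy range is divided into as few intervals as possible.

#include <math.h>

/* Optimized energy interval from Equation 3.34: choose dE so that the
 * truncation errors of the dz and dE integrations cancel.  a1, b1 and
 * a2, b2 are the fitted coefficients of Equations 3.31 and 3.32.
 * Example with the Table 3.5 values (20 MeV photons):
 *   optimized_dE(dz, -2.71e-10, 1.10e-7, 1.70e-9, -3.61e-8)           */
static double optimized_dE(double dz, double a1, double b1,
                           double a2, double b2)
{
    double disc = b2 * b2 - 4.0 * a2 * (a1 * dz * dz + b1 * dz);
    if (disc < 0.0)
        return -1.0;        /* no real root: the errors cannot cancel */
    return (-b2 + sqrt(disc)) / (2.0 * a2);
}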
3.4 Summary

An inhomogeneity correction factor based on the concept of separating the primary and scatter dose is proposed. The primary inhomogeneity correction factor was formulated considering explicit secondary electron transport. The scatter inhomogeneity correction factor is proposed as a semi-empirical model based on ratios of TARs. The results were compared with Monte Carlo simulation and experiments for extreme inhomogeneities such as air cavities. For the primary calculation, the Compton dose was derived on purely theoretical grounds. The steps taken to calculate the primary Compton dose involved determining the photon fluence distribution incident on every layer of the medium, calculating from the photon fluence the primary Compton electron distribution (based on the Klein-Nishina formalism) exiting a given layer, transporting the electrons to the downstream layers to obtain the electron planar fluence distribution, converting it to dose, and summing the dose contributions of all the electrons to obtain the primary Compton dose. For the scatter calculation, a ratio of tissue-air ratios employing a scaled beam radius is proposed. Good agreement between the calculated and the Monte Carlo simulated absolute primary Compton dose was seen for monoenergetic photons (6 and 20 MeV) and polyenergetic photon beams (6 and 18 MV). The ICF comparison between the three modalities gave results within 4% in regions beyond the air cavity. The parameters of integration were optimized such that a reduction of the computation time to a practical limit was achieved while the accuracy of the calculation was maintained.

Chapter 4

Comparison of GEANT3 and EGS4 Monte Carlo codes for photon and electron beams

4.1 Introduction

Monte Carlo calculations provide a benchmark for analytical calculations and a verification for experimental measurements [10]. The use of Monte Carlo codes in radiotherapy has become common [28, 42, 43, 64, 87, 101]. Comparisons between different Monte Carlo codes, for example EGS4 (Electron Gamma Shower version 4) [76] and MCNP4B (Monte Carlo N-Particle version 4B) [22], were recently reported for photons and electrons [63], and the codes were compared to experimental measurements for clinical electron beams [46]. Several differences were observed when the central axis depth dose curves calculated with EGS4 and MCNP4B were compared [46]. These stem from the use of different electron transport methods in accounting for the large energy loss events that occur due to the production of delta rays and bremsstrahlung. EGS4 was found to agree better with the experimental values [46]. MCNP4B was seen to better account for electron backscattering for low and high Z materials but required longer computation times [46, 63].

Monte Carlo simulations can consume large amounts of computing time. Therefore, the efficiency of the calculations done by a particular code is of importance in a clinical setting. Simulation variables (for example, the electron cutoff energy and the maximum fractional energy loss) can influence the amount of time taken for any calculation [45]. The present study was initiated to investigate the GEANT3 (GEometry ANd Tracking version 3.21) Monte Carlo code developed at CERN (Centre Européen de la Recherche Nucléaire), Geneva, Switzerland [23] for calculations in radiotherapy physics. GEANT3 is able to simulate the dominant electromagnetic interaction processes which occur in the energy range from 10 keV to 10 TeV. The use of GEANT3 has been mostly limited to high energy physics, though it has been used for dose calculation for radiochromic films [96], radiation protection problems [25, 62], and more recently for nuclear interactions of 160 MeV protons stopping in copper [39].

4.1.1 Description of electron transport methods

GEANT3 is equipped with different user-selectable electron transport modes. Being more versatile than most Monte Carlo codes concerning the production of δ-rays, GEANT3 has three options for dealing with these rays. An important user-controlled variable for these options is DCUTE, below which the δ-ray energy losses are simulated as continuous energy loss by the incident electron, and above which they are explicitly generated. In the first option, δ-rays are produced over the entire energy range: DCUTE to Einitial/2 for electron-electron scattering and DCUTE to Einitial for positron-electron scattering. This mode is termed "no fluctuations". The second mode of energy loss is "full fluctuations", in which δ-rays are not generated and the energy loss straggling is sampled from a Landau [61], Vavilov [90, 98] or Gaussian [92] distribution, each according to its validity limits [23]. The third is "restricted fluctuations", with generation of δ-rays above DCUTE and restricted Landau fluctuations below DCUTE. In principle, choosing energy loss fluctuations carries an advantage if the energy deposited is scored in voxels larger than the range of the δ-rays; this results in great savings of computation time, since the large number of δ-rays that would otherwise be generated down to a low DCUTE need not be tracked. In all cases a continuous energy loss by the incident electron is assumed according to the Berger-Seltzer formulae [11, 17, 91].
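The role of DCUTE in these three options can be summarized by the following schematic fragment; this is illustrative logic written for this discussion, not GEANT3 source code, and the names are ours.

enum eloss_mode { NO_FLUCTUATIONS, RESTRICTED_FLUCTUATIONS, FULL_FLUCTUATIONS };

/* Schematic decision for a candidate delta ray of energy Edelta (MeV)
 * along an electron step, mirroring the three options described above. */
static int generate_delta_ray(enum eloss_mode mode, double Edelta, double dcute)
{
    switch (mode) {
    case NO_FLUCTUATIONS:
        /* explicit delta rays above DCUTE; losses below DCUTE are
         * treated as continuous, with no straggling sampled */
        return Edelta > dcute;
    case RESTRICTED_FLUCTUATIONS:
        /* explicit delta rays above DCUTE; restricted Landau straggling
         * is sampled for the losses below DCUTE */
        return Edelta > dcute;
    case FULL_FLUCTUATIONS:
    default:
        /* no delta rays at all; straggling sampled from the
         * Landau/Vavilov/Gaussian distributions */
        return 0;
    }
}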
When traversing matter, charged particles undergo deflections from their original trajectory because of their interactions with atoms. This deflection is rather large for (i) electrons that are deflected by the electric field of nuclei and (ii) electrons that undergo multiple small elastic collisions. Over the years, considerable effort has gone into the formulation of a theory of Coulomb multiple scattering. Multiple scattering is well described by the Moliere theory [12, 73, 74], which is used by default in GEANT3. An important limiting factor in the Moliere theory is the average number of Coulomb scatters for a charged particle in a step, Ω0. When Ω0 < 20, the Moliere theory is not applicable. The range 1 < Ω0 < 20 is called the plural scattering regime [55]; in this range a direct simulation method is used for the scattering angle in GEANT3 [23]. A simplification of the Moliere theory by a Gaussian form is also implemented in GEANT3; this Gaussian multiple scattering represents Moliere scattering to better than 2% for 10 < Ω0 < 10^8.

On the other hand, EGS4 uses one mode of electron transport, which includes generation of δ-rays without fluctuations and corresponds to the first GEANT3 mode of electron transport described above. Both codes use the same differential cross section for δ-ray production for electron-electron (Moller) scattering [75, 76] and for positron-electron (Bhabha) scattering [13] above DCUTE in GEANT3 or AE in EGS4. For multiple Coulomb scattering both codes use the same Moliere theory; however, EGS4 does not take plural scattering into account.
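The validity ranges quoted above can be summarized schematically as follows; again, this is an illustrative fragment, not the GEANT3 implementation.

/* Schematic classification of the angular deflection model from the
 * average number of Coulomb scatters per step, Omega0, following the
 * validity ranges quoted above. */
static const char *scattering_regime(double omega0)
{
    if (omega0 < 20.0)
        return "plural scattering (direct simulation of the angle)";
    /* Moliere theory applies; its Gaussian simplification reproduces it
     * to better than 2% for 10 < Omega0 < 1e8 */
    return "Moliere multiple scattering";
}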
4.1.2 Parameters for particle tracking in a medium

Simulation results depend critically on the choice of parameters that control the transport and tracking of particles. In GEANT3, electron step sizes are limited by the smaller of the upper limits imposed on the step by the continuous energy loss and by multiple Coulomb scattering. Continuous energy loss can introduce an upper limit on the step via the variable DEEMAX [23]. This parameter (the maximum allowable fractional energy loss in an electron step) is also present in EGS4 (ESTEPE), which allows a fair comparison between the two codes. A lower limit on the tracking step is not generally imposed in GEANT3. There is, however, a protection against the step being reduced to a very small value by continuous processes, especially multiple scattering. A variable (STMIN) is introduced which is compared with the stopping range of the particle. If this range becomes smaller than STMIN, the constraint imposed by the multiple scattering is ignored and the minimum is taken between the reduced stopping range (the distance the particle has to travel to reach its threshold energy) and STMIN itself. In this sense, STMIN is no more than a "tracking accelerator" for stopping particles. Another limitation on the step size in GEANT3 is the STEMAX user-controlled parameter, which sets an absolute upper limit on the size of a step for each tracking medium [23].

The purpose of this study is to explore the potential of GEANT3 in modeling radiation absorbed dose for a variety of problems in medical physics. The two main issues considered are photon and electron showers interacting in a water-like material. The validity of using the different electron transport modes is to be assessed and a comprehensive timing benchmark is to be presented.

4.2 Materials and methods

Depth doses were calculated with both the EGS4 and GEANT3 Monte Carlo codes for monoenergetic and polyenergetic photons and monoenergetic electrons. The usercode DOSXYZ [65] was used in the EGS4 simulations with default values for the Parameter Reduced Electron Step Algorithm (PRESTA) [15]. For GEANT3, a usercode (DOSE) written in FORTRAN (Appendix C), which scores absorbed dose in Cartesian coordinates, was used. Monoenergetic 6 MeV and 20 MeV photons incident on a water phantom were simulated. The radiation field size was set to 5x5 cm2. The water phantom formed a three-dimensional dose scoring grid with 0.5x0.5x0.2 cm3 voxel size. For polyenergetic photons, the bremsstrahlung spectra, which take into account the structural components of the linac, were for 6 MV [71] and 18 MV [99] Varian linacs and were derived from the literature. Monoenergetic electron beams with energies of 5 MeV, 10 MeV, 17 MeV and 20 MeV incident on a water phantom were also simulated in this study.

Experimental measurements are based on commissioning data using 6 MV and 18 MV x-rays produced by a CL 21EX linear accelerator (Varian Associates, Palo Alto, CA). Depth doses were measured in a water phantom. A radiation field size of 5x5 cm2 at a source-to-surface distance (SSD) of 100 cm was used. The dose buildup region was measured with a parallel plate air ionization chamber (Markus PTW model 30-329, Victoreen Nuclear Associates, Carle Place, NY) as described in Chapter 2. Beyond dmax a cylindrical chamber (Ion chamber IC 10, Wellhofer Dosimetrie, Schwarzenbruck, Germany) was employed. This chamber has an active volume diameter of 6 mm and an active cylinder length of 3.8 mm. The central electrode has a 1 mm diameter and a 3.3 mm active length. Measurements were normalized to the dose at 10 cm depth in a water phantom for both negative and positive potentials.

The usercode XYZDOS [16], supplied with the EGS4 distribution and recommended for timing studies, was adopted for a timing study using a value of 0.01 for the parameters ESTEPE and DEEMAX. The electron cutoff energy in both codes (ECUT and CUTELE) was set to 10 keV. The timing study was performed on a water cube of 20x20x20 cm3 volume with a broad parallel source of electrons in the range 5 MeV to 20 MeV. Furthermore, the different electron transport modes of GEANT3 were considered in this study, and in the case of "restricted fluctuations" values of 500 keV, 1 MeV, and 2 MeV were set for DCUTE for all electron beams studied. The hardware was a 333 MHz PII PC running the Linux operating system.

4.3 Results and discussion

4.3.1 Photons

Comparisons of EGS4 and GEANT3 calculations of depth doses in water phantoms for the case of monoenergetic and polyenergetic photon beams are shown in Figures 4.1 and 4.2. Figures 4.1(a) and 4.1(b) show the comparison of EGS4 and GEANT3 calculations of depth doses for 6 MeV and 20 MeV monoenergetic photons respectively. Depth dose curves in Figure 4.1 are given per incident photon fluence. Results of the simulations with 6 MV and 18 MV photon spectra are given in Figures 4.2(a) and 4.2(b) respectively, along with experimental results. In Figure 4.2 depth doses are normalized to the corresponding value of dose at 10 cm depth in the water phantom. This choice of normalization depth ensures a minimal contribution to the scored dose at that depth from contaminant electrons generated in the linac head.

Excellent agreement was obtained in all comparisons between EGS4 and GEANT3 (Figures 4.1 and 4.2). Good agreement between simulation and measurements was also seen for the 6 MV photon spectrum (Figure 4.2(a)).
Figure 4.1: Comparison of the EGS4 and GEANT3 Monte Carlo codes for depth dose calculations in water phantoms irradiated with monoenergetic 6 MeV and 20 MeV photons, shown in (a) and (b) respectively. Broad parallel beams were used.

Figure 4.2: Comparison of the EGS4 and GEANT3 Monte Carlo codes, and experiment, for depth doses in water phantoms irradiated with 6 MV and 18 MV photon beams, shown in (a) and (b) respectively. A point source was used and the SSD was set to 100 cm.

However, a discrepancy of the order of 10% between the two Monte Carlo codes and experiment was observed at 1 cm depth in the water phantom, as can be seen from Figure 4.2(b). This is in the buildup region of the 18 MV photon spectrum, where electron contamination from the linac head contributes to the absorbed dose more significantly than with the 6 MV spectrum. This problem of electron contamination is accounted for in the code BEAM [87], which is able to generate a phase space file of contaminant electrons that can be transported in the water phantom. The radiation transport in the current simulations using both EGS4 and GEANT3 was started at the water phantom entrance, and it is assumed that the discrepancy is due to this difference. The electron transport option used in GEANT3 for these simulations was the "no fluctuations" option. The GEANT3 code used in this study was the 96a version of the CERN Library. Later versions of the CERN Library were also used; however, all of them showed artifacts in the depth doses at the phantom entrance. All the simulations were based on 100 million events, taking between 50 and 100 hours of computation time.

4.3.2 Electrons

A comparison of depth doses (normalized to incident electron fluence) calculated by EGS4 and GEANT3 for monoenergetic electrons of 5 MeV, 10 MeV, 17 MeV, and 20 MeV is shown in Figure 4.3. The electron transport option used in GEANT3 for these simulations was "no fluctuations".

Figure 4.3: Comparison of the EGS4 and GEANT3 Monte Carlo codes for depth dose calculations in water phantoms irradiated with a range of monoenergetic electrons. The electron transport option used in GEANT3 was "no fluctuations". Dose was normalized to incident electron fluence.

Good agreement was also seen in all of these cases. However, EGS4 comes with the very important step size algorithm (PRESTA), which makes it more accurate at the beam entrance where very short electron steps are needed. A clear underestimation by the GEANT3 calculations was seen in the first voxel due to this effect.

To assess the validity of using the other electron transport modes in GEANT3, a comparison was carried out using all three modes with a range of values for DCUTE in the case of "restricted fluctuations". Results using 5 MeV and 20 MeV electrons are shown in Figures 4.4(a) and 4.4(b) respectively. The values chosen for DCUTE were 1 MeV and 2 MeV for the 5 MeV and 20 MeV electron energies respectively. This choice covers the energy range of the majority of the δ-rays produced. As can be seen from Figure 4.4, using energy loss fluctuations results in underestimating the value of the dose in the buildup region and overestimating it past the buildup region. The maximum discrepancy was within 2% for the 5 MeV electron beam.
This effect was seen with other Monte Carlo codes which account for energy loss straggling based on the Landau theory [61, 63, 86]. The "restricted fluctuations" option for the 5 MeV electrons gave depth dose curves in between the "no fluctuations" and the "full fluctuations" curves. For small values of DCUTE (100 keV) the depth dose curve matched the "no fluctuations" option, and for large values of DCUTE (2 MeV) the depth dose curve matched the "full fluctuations" results, as is expected (data not shown).

For the 20 MeV beam, statistical fluctuations in the absorbed dose in the buildup region are observed for the two fluctuation modes available in GEANT3 (Figure 4.4(b)). This shows the inaccuracy of using the fluctuation modes for electron energies greater than 10 MeV under small scoring voxel (0.5x0.5x0.2 cm3) conditions. This is attributed mainly to the absence of δ-ray production below DCUTE and the deposition of that energy locally in the current voxel. A large number of the delta rays produced by incident electrons in the energy range considered are below 1 MeV; this corresponds to a maximum electron range of 0.53 cm in water (derived using a DOSXYZ simulation). Increasing the scoring voxel size (for example, increasing the dimension along the beam direction to 0.5 cm) leads to an improvement of the 20 MeV electron beam results (data not shown).

Figure 4.4: Comparison of the three electron transport modes implemented in the GEANT3 Monte Carlo code for depth dose calculations in water phantoms. Depth doses for 5 MeV and 20 MeV monoenergetic electrons are shown in (a) and (b) respectively. Dose was normalized to incident electron fluence.

The main advantage of using "restricted fluctuations" is that one can control the accuracy of the simulation via the parameter DCUTE. A small value of DCUTE (< 500 keV for 5 MeV electrons) improves the agreement between the "restricted fluctuations" and the "no fluctuations" simulations. A large value of DCUTE (2.5 MeV for 5 MeV electrons) gives an exact match between the "restricted fluctuations" and the "full fluctuations" simulations. However, decreasing DCUTE comes at the expense of increasing computation time, as we shall see in the next section.

4.3.3 Calculation time comparison

A comparison of the calculation times of the GEANT3 and EGS4 codes for electron beams is shown in Table 4.1. The numbers shown are the amount of time in milliseconds (ms) for one history of an incident electron transported until its kinetic energy falls below a predefined cutoff value (ECUT or CUTELE). This comparison was carried out for four different electron beams.

The EGS4 time values appear to be half of the "no fluctuations" GEANT3 values consistently for all energies investigated. However, this trend is reversed when "restricted fluctuations" is chosen, and an improvement in GEANT3 calculation speed is noticed. Under this mode the parameter DCUTE was set to 500 keV, 1 MeV, and 2 MeV for all four electron energies. As DCUTE approaches the maximum energy of the δ-rays, the computation time decreases and tends to the values of the "full fluctuations" mode. This can be seen from the last two columns of Table 4.1. The time values under the "full fluctuations" mode in GEANT3 are consistently lower than those of EGS4; for instance, in the case of 20 MeV electrons, GEANT3 is three times faster than EGS4. Therefore, the fastest mode under which GEANT3 can operate is the "full fluctuations" mode.
The tracking medium parameter (STMIN, discussed in Section 4.1.2) was set to zero in the timing study. An attempt to increase this parameter to 0.007 cm for water, as recommended [23], yielded an increase in speed by a factor of 10 for all energies considered (Table 4.2). This parameter is implemented in the GEANT3 code itself, not in the usercode DOSE, and it is regarded as a variance reduction tool. Furthermore, increasing STMIN to 0.007 cm gave an accuracy better than 2% when compared with the STMIN zero setting. This shows the crucial role of this parameter when calculation time is of practical importance.

Table 4.1: Calculation time comparison between EGS4 and GEANT3 per event history. ESTEPE and DEEMAX were set to 0.01, ECUT and CUTELE were set to 10 keV, and AE was set to 521 keV, while DCUTE is given below.

  Energy   EGS4 XYZDOS (ms)   GEANT3
  (MeV)    AE=10 keV          No fluctuations (ms)   Restricted fluctuations (ms)                    Full fluctuations (ms)
                              DCUTE=10 keV           DCUTE=0.5 MeV   DCUTE=1.0 MeV   DCUTE=2.0 MeV   DCUTE=10 TeV
  5        28.0               53.8                   26.5            23.8            23.0            23.1
  10       51.6               97.7                   38.7            33.1            30.1            28.9
  17       83.7               157.2                  53.7            44.3            38.9            33.6
  20       95.6               175.0                  59.9            45.8            40.8            35.3

4.4 Summary

The Monte Carlo codes EGS4 and GEANT3 have been compared for calculations of photon and electron depth doses in water. For photons, monoenergetic (6 MeV and 20 MeV) and polyenergetic (6 MV and 18 MV) beams were considered, and experimental results from a CL 21EX Varian linac were used for comparisons. In all cases excellent agreement was found between the two codes. For electron beams, energies ranging from 5 MeV to 20 MeV were chosen and again good agreement was found, except for the interface dose, where GEANT3 underestimates the dose compared to EGS4, especially for the higher energies. The discrepancy in the interface dose is attributed to the role of PRESTA in EGS4 in choosing an appropriate electron step size at interfaces in small scoring voxels. The three electron transport modes implemented in GEANT3 were used for the above electron beams, and their accuracy correlated with increasing computation time. To conclude the comparison, a timing study was performed. GEANT3 was seen to be two times slower than EGS4 in one of its electron transport modes and up to three times faster when adopting its energy straggling mode, which accounts for the energy loss using the Landau/Vavilov/Gaussian models, each being used according to its validity limits.

Table 4.2: Calculation time comparison between EGS4 and GEANT3 per electron event history. STMIN is set as recommended in the GEANT3 manual.

  Energy   EGS4 XYZDOS (ms)   GEANT3 No fluctuations (ms)
  (MeV)    AE=10 keV          DCUTE=10 keV
  5        28.0               4.3
  10       51.6               6.6
  17       83.7               9.3
  20       95.6               10.4

Chapter 5

Conclusions and future directions

In this thesis, we have investigated a series of topics dealing with radiation dose calculations in heterogeneous tissues. This last chapter summarizes and presents the conclusions of this work. The first topic was an investigation of the limitations of several current treatment planning algorithms for dose calculations in heterogeneous phantoms with air cavities. The second topic was a description of an analytical approach to the calculation of the primary Compton dose distribution in both homogeneous and heterogeneous phantoms, and the derivation of an inhomogeneity correction factor. Finally, in the pursuit of treatment planning based on Monte Carlo simulation, we explored the potential of GEANT3 for dose calculation in clinically relevant cases.
5.1 Summary of work

The perturbation of dose distributions by air cavities was investigated as a function of photon energy, field size, size and shape of the cavity, and depth of the point of measurement for varying geometries. Data presented in this work show the alteration of the dose at air/polystyrene interfaces for different air gap thicknesses as determined by commercial treatment planning systems using the Batho, modified Batho and ETAR methods, compared to experiments. The latter demonstrated significant underdosing, down to a relative dose at the interface of 0.45 for a 5 cm air gap, 6 MV and a 5x5 cm2 field size, while the treatment planning systems over-predicted the relative dose (0.9). These differences are due to the fact that electronic disequilibrium is not accounted for by the algorithms used in the treatment planning systems. Geometries with air gap widths ranging from 2x2 cm2 to 20x20 cm2 demonstrated the effect of the scatter component of the dose from the walls and its magnitude at the interfaces. The underdosing observed may be significant at depths of several centimeters from the interface for smaller field sizes and for air gaps approaching the field size in their lateral dimensions. These findings point to the need for proper inclusion of electron disequilibrium effects at interfaces, as would be possible with Monte Carlo simulation and superposition/convolution algorithms [68]. This is an area of active investigation [82, 103].

The concept of separating the primary and scatter components of dose was applied to the derivation of an inhomogeneity correction factor (ICF). The ICF was calculated as a weighted sum of primary and scatter components. The ICFp distribution in the presence of air cavities was calculated employing detailed electron transport. The primary Compton dose was calculated using the Fermi-Eyges multiple scattering theory in the framework of the "photon-electron cascade" model [110]. The steps taken involve determining the primary photon fluence distribution incident on every layer of the medium, calculating from the photon fluence the Compton electron distribution exiting the given layer, transporting the electrons to the downstream layers to obtain the electron planar fluence distribution and converting it to dose, and summing the dose contributions over all electron energy intervals to obtain the primary Compton dose. A comprehensive study of the different assumptions made, i.e., neglecting delta-ray (δ-ray) production and assuming forward-directed secondary electrons, was presented. The calculated primary Compton dose agreed with Monte Carlo simulation, within the statistical fluctuations of the simulation results, over the range of energies investigated for monoenergetic photons (1-20 MeV) and polyenergetic (6-18 MV) photon beams.
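The sequence of steps just described can be summarized by the following structural sketch. Every routine named here is a placeholder standing for the corresponding step, with deliberately oversimplified bodies so that the fragment is self-contained and compiles; the working implementation is the BATOUL.c code of Appendix B.

#include <math.h>

#define N_LAYERS 100     /* illustrative grid: N_LAYERS layers of thickness DZ */
#define N_EBINS  20      /* illustrative number of electron energy bins        */
#define DZ       0.2     /* cm, as in the Monte Carlo voxel size                */
#define MU       0.05    /* 1/cm, illustrative attenuation coefficient          */

/* Oversimplified stand-ins for the real expressions (Klein-Nishina
 * spectrum, Fermi-Eyges transport, restricted stopping powers). */
static double photon_fluence_at(int layer) { return exp(-MU * layer * DZ); }
static double compton_electron_yield(double fluence, int ebin)
{ (void)ebin; return fluence / N_EBINS; }
static double planar_fluence_to_dose(int origin, int down, int ebin)
{ (void)ebin; return (down - origin) < 15 ? 1.0 : 0.0; }

/* dose[] must be zero-initialized by the caller. */
static void primary_compton_dose(double dose[N_LAYERS])
{
    int layer, ebin, down;
    double fluence, n_e;
    for (layer = 0; layer < N_LAYERS; ++layer) {
        /* 1. primary photon fluence incident on this layer */
        fluence = photon_fluence_at(layer);
        for (ebin = 0; ebin < N_EBINS; ++ebin) {
            /* 2. Compton (Klein-Nishina) recoil electrons set in motion here */
            n_e = compton_electron_yield(fluence, ebin);
            /* 3. transport them downstream (Fermi-Eyges) and
             * 4. convert the planar fluence to dose, accumulated per layer */
            for (down = layer; down < N_LAYERS; ++down)
                dose[down] += n_e * planar_fluence_to_dose(layer, down, ebin);
        }
    }
}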
Since electron transport is not as crucial in the scatter dose calculation as in the primary case, a semi-empirical method for the ICFs calculation is proposed. A relationship to the ETAR correction method was established, and depth dose curves in inhomogeneous phantoms with air cavities were derived. The total ICF was calculated and found to be in agreement to within 4% with Monte Carlo simulations and experiments in the second buildup region seen beyond the distal water/air interface. Our primary correction model as it stands would be an ideal method for inhomogeneity correction in radiosurgery, where photon irradiations with field sizes smaller than 5x5 cm2 are employed and more electronic disequilibrium effects are seen. However, for the larger field sizes the scatter correction method (ICFs) needs to be incorporated as discussed.

A method for minimizing computation time was devised. For obtaining an ICF distribution, it was found that the choice of the parameters of integration does affect the absolute dose distribution but not the ICF distribution. These parameters were investigated, and calculation times were within a few minutes even for the high energy limit (20 MeV photons). Furthermore, a numerical technique was devised for the accurate and fast calculation of the absolute primary Compton dose. An optimization formula, which relates the different integration interval variables, was derived, and the dose function can now be accurately evaluated with a minimal number of intervals for the variables.

Proper modeling of photon and electron transport in photon and electron beams in the low megavoltage energy range can be done using GEANT3. Comparisons to EGS4-calculated photon and electron depth dose curves in clinically relevant cases gave results within 2%. The three electron transport modes in GEANT3 give it diversity and more options in which accuracy can be traded against speed. Extra care should be taken while calculating electron depth dose curves when using the fluctuation modes in GEANT3: in these modes delta rays are not produced below the parameter DCUTE, and energy loss straggling based on the Landau/Vavilov/Gaussian models is employed instead. Under this option it is recommended to use a large voxel size for dose scoring (around 0.5x0.5x0.5 cm3) that can accommodate the range of the large number of delta rays produced below 1 MeV. Timing comparisons between GEANT3 and EGS4 show that a significant saving in computation time can be achieved under the "full fluctuations" mode in GEANT3 version 96a.

5.2 Future directions

Much work is presented in this thesis on several topics of dose calculation in inhomogeneous phantoms. Nevertheless, there are still many avenues of investigation which could be followed to continue this work and improve the accuracy of the calculations. In the process of calculating the primary Compton dose distribution, the numerical integration techniques used to evaluate the integrals were fairly basic ones; improvements in accuracy and computation time should be possible with more specialized techniques such as the Gaussian quadrature method. In the same light, optimization of the code (Appendix B) would also increase the speed of this implementation.

There is also more work that could be done validating the clinical usefulness of our method with regard to other inhomogeneities, for instance lungs. The present approach was used for semi-infinite slab geometry; it is also applicable to dose calculations for cases with three-dimensional inhomogeneities. It is worthwhile to point out that a number of such cases were investigated but were too few to be included in this thesis.

For the calculation of a total inhomogeneity correction factor, other approaches can be considered, such as detailed electron transport in areas of interest and semi-empirical calculations (using Batho or ETAR corrections) elsewhere. In inhomogeneous phantoms it is sufficient to do the calculation only at the inhomogeneity site and at the second buildup region beyond the distal water/air interface. Hence only electrons generated inside the inhomogeneity area and around it within a margin of dmax (the maximum distance for electron travel at the given energy) need to be transported, which can further reduce the
This approach was adopted with the Monte Carlo and the convolution/superposition methods [108] and found promising in clinically relevant cases. 114 Bibliography [1] A. Ahnesjo. Collapsed cone convolution of radiant energy for photon dose calculation in heterogeneous media. Med. Phys., 16:577 - 592, 1989. [2] A. Ahnesjo and M. M. Aspradakis. Dose calculations for external photon beams in radiotherapy. Phys. Med. Biol., 44:R99-R155, 1999. [3] P. Andreo and A. Brahme. Restricted energy loss straggling and multiple scattering of electrons in mixed Monte Carlo procedures. Radiat. Res., 100:16 - 29, 1984. [4] F. H. Attix. Introduction to radiological physics and radiation dosimetry. New York: Wiley, 1986. [5] F. H. Attix and W. C. Roesch. Radiation Dosimetry. Academic Press, New York and London, second edition, 1968. [6] H. F. Batho. Lung corrections in cobalt 60 beam therapy. J. Can. Assn. Radiol, 15:39 - 83, 1964. [7] J. J. Battista, T. R. Mackie, E. El-Khatib, and J. W. Scrimger. Lung dose corrections for 6 MV and 15 MV X-rays: anomalies. In Proceed-ings of the 8th International Conference on the Use of Computers in Radiation Therapy, 1984. 115 [8] J. L. Beach, M. S. Mendiondo, and 0. A. Mendiondo. A comparison of air-cavity inhomogeneity effects for cobalt-60, 6 and 10 MV x-ray beams. Med. Phys., 14:140 - 144, 1987. [9] R. Bellman and G. M. Wing. An introduction to Invariant Embedding. New York: Wiley, 1975. [10] M. J. Berger. Monte Carlo calculation of the penetration and diffusion of fast charged particles. Methods Comput. Phys., 1, 1963. [11] H. A. Bethe. Arm. Phys., 5:325, 1930. [12] H. A. Bethe. Phys. Rev., 89:1256, 1953. [13] H. J. Bhabha. The scattering of positrons by electrons with exchange on Dirac's theory of the positron. Proc. Roy. Soc. (London), A154:195, 1936. [14] A. F. Bielajew, A. E. Nahum, W. R. Nelson, and D. W. O. Rogers. "A course on Radiation Transport Calculations using EGS4". National Research Council of Canada, Ottawa. [15] A. F. Bielajew and D. W. O. Rogers. PRESTA: the parameter reduced electron-step transport algorithm for electron Monte Carlo transport. Nucl. Instr. Methods, B 18:165 - 181, 1987. [16] A. F. Bielajew and D. W. O. Rogers. A standard timing benchmark for EGS4 Monte Carlo calculations. Med. Phys., 19:303 - 313, 1992. [17] F. Bloch. Z. fur Phys., 81:363, 1933. [18] C. Bogers. Complexity of Monte Carlo and deterministic dose-calculation methods. Phys. Med. Biol, 43:517 - 528, 1998. 116 [19] A. L. Boyer and E. C. Mok. A photon dose distribution model employing convolution calculations. Med. Phys., 12:169 - 177, 1985. [20] A. L. Boyer and E. C. Mok. Calculation of photon dose distributions in an inhomogeneous medium using convolutions. Med. Phys., 13:503 -509, 1986. [21] A. Brahme, J. Chavaudra, T. Landberg, E. C. McCullough, F. Nusslin, J. A. Rawlinson, G. Svensson, and H. Svensson. Accuracy requirements and quality assurance of external beam therapy with photons and elec-trons. Acta Oncol., Suppl. 1, pages 1 - 76, 1988. [22] J. F. Briesmeister. MCNP-A general Monte Carlo N-particle transport code, Version ^ A. Los Alamos National Laboratory Report, LA-12625, 1993. [23] R. Brun, F. B. Ruyant, M. Maire, A. C. McPherson, and P. Zanarnit. GEANTS-Detector description and simulation tool (Reference Manual). March 1994. [24] K. J. Cassell, P. A. Hobday, and R. P. Parker. The implementation of a generalised Batho inhomogeneity correction for radiotherapy planning with direct use of CT numbers. Phys. Med. Biol, 26:835 - 833, 1981. [25] A. Clouvas, S. 
Xanthos, M. Antonopoulos-Domis, and J. Silva. Monte Carlo based method for conversion of in-situ gamma ray spectra ob-tained with a portable Ge detector to an incident photon flux energy distribution. Health Phys., 2:208 - 210, 1998. 117 [26] J. R. Cunningham. Current and future development of tissue inhomo-geneity corrections for photon beam clinical dosimetry with the use of CT. Raven Press, 1983. [27] J. Cygler, J. J. Battista, J. W. Scrimger, E. Mah, and J. Antolak. Elec-tron dose distributions in experimental phantoms: A comparison with 2D pencil beam calculations. Phys. Med. Biol, 32:1073, 1987. [28] J. J. DeMarco, T. D. Solberg, and J. B. Smathers. A CT-based Monte Carlo simulation tool for dosimetry planning and analysis. Med. Phys., 25:1 - 11, 1997. [29] M. Dollinger, E. H. Rosenbaum, and G. Cable. Everyone's guide to Cancer Therapy. Somerville House publishing, Toronto, second edition, 1995. [30] J. Van Dyk, R. B. Barnett, J. E. Cygler, and P. C. Shragge. Commis-sioning and quality assurance of treatment planning computers. Int. J. Radiat. Oncol. Biol. Phys., 26:261 - 273, 1993. [31] E. El-Khatib. The present status of lung dosimetry in radiation treat-ment planning. AAMD J., 10:9 - 15, 1985. [32] E. El-Khatib and J. J. Battista. Improved lung dose calculation using tissue-maximum ratios in the Batho correction. Med. Phys., 11:279 -286, 1984. [33] E. R. Epp, A. L. Boyer, and K. P. Doppke. Underdosing of lesions result-ing from lack of electronic equilibrium in upper respiratory air cavities irradiated by 10 MV x-ray beams. Int. J. Radiat. Oncol. Biol. Phys., 2:613 - 619, 1977. [34] R. D. Evans. The Atomic Nucleus. New York: McGraw-Hill, 1952. [35] L. Eyges. Multiple scattering with energy loss. Phys. Rev., 74:1534, 1948. [36] U. Fano. Note on the Bragg-Gray cavity principle for measuring energy dissipation. Radiat. Res., 1:237 - 240, 1954. [37] A. Ferlito. Cancer of the Larynx. CRC Press, Inc., 1985. [38] M. Fippel. Fast Monte Carlo dose calculation for photon beams based on the VMC electron algorithm. Med. Phys., 26:1466 - 1475, 1999. [39] B. Gottschalk, R. Platais, and H. Pagenetti. Nuclear interactions of 160 MeV protons stopping in copper: A test of Monte Carlo nuclear models. Med. Phys., 26:2597 - 2601, 1999. [40] D. Harder. Spectra of primary and secondary electrons in material irra-diated by fast electrons. Biophysical Aspects of Radiation Quality, IAEA Technical Report Ser. No. 58 (International Atomic Energy Agency, Vi-enna), 1966. [41] D. Harder. On electron dosimetry. In A. Zupinger and G. Poretti, editors, Symposium on High Energy Electronics, page 260. Springer, Berlin, 1970. [42] C. L. Hartmann-Siantar, W. P. Chandler, N. W. Albright, L. J. Verhey, S. M. Hornstein, L. J. Cox, J. A. Rathkopf, and M. M. Svatos. Valida-tion and performance assessment of the PEREGRINE all-particle Monte Carlo code for photon beam therapy. Med. Phys., 23:1128, 1996. [43] C. L. Hartmann-Siantar, W. P. Chandler, M. B. Chadwick, H. M. Blann, L. J. Cox, D. A. Resler, J. A. Rathkopf, T. R. Mackie, J. V. Siebers, 119 M. A. Ross, P. M. DeLuca, K. A. Weaver, and R. M. White. Dose distributions calculated with the PEREGRINE all-particle Monte Carlo code. Med. Phys., 22:994, 1995. [44] K. R. Hogstrom, M. D. Mills, and P. R. Almond. Electron beam dose calculations. Phys. Med. Biol, 26:445 - 459, 1981. [45] T. M. Jenkins, W. R. Nelson, and A. Rindi (editors). Monte Carlo transport of electrons and photons. Plenum Press, New York, 1988. [46] R. Jeraj, P. J. Keall, and P. M. Ostwald. 
Comparisons between MCNP, EGS4 and experiment for clinical electron beams. Phys. Med. Biol, 44:705 - 717, 1999. [47] D. Jette. The application of multiple scattering theory to therapeutic electron dosimetry. Med. Phys., 10:141 - 146, 1983. [48] D. Jette. Electron dose calculation using multiple-scattering theory. A. Gaussian multiple-scattering theory. Med. Phys., 15:123 - 137, 1988. [49] D. Jette. Photon dose calculation based on electron multiple-scattering theory: practical representation of dose and particle transport integrals. Med. Phys., 26:924 - 930, 1999. [50] H. E. Johns and J. R. Cunningham. The Physics of Radiology. Charles C Thomas, fourth edition, 1983. [51] C. J. Karzmark, C. S. Numan, and E. Tanaba. Medical Electron Accel-erators. Mcgraw-Hill, Inc., 1993. [52] K. R. Kase, B. E. Bjarngard, and F. H. Attix (editors). The dosimetry of ionizing radiation, volume III. New York: Academic, 1968. 120 [53] I Kawrakow and A. F. Bielajew. On the condensed history technique for electron transport. Nucl. Instr. Methods, 13:253 - 280, 1998. [54] P. J. Keall and P. H. Hoban. Superposition dose calculation incorporat-ing Monte Carlo generated electron track kernels. Med. Phy., 23:479 -485, 1996. [55] E. Keil, E. Zeitler, and W. Zinn. Zur Einfach- und Mehrfachstreuung geladener Teilchen. Z. Naturforsch, 15a:1031, 1960. [56] F. M. Khan. The physics of radiation therapy. Williams and Wilkins, second edition, 1994. [57] E. E. Klein, L. M. Chin, R. K. Rice, and B. J. Mijnheer. The influence of air cavities on interface doses for photon beams. Int. J. Radiat. Oncol. Biol. Phys., 27:419 - 427, 1993. [58] T. Knoos, C. Ceberg, L. Weber, and P. Nilsson. The dosimetric verifi-cation of a pencil beam treatment planning system. Phys. Med. Biol, 39:1609 - 1628, 1994. [59] R. O. Kornelsen and M. E. J. Young. Empirical equations for the repre-sentation of depth dose data for computerized treatment planning. Med. Phys., 48:739 - 748, 1975. [60] W. Kwa. Asymmetric Collimation: Dosimetric Characteristics, Treat-ment Planning Algoritm, and Clinical Applications. The University of British Columbia (Ph.D. thesis), 1998. [61] L. Landau. On the energy loss of fast particles by ionization. J. Phys. USSR, 8:201 - 205, 1944. 121 [62] A. Likar, T. Vidmar, and B. Pucelj. Monte Carlo determination of gamma-ray dose rate with the GEANT system. Health. Phys., 2:165 -169, 1998. [63] P. A. Love, D. G. Lewis, A. M. Al-Affan, and C. W. Smith. Comparison of EGS4 and MCNP Monte Carlo codes when calculating radiotherapy depth doses. Phys. Med. Biol, 43:1351 - 1357, 1998. [64] C. M. Ma, E. Mok, A. Kapur, T. Pawlicki, D. Findley, S. Brain, K. Forster, and A. L. Boyer. Clinical implementation of a Monte Carlo treatment planning system. Med. Phys., 26:2133 - 2143, 1999. [65] C. M. Ma, P. J. Reckwerdt, M. Holmes, D. W. O. Rogers, and B. Geiser. DOSXYZ Users Manual. National Research Council of Canada Report No. PIRS-509B, (NRCC, Ottawa, Canada 1995). [66] T. R. Mackie, E. El-Khatib, J. J. Battista, J. Scrimger, J. Van Dyk, and J. R. Cunningham. Lung dose corrections for 6 MV and 15 MV X-rays. Med. Phys., 12:327- 332, 1985. [67] T. R. Mackie, J. Scrimger, and J. J. Battista. A convolution method of calculating dose for 15 MV x-rays. Med. Phys., 12:188 - 196, 1985. [68] T. R. Mackie, J. Scrimger, J. J. Battista, and E. El-Khatib. A con-volution method for calculating dose in situations of lateral electronic disequilibrium. Med. Phys., 11:397, 1984. [69] P. Metcalfe, T. Kron, and P. Hoban. 
The Physics of Radiotherapy X-Rays from Linear Accelerators. Medical Physics Publishing, Madison, Wisconsin, 1997. 122 [70] R. R. Million. The larynx ... so to speak: Everything I wanted to know about laryngeal cancer I learned in the last 32 years. Int. J. Radiat. Oncol. Biol. Phys., 23:691 - 704, 1992. [71] R. Mohan and C. Chui. Energy and angular distributions of photons from medical linear accelerators. Med. Phys., 12:592 - 597, 1985. [72] R. Mohan and C. Chui. Differential pencil beam dose computation model for photons. Med. Phys., 13:64 - 73, 1986. [73] G . Z. Moliere. Theorie der Streuung schneller geladener Teilchen I: Einzelstreuung am abgeschirmten Coulomb-Feld. Z. Naturforsch., 2a:133, 1947. [74] G . Z. Moliere. Theorie der Streuung schneller geladener Teilchen II: Mehrfach- und Vielfachstreuung. Z. Naturforsch., 3a:78, 1948. [75] C. Moller. Zur Theorie des Durchgangs schneller Electronen durch Ma-terie. Ann. Phys., 14:568, 1932. [76] R. Nelson, H. Hirayama, and D. W. O. Rogers. The EGS4 Code Sys-tem. Stanford Linear Accelerator Center Report, No. SLAC-265, (SLAC, Stanford, C A , 1985). [77] A . Niroomand-Rad, K . W. Harter, S. Thobejane, and K . Bertrand. A i r cavity effects on the radiation dose to the larynx using Co-60, 6 M V , and 10 M V photon beams. Int. J. Radiat. Oncol. Biol. Phys., 29:1139 -1146, 1994. [78] J . E . O'Connor. The variation of scattered x-rays with density in an irradiated body. Phys. Med. Biol, 1:352 - 369, 1957. 123 [79] International Commission on Radiation Units and Measurements (ICRU). Radiation dosimetry: Electron beams with energies between 1 and 50 MeV. Report No. 35, (ICRU Washington, D C , 1984). [80] R M . Ostwald, T Kron, and C. Hamilton. Assessment of mucosal under-dosing in larynx irradiation. Int. J. Radiat. Oncol. Biol. Phys., 36:181 -187, 1996. [81] S. Ozard and E. El-Khatib. Portal detector scatter dose calculation: Comparison of Convolution/Superposition with Monte Carlo. Med. Phys., 26:1431, 1999. [82] N . Papanikolaou, T. R. Mackie, C. Merger-Wells, M . Gehring, and P. Reckwerdt. Investigation of the convolution method for polyenergetic spectra. Med. Phys., 20:1327- 1336, 1993. [83] D. J . Perry and J . G. Holt. A model for calculating the effects of small inhomogeneities on electron beam dose distributions. Med. Phys., 7:207 - 212, 1980. [84] W. H . Press, W. T. Vetterling, S. A . Teukolsky, and B. P. Flannery. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, second edition, 1992. [85] D. W. O. Rogers. Low energy electron transport with EGS. Nucl. Instr. Meth., A227:535 - 548, 1984. [86] D. W. O. Rogers and A . F. Bielajew. Differences in electron depth-dose curves calculated with EGS and E T R A N and improved energy-range relationships. Med. Phys., 13:687 - 694, 1986. 124 [87] D. W. 0. Rogers, B. A. Faddegon, G. X. Ding, C. M. Ma, J. We, and T. R. Mackie. BEAM: a Monte Carlo code to simulate radiotherapy treatment units. Med. Phys., 22:503 - 524, 1995. [88] B. Rossi and K. Greisen. Rev. Mod. Phys., 13:240, 1941. [89] H. H. Rossi and W. C. Roesch. Field equations in dosimetry. Radiat. Res., 16:783 - 795, 1962. [90] B. Schorr. Programs for the Landau and the Vavilov distributions and the corresponding random numbers. Comp. Phys. Comm., 7:216, 1974. [91] S. M. Seltzer and M. J. Berger. Technical Report SP 3012, NASA, 1964. [92] S. M. Seltzer and M. J. Berger. Energy loss straggling of protons and mesons. In studies in Penetration of Charged Particles in Matter. Nu-clear Science Series 39, Nat. 
Academy of Sciences, Washington DC, 1964.

[93] B. H. Shahine and E. El-Khatib. Experimental evaluation of interface doses in the presence of air cavities compared with treatment planning algorithms. Med. Phys., 26:350 - 355, 1999.

[94] M. R. Sontag and J. R. Cunningham. Corrections to absorbed dose calculations for tissue inhomogeneities. Med. Phys., 4:431 - 436, 1977.

[95] M. R. Sontag and J. R. Cunningham. The equivalent tissue-air ratio method for making absorbed dose calculations in a heterogeneous medium. Radiology, 129:787 - 794, 1978.

[96] G. Taccini, F. Cavagnetto, G. Coscia, S. Garelli, and A. Pilot. The determination of dose characteristics of ruthenium ophthalmic applicators using radiochromic film. Med. Phys., 24:2034 - 2037, 1997.

[97] S. J. Thomas. A modified power-law formula for inhomogeneity correction in beams of high-energy x-rays. Med. Phys., 18:719 - 723, 1991.

[98] P. V. Vavilov. Ionisation losses of high energy heavy particles. Soviet Physics JETP, 5:749, 1957.

[99] R. G. Waggener, M. M. Blough, J. A. Terry, D. Chen, N. E. Lee, S. Zhang, and W. D. McDavid. X-ray spectra estimation using attenuation measurements from 25 kVp to 18 MV. Med. Phys., 26:1269 - 1278, 1999.

[100] L. Wang and D. Jette. Photon dose calculation based on electron multiple-scattering theory: primary dose deposition kernels. Med. Phys., 26:1454 - 1465, 1999.

[101] L. Wang, M. Lovelock, and C. S. Chui. Experimental verification of a CT-based Monte Carlo dose-calculation method in heterogeneous phantoms. Med. Phys., 26:2626 - 2634, 1999.

[102] S. Webb and R. A. Fox. Verification by Monte Carlo methods of a power law tissue-air ratio algorithm for inhomogeneity corrections in photon beam dose calculations. Phys. Med. Biol., 25:225 - 240, 1980.

[103] C. M. Wells, T. R. Mackie, M. B. Podgorsak, M. A. Holmes, N. Papanikolaou, P. J. Reckwerdt, J. Cygler, D. W. O. Rogers, A. F. Bielajew, and D. G. Schmidt. Measurements of the electron dose distribution near inhomogeneities using a plastic scintillation detector. Int. J. Radiat. Oncol. Biol. Phys., 29:1157 - 1165, 1994.

[104] B. L. Werner, F. M. Khan, and F. C. Deibel. A model for calculating electron beam scattering in treatment planning. Med. Phys., 9:180 - 187, 1982.

[105] J. W. Wong and J. A. Purdy. On methods of inhomogeneity corrections for photon transport. Med. Phys., 17:807 - 814, 1990.

[106] M. K. Woo and J. R. Cunningham. The validity of the scaling method in primary electron transport for photon and electron beams. Med. Phys., 17:187 - 194, 1990.

[107] M. K. Woo, J. R. Cunningham, and J. J. Jezioranski. Extending the concept of primary and scatter separation to the condition of electronic disequilibrium. Med. Phys., 17:588 - 595, 1990.

[108] M. K. Woo, D. J. Scora, and E. Wong. The regional Monte Carlo method: a dose calculation method based on accuracy requirement. Med. Phys., 25:1866 - 1871, 1998.

[109] M. E. J. Young and R. O. Kornelsen. Dose corrections for low-density tissue inhomogeneities and air channels for 10 MV x-rays. Med. Phys., 10:450 - 455, 1983.

[110] C. X. Yu, T. R. Mackie, and J. W. Wong. Photon dose calculation incorporating electron transport. Med. Phys., 22:1157 - 1165, 1995.

Appendix A

Derivation of generalized scattering moments for n layers

A. Scattering moments in homogeneous phantom

The i-th scattering moment at a given depth z in a medium having a scattering power k(z) is defined as

A_i(z) = \int_0^z k(z') (z - z')^i \, dz'.   (1)

Tables of scattering power and stopping power as a function of electron energy are listed in ICRU Report No. 35 [79].
To convert from energy to depth, we use Harder's equation [41], which assumes that the energy of an electron decreases linearly with depth as it travels down the medium. This relationship is given by

E = E_0 (1 - z/R_p),   (2)

where E_0 is the energy of the incident electrons and R_p is the practical range of the electrons, whose constants are given by ICRU 35. In order to evaluate the above integral (A_i(z)), numerical techniques can be employed. However, if we can evaluate A_i(z) explicitly (analytically), a great saving in computation time can be achieved. To do that, we need to convert the tabulated values of the scattering power into an analytical function. An appropriate parametric function is

k(E) = A E^{-B},   (4)

where A and B can be estimated using the least squares method. The best estimates derived for water are 3.2 radian^2 cm^2/g and 1.6 for A and B respectively. In this thesis, scattering moments up to the second order are of interest, and their expressions were derived using Mathematica Version 3 as

A_0(z) = A \left(\frac{E_0}{R_p}\right)^{-B} \frac{R_p^{1-B} - (R_p - z)^{1-B}}{1 - B},   (5)

A_1(z) = A \left(\frac{E_0}{R_p}\right)^{-B} \frac{(R_p - z)^{2-B} + R_p^{1-B} [(2 - B) z - R_p]}{(1 - B)(2 - B)},   (6)

A_2(z) = A \left(\frac{E_0}{R_p}\right)^{-B} \frac{-2 (R_p - z)^{3-B} + R_p^{1-B} [2 R_p^2 - 2(3 - B) R_p z + (6 - 5B + B^2) z^2]}{(1 - B)(2 - B)(3 - B)}.   (7)

B. Scattering moments in inhomogeneous phantom

For a multilayered geometry, Equation 1 cannot be evaluated explicitly; a solution in series form is sought. Since z can be expressed as

z = t_1 + t_2 + \cdots + t_n,   (8)

the i-th moment will be the sum of n integrals; hence

A_i(z) = \int_0^{t_1} k(z')(z - z')^i \, dz' + \int_{t_1}^{t_1 + t_2} k(z')(z - z')^i \, dz' + \cdots + \int_{t_1 + \cdots + t_{n-1}}^{t_1 + \cdots + t_n} k(z')(z - z')^i \, dz'.   (9)

Making the following substitution for the scattering moment of a given layer n,

A_i(t_n) = \int_{t_1 + \cdots + t_{n-1}}^{t_1 + t_2 + \cdots + t_n} k(z')(z - z')^i \, dz',   (10)

and using recursion, we arrive at a general form for A_i(z) at the depth z in terms of the lower order moments of the overlying layers. Hence

A_i(z) = \sum_{l=1}^{n} \sum_{j=0}^{i} \sum_{m=0}^{i} a_m A_j(t_l) \left( \sum_{h=l+1}^{n} t_h \right)^m \delta_{i,j+m},   (11)

where we have set the i-th power factor as follows,

\left( 1 + \sum_{h=l+1}^{n} t_h \right)^i = \sum_{m=0}^{i} a_m \left( \sum_{h=l+1}^{n} t_h \right)^m,   (12)

a_m is the constant factor in the power expansion and \delta_{i,j+m} is the Kronecker delta, defined by

\delta_{pq} = 1 if p = q, and \delta_{pq} = 0 if p \neq q.   (13)

For example, the second moment at a depth z = t_1 + t_2 + t_3 + t_4 in a four-layer phantom can be expressed as

A_2(z) = A_0(t_1)(t_2 + t_3 + t_4)^2 + 2 A_1(t_1)(t_2 + t_3 + t_4) + A_2(t_1)
       + A_0(t_2)(t_3 + t_4)^2 + 2 A_1(t_2)(t_3 + t_4) + A_2(t_2)
       + A_0(t_3) t_4^2 + 2 A_1(t_3) t_4 + A_2(t_3) + A_2(t_4),   (14)

where we have set a_1 = 1, a_2 = 2, and a_3 = 1 for i = 2. For i = 3, a_1 = 1, a_2 = 3, a_3 = 3, and a_4 = 1.
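For a homogeneous medium, Equations 5-7 can be evaluated directly. The following routine is a transcription of these closed forms (valid for 0 <= z < R_p); the corresponding working routines for water and air (Aow, Alw, A2w, Aoair, Alair, A2air) appear in the code of Appendix B, and the water values of A and B quoted above are used only as an example.

#include <math.h>

/* Scattering moments A0, A1, A2 of Equations 5-7 for a homogeneous
 * medium with scattering power k(E) = A*E^(-B) and the linear
 * energy-depth relation of Equation 2.  Valid for 0 <= z < Rp.
 * Example for water (cf. Aw and Bw in Appendix B):
 *   scattering_moments(3.2, 1.6, E0, Rp, z, &A0, &A1, &A2);          */
static void scattering_moments(double A, double B, double E0, double Rp,
                               double z, double *A0, double *A1, double *A2)
{
    double f  = A * pow(E0 / Rp, -B);          /* common prefactor          */
    double c  = Rp - z;                        /* residual practical range  */
    double b1 = 1.0 - B, b2 = 2.0 - B, b3 = 3.0 - B;

    *A0 = f * (pow(Rp, b1) - pow(c, b1)) / b1;
    *A1 = f * (pow(c, b2) + pow(Rp, b1) * (b2 * z - Rp)) / (b1 * b2);
    /* note that (6 - 5B + B^2) = (2 - B)(3 - B) = b2*b3 */
    *A2 = f * (-2.0 * pow(c, b3)
               + pow(Rp, b1) * (2.0 * Rp * Rp - 2.0 * b3 * Rp * z
                                + b2 * b3 * z * z)) / (b1 * b2 * b3);
}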
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>

#define pi 3.141592
#define twopi 6.2831853
#define sqrt2pi 2.5066283
#define eps 0.001
#define n3 150
#define si 50
#define n 1
#define M 1.6022e-10      /* MeV/g = 1.6022e-10 Gy */
#define cso 66.525e-26    /* cm2/el */
#define data 28
#define dwater 1.00       /* g/cm3 */
#define dair 1.205e-3
#define New 3.343e23      /* e/g */
#define Neair 3.006e23
#define Zw 7.51
#define Zair 7.78
#define Aw 3.2004
#define Bw 1.62655
#define Aair 0.0038604464
#define Bair 1.62704
#define xphan 4.0
#define xlphan -3.9901    /* ex -3.9901, 4.101 or 0.0001, 0.0... */
#define x2phan 4.101
#define yphan 4.0
#define ylphan -3.9901
#define y2phan 4.101
#define zlphan 4.1
#define z2phan 4.2
#define xlcut -2.499
#define x2cut 2.501
#define ylcut -2.499
#define y2cut 2.501
#define dslab 3.0
#define gap 5.0
#define dX 0.2
#define dY 0.2
#define dZ 0.2
#define Nx 41
#define Ny 41
#define Nz 90
#define div 10.0
#define lamdal 0.85       /* 0.8, 0.7-0.7-0.9 */
#define lamda2 1.0        /* 0.7-0.7-0.9 */
#define Event 1e6
#define Eflag 1           /* 1=mono, 2=poly */
#define Ihnu 5
#define Ecut 0.01         /* 1.0 - 0.04, 0.1 - 0.1 - 0.02 */
#define dEo 0.793993      /* 0.503997-11.3002-0.793993-0.493169-0.5272741 0.326-0.3056-0.207 */

/* PROTOTYPES */
extern float dcsdE (float e);
extern float dcsdEspt (float e, float hn);
extern float Rcsda (float eo);
extern int intenergy (float e);
extern float energyair (float z, float eo);
extern float energyw (float z, float eo);
extern float Aow (float z, float eo);
extern float Alw (float z, float eo);
extern float A2w (float z, float eo);
extern float Aoair (float z, float eo);
extern float Alair (float z, float eo);
extern float A2air (float z, float eo);
extern float sigmainh (float z, float eo);
extern float A2airC (float z, float eo);
extern float A2wC (float z, float eo);
extern float erfcc (float x);
extern float erfs (float x, float y, float sigma, float wt);
extern float exps (float x, float y, float sigma, float wt);
extern float scolairC (float e);
extern float scolwC (float e);

/* DATA */
/* Ihnu  0 1 2 3 4  5  6  7 8  9 10 11 12 */
/*       | | | | |  |  |  | |  |  |  |  | */
static float hnu[]={1,2,4,6,10,20,30,4,6,10,15,18,24};
/* hn > 30 -> spectrum */
static float Muair[]={0.0636,0.0445,0.0308,0.0251,0.0205,0.0171,0.0163,
0.051393,0.045026,0.034811,0.028347,0.022477,0.021286};
static float Muw[]={0.0707,0.0494,0.0340,0.0276,0.0222,0.0182,0.0171,
0.057154,0.050074,0.038714,0.031525,0.024997,0.031472};
static float CS[]={2.112e-25,1.464e-25,9.598e-26,7.323e-26,5.099e-26};
static float E[]={0.01,0.015,0.02,0.03,0.04,0.05,0.06,0.08,
0.1,0.15,0.2,0.3,0.4,0.5,0.6,0.8,
1.0,1.5,2.0,3.0,4.0,5.0,6.0,8.0,
10.0,15.0,20.0,30.0,40.0};
static float scolw[]={22.56,16.47,13.17,9.653,7.777,6.603,
5.797,4.757,4.115,3.238,2.793,2.355,
2.148,2.034,1.963,1.886,1.849,1.823, 1.824,1.846,1.870,1.892,1.911,1.943, 1.968,2.014,2.046,2.089}; static float scolair[]={19.75,14.45,11.57,8.492,6.848,5.819, 5.111,4.198,3.633,2.861,2.470,2.084, 1.902,1.802,1.743,1.683,1.661,1.661, 1.684,1.740,1.790,1.833,1.870,1.931, 1.979,2.069,2.134,2.226}; static float sradw[]={3.898e-3,3.944e-3,3.963e-3,3.984e-3, 4.005e-3,4.031e-3,4.062e-3,4.138e-3, 4.228e-3,4.494e-3,4.801e-3,5.514e-3, 6.339e-3,7.257e-3,8.254e-3,1.043e-2, 1.280e-2,1.942e-2,2.678e-2,4.299e-2, 6.058e-2,7.917e-2,9.854e-2,1.391e-l, 1.814e-l,2.926e-l,4.086e-l,6.489e-l}; static float sradair[]={3.897e-3,3.937e-3,3.954e-3,3.976e-3, 3.998e-3,4.025e-3,4.057e-3,4.133e-3, 4.222e-3,4.485e-3,4.789e-3,5.495e-3, 6.311e-3,7.223e-3,8.210e-3,1.036e-2, 1.271e-2,1.927e-2,2.656e-2,4.260e-2, 5.999e-2,7.838e-2,9.754e-2,1.376e-l, 1.795e-l,2.895e-l,4.042e-l,6.417e-l}; static float Tw[]={5.91e3,2.92e3,1.77e3,8.69e2,5.16e2,3.57e2, 2.61e2,1.59e2,1.09e2,5.57el,3.50el,1.84el, 1.18el,8.45,6.42,4.17,2.99,1.61,1.03,5.40e-l, 3.36e-l,2.31e-l,1.69e-l,1.03e-l,6.95e-2,3.37e-2, 2.00e-2,9.54e-3}; static float Tair[]={5.94e3,2.93e3,1.77e3,8.73e2,5.28e2,3.59e2, 134 2 . 6 2 e 2 , 1 . 6 0 e 2 , 1 . 1 0 e 2 , 5 . 6 0 e l , 3 . 5 1 e l , 1 . 8 5 e l , 1 . 1 9 e l , 8 . 4 8 , 6 . 4 5 , 4 . 1 9 , 3 . 0 0 , 1 . 6 2 , 1 . 0 4 , 5 . 4 2 e - l , 3 . 3 7 e - l , 2 . 3 2 e - l , 1 . 7 0 e - l , 1 . 0 3 e - l , 6 . 9 8 e - 2 , 3 . 3 8 e - 2 , 2 . 0 1 e - 2 , 9 . 5 7 e - 3 } ; /# * / f l o a t a l p h a , E m a x , Z , E G [ 5 ] [ 3 ] ,wtG[5] [ 3 ] , d p , W t a i r , W t w , M u , a r r s p e c t [ 2 ] [100] , d c s d E a r r [ 1 0 0 ] ; /* */ m a i n ( ) i i n t i , j , k , I , J , J o , i n t X c , i n t Y c , i n t Z , I E , I E m a x , m a x b i n ; f l o a t l , c , x , y , d y , n x , n y , n z , n b v , A , C , E i , E N , E N a i r , E N w , C F , F w , F a i r , S , S i g m a , X c , Y c , S c o l w , S c o l a i r , S c o l , D O S E [ N x ] [ N y ] [ N z ] , D O S E i n h [ N x ] [ N y ] [ N z ] , d e s , Nem2w, N e m 2 a i r , Cons tw , C o n s t a i r , p f l u e n c e , d E , n b , b i n , sumwt, h n , emax; c h a r m , s [ s l ] , s l [ s i ] , s 2 [ s l ] , s 3 [ s l ] , s 4 [ s l ] , s 5 [ s l ] ; F I L E * f i n , * f i n s p t , * f d a t a , * f o u t a i r , * f o u t w , * f C F ; p r i n t f ( " E n t e r t h e name o f t h e 3ddose f i l e : " ) ; s c a n f ( "7,s" ,&s) ; s t r c p y ( s l , s ) ; s t r c p y ( s 2 , s ) ; s t r c p y ( s 3 , s ) ; s t r c p y ( s 4 , s ) ; s t r c p y ( s 5 , s ) ; s t r c a t ( s l , " . 3 d d o s e " ) ; s t r c a t ( s 2 , " . c s d a t " ) ; s t r c a t ( s 3 , " . i n h 3 d d o s e " ) ; s t r c a t ( s 4 , " . w 3 d d o s e " ) ; s t r c a t ( s 5 , " T C F . 3 d d o s e " ) ; f i n = f o p e n ( s l , " r " ) f d a t a = f o p e n ( s 2 , " w " ) f o u t a i r = f o p e n ( s 3 , " w " ) foutw= f o p e n ( s 4 , " w " ) f C F = f o p e n ( s 5 , " w " ) f i n s p t = f o p e n ( " / h o m e / b i l a l / e g s 4 / e n s r c _ s p e c t r a / m o h a n 4 . s p e c t r u m 1 Nem2w= New*dwate r*dZ ; Nem2air= N e a i r * d a i r * d Z ; p f l u e n c e = E v e n t / ( ( x 2 c u t - x l c u t ) * ( y 2 c u t - y l e u t ) ) ; / * e/cm2 * / / * photons/< 135 Constw =pf luence*M*Nem2w; C o n s t a i r = p f l u e n c e * M * N e m 2 a i r ; a l p h a = h n u [ I h n u ] / 0 . 5 1 1 ; Emax = h n u [ I h n u ] * 2 * a l p h a / ( l + 2 * a l p h a ) ; f o r ( i = 0 ; i < 1 0 0 ; i + + ) d c s d E a r r [ i ] = a r r s p e c t [ 0 ] [ i ] = a r r s p e c t [ 1 ] [ i ] = 0 . 
0 ; i f ( E f l a g = = l ) •C I E=0; f o r ( E i = d E o / 2 . 0 ; E i < = E m a x ; E i = E i + d E o ) •C d c s d E a r r [ I E ] = d c s d E ( E i ) ; f p r i n t f ( f d a t a , " °/,g °/.g\n" , E i , d c s d E ( E i ) ) ; IE++; } I Emax=IE- l ; } i f ( E f l ag==2) •C j = 0 ; sumwt=0.0; w h i l e ( ( n b = f s c a n f ( f i n s p t , " °/.f, °/,f " ,&x,&y) )==2) i a r r s p e c t [ 0 ] [ j ] = x ; a r r s p e c t [ 1 ] [ j ] = y ; sumwt=sumwt+y; } b i n = a r r s p e c t [0] [ 1 ] - a r r s p e c t [0] [0] ; m a x b i n = j - l ; I E=0; f o r ( E i = d E o / 2 . 0 ; E i < = E m a x ; E i = E i + d E o ) { f o r (k=0;k<=maxbin;k++) •C i f (k>0) b i n = a r r s p e c t [ 0 ] [ k ] - a r r s p e c t [ 0 ] [ k - 1 ] ; h n = a r r s p e c t [ 0 ] [ k ] - b i n / 2 . 0 ; e m a x = ( 2 * h n * h n / 0 . 5 1 1 ) / ( l + 2 * h n / 0 . 5 1 1 ) ; 136 i f (emax>Ei) d c s d E a r r [ I E ] = d c s d E a r r [ I E ] + d c s d E s p t ( E i , h n ) * a r r s p e c t [1] [k] /sumwt; } f p r i n t f ( f d a t a , " % f J/.g\n" , E i . d c s d E a r r [ I E ] ) ; IE++; } I Emax=IE- l ; p r i n t f ( " ° / . d °/,f °/,f °/,g °/,g MV S p e c t r u m \ n " . m a x b i n . b i n , s u m w t , d c s d E a r r [ I E m a x ] . a r r s p e c t [ 0 ] [ m a x b i n ] ) ; } / * GAUSSIAN QUADRATURE DATA * / / * 1 MeV p h o t o n s lhnu=0 * / E G [ 0 ] [ 0 ] = 0 . 0 8 9 7 9 ; E G [ 0 ] [ 1 ] = 0 . 4 1 0 9 ; E G [ 0 ] [ 2 ] = 0 . 7 2 8 3 ; w t G [ 0 ] [ 0 ] = 0 . 2 5 7 2 ; w t G [ 0 ] [ 1 ] = 0 . 3 9 0 8 ; w t G [ 0 ] [ 2 ] = 0 . 3 5 2 ; / * 2 MeV p h o t o n s Ihnu=l * / EG [1] [0] = 0 . 2 0 9 1 ; EG [1] [1] = 0 . 9 4 7 1 ; EG [1] [2] = 1 . 6 3 6 ; w t G [ 1 ] [ 0 ] = 0 . 2 2 0 8 ; w t G [ 1 ] [ 1 ] = 0 . 3 7 6 2 ; w t G [ 1 ] [ 2 ] = 0 . 4 0 3 ; / * 4 MeV p h o t o n s E G [ 2 ] [ 0 ] = 0 . 4 6 0 3 ; w t G [ 2 ] [ 0 ] = 0 . 1 8 7 9 ; Ihnu=2 * / EG[2] [ 1 ]=2 .073 ; w t G [ 2 ] [ 1 ] = 0 . 3 5 1 9 ; EG[2] [2] = 3 . 5 0 3 ; w t G [ 2 ] [ 2 ] = 0 . 4 6 0 2 ; / * 6 MeV p h o t o n s E G [ 3 ] [ 0 ] = 0 . 7 1 7 ; w t G [ 3 ] [ 0 ] = 0 . 1 7 1 4 ; Ihnu=3 * / EG[3] [ 1 ]=3 .223 ; w t G [ 3 ] [ 1 ] = 0 . 3 3 5 2 ; EG [ 3 ] [ 2 ] = 5 . 3 9 3 ; wtG [ 3 ] [ 2 ] = 0 . 4 9 3 4 ; / * 10 MeV p h o t o n s Ihnu=4 * / E G [ 4 ] [ 0 ] = 1 . 2 3 7 ; E G [ 4 ] [ 1 ] = 5 . 5 5 2 ; E G [ 4 ] [ 2 ] = 9 . 2 0 5 ; w t G [ 4 ] [ 0 ] = 0 . 1 5 3 7 ; w t G [ 4 ] [ 1 ] = 0 . 3 1 3 3 ; w t G [ 4 ] [ 2 ] = 0 . 5 3 3 ; / * * / f s c a n f ( f i n , " °/.g 4/.g %g" ,&nx,feny ,fcnz) ; f p r i n t f ( f o u t w , " '/.g 4/.g */,g\n" , n x , n y , n z ) ; f p r i n t f ( f o u t a i r , " °/„g 4/,g °/,g\n" , n x , n y , n z ) ; f p r i n t f ( f C F , " 7.g °/.g 7 . 
g \ n " , n x , n y , n z ) ; f o r ( i = 0 ; i < = n x ; i + + ) •C 137 f s c a n f ( f i n , " 4 /.g" ,&x) ; f p r i n t f ( f o u t w , " °/.g", x ) ; f p r i n t f ( f o u t a i r , 1 1 7.g", x ) ; f p r i n t f ( f C F , " 7,g\x) ; } f p r i n t f ( f o u t w , " \ n " ) ; f p r i n t f ( f o u t a i r , " \ n " ) ; f p r i n t f ( f C F , " \ n " ) ; f o r ( i= 0 ; i<=ny; i++) •C f s c a n f ( f i n , " 7.g",&x); f p r i n t f ( f o u t w , " % g " , x ) ; f p r i n t f ( f o u t a i r , " °/,g" , x ) ; f p r i n t f ( f C F , " 7.g",x); > f p r i n t f ( f o u t w , " \ n " ) ; f p r i n t f ( f o u t a i r , " \ n " ) ; f p r i n t f ( f C F , " \ n " ) ; f o r ( i= 0 ; i<=nz; i++) { f s c a n f ( f i n , " 7 „g" ,&x) ; f p r i n t f ( f outw, " ' / .g " , x) ; f p r i n t f ( f o u t a i r , " 7,g" , x) ; f p r i n t f ( f C F , " ° / , g " , x ) ; } f p r i n t f ( f o u t w , " \ n " ) ; f p r i n t f ( f o u t a i r , " \ n " ) ; f p r i n t f ( f C F , " \ n " ) ; /* f o r ( i= 0 ; i<Nx; i++) f o r ( j=0 ; j<Ny;j++) f o r (k=0;k<Nz;k++) •C f s c a n f ( f i n , " ° / . g " , & y ) ; DDOSE [ i ] [ j ] [k ]=y; } f o r ( i= 0 ; i<Nx; i++) f o r ( j=0 ; j<Ny;j++) f o r (k=0;k<Nz;k++) 138 •c f scanf ( f i n , " °/,g",&dy); > */ f o r (i=0;i<Nx;i++) f o r (j=0;j<Ny;j++) f o r (k=0;k<Nz;k++) D0SE[i] [j] [k]=DOSEinh[i] [j] [k]=0.0; f o r (Z=zlphan;Z<z2phan;Z=Z+dZ) { intZ=Z/dZ; /* */ Ei=-dEo/2.0; for(IE=0;IE<=IEmax;IE++) •C Ei=Ei+dEo; dcs=dcsdEarr [IE] *dEo; /* */ /* for(IE=0;IE<3;IE++) { Ei=EG[Ihnu] [IE] ; dcs=CS[Ihnu]*wtG[Ihnu][IE]; */ /* */ Wtw=Constw*dcs; Wtair=Constair*dcs; Scolw=scolwC(Ei); S c o l a i r = s c o l a i r C ( E i ) ; f o r (dp=dZ;dp<=Z;dp=dp+dZ) •C ENw=energyw(dp,Ei); i f (ENw<Ecut) break; Mu=exp(-dwater*Muw[Ihnu]*(Z-dp)); sigma=sqrt(A2w(dp,Ei)); Scol=scolwC(ENw); f o r (Xc=xlphan;Xc<=x2phan;Xc=Xc+dX) { intXc=(Xc+xphan)/dX; 139 for (Yc=ylphan;Yc<=y2phan;Yc=Yc+dY) •C intYc=(Yc+yphan)/dY; DOSE[intXc][intYc][intZ]=D0SE[intXc][intYc][intZ] +Scol*Mu*erfs(Xc,Yc,sigma,Wtw); > } > C=Scolw*exp(-dwater*Muw[Ihnu]*Z)*Wtw; for (Xc=xlcut;Xc<=x2cut;Xc=Xc+dX) { intXc=(Xc+xphan)/dX; for (Yc=ylcut;Yc<=y2cut;Yc=Yc+dY) •C intYc=(Yc+yphan)/dY; DOSE[intXc][intYc][intZ] =D0SE[intXc][intYc] [intZ] +C; } } i f (Z<dslab) for (i=0;i<Nx;i++) for (j=0;j<Ny;j++) D0SEinh[i] [j] [intZ] =D0SE[i] [j] [intZ] ; else if(Z<dslab+gap) •C for (dp=dZ;dp<=Z;dp=dp+dZ) i f (dp<(Z-dslab)) { / * Electron originating in air */ ENair=energyair(dp,Ei); i f (ENair<Ecut) break; Mu=exp(-dair*Muair[Ihnu]*(Z-dslab-dp)-dwater*Muw[Ihnu]*dslab); sigma=sqrt(A2air(dp,Ei)); Scol=scolairC(ENair); for (Xc=xlphan;Xc<=x2phan;Xc=Xc+dX) •C 140 intXc=(Xc+xphan)/dX; for (Yc=ylphan;Yc<=y2phan;Yc=Yc+dY) •C intYc=(Yc+yphan)/dY; DOSEinh[intXc][intYc][intZ] =DOSEinh[intXc][intYc][intZ] +Scol*Mu*erfs(Xc,Yc,sigma,Wtair); } } } else •C / * Electron originating in water */ EN=energyw(dp-Z+dslab,Ei); i f (EN<Ecut) break; ENair=energyair(Z-dslab,EN); i f (ENair<Ecut) break; Mu=exp(-dwater*Muw[Ihnu]*(Z-dp)); sigma=sigmainh(dp,Ei); Scol=scolairC(ENair); for (Xc=xlphan;Xc<=x2phan;Xc=Xc+dX) i intXc=(Xc+xphan)/dX; for (Yc=ylphan;Yc<=y2phan;Yc=Yc+dY) { intYc=(Yc+yphan)/dY; DOSEinh[intXc][intYc][intZ]=D0SEinh[intXc][intYc][intZ] +Scol*Mu*erfs(Xc,Yc,sigma,Wtw)*lamdal; } } > A=Scolair*exp(-dair*Muair[Ihnu]*(Z-dslab)-dwater*Muw[Ihnu]*dslab)*Wtair; for (Xc=xlcut;Xc<=x2cut;Xc=Xc+dX) i intXc=(Xc+xphan)/dX; for (Yc=ylcut;Yc<=y2cut;Yc=Yc+dY) { intYc=(Yc+yphan)/dY; 141 DOSEinh[intXc] [intYc] [intZ] = DOSEinh[intXc] [intYc] [intZ]+A; } } } else { for 
(dp=dZ;dp<=Z;dp=dp+dZ) i f (dp<Z-dslab-gap) { / * Electron originating in waterll * / ENw=energyw(dp,Ei); i f (ENw<Ecut) break; Mu=exp(-dair*Muair[Ihnu]*(gap)-dwater*Muw[Ihnu]*(Z-gap-dp)); sigma=sqrt(A2w(dp,Ei)); Scol=scolwC(ENw); for (Xc=xlphan;Xc<=x2phan;Xc=Xc+dX) •C intXc=(Xc+xphan)/dX; for (Yc=ylphan;Yc<=y2phan;Yc=Yc+dY) •C intYc=(Yc+yphan)/dY; DOSEinh[intXc][intYc][intZ]=D0SEinh[intXc][intYc][intZ]+Scol*Mu*erfs(Xc,Yc.sign } } } else i f (dp<Z-dslab) •C / * Electron originating in air */ ENair=energyair(dp-Z+dslab+gap,Ei); ENw=energyw(Z-dslab-gap,ENair); i f (ENw<Ecut) break; Mu=exp(-dair*Muair[Ihnu]*(Z-dslab-dp) -dwater*Muw[Ihnu]*dslab); sigma=sigmainh(dp,Ei); Scol=scolwC(ENw); 142 for (Xc=xlphan;Xc<=x2phan;Xc=Xc+dX) •C intXc=(Xc+xphan)/dX; for (Yc=ylphan;Yc<=y2phan;Yc=Yc+dY) •C intYc=(Yc+yphan)/dY; DOSEinhEintXc][intYc][intZ]=D0SEinh[intXc][intYc][intZ] +Scol*Mu*erfs(Xc,Yc,sigma,Wtair); } } } else •C / * Electron originating in water I */ EN=energyw(dp-Z+dslab,Ei); i f (EN<Ecut) break; ENair=energyair(gap,EN); i f (ENair<Ecut) break; ENw=energyw(Z-dslab-gap,ENair); i f (ENw<Ecut) break; Mu=exp(-dwater*Muw[Ihnu]*(Z-dp)); sigma=sigmainh(dp,Ei); Scol=scolwC(ENw); f or (Xc=xlphan;Xc<=x2phan;Xc=Xc+dX) { intXc=(Xc+xphan)/dX; for (Yc=ylphan;Yc<=y2phan;Yc=Yc+dY) { intYc=(Yc+yphan)/dY; DOSEinh[intXc][intYc][intZ]=D0SEinh[intXc][intYc][intZ] +Scol*Mu*erfs(Xc,Yc,sigma,Wtw)*lamda2; } } } A=Scolw*exp(-dair*Muair[Ihnu]*(gap)-dwater*Muw[Ihnu]*(Z-gap))*Wtw; for (Xc=xlcut;Xc<=x2cut;Xc=Xc+dX) 143 intXc=(Xc+xphan)/dX; for (Yc=ylcut;Yc<=y2cut;Yc=Yc+dY) •C intYc=(Yc+yphan)/dY; DOSEinhCintXc] [intYc] [intZ] = DOSEinh[intXc][intYc][intZ]+A; > } } } } / * OUTPUT * / for (k=0;k<Nz;k++) for (j=0;j<Ny;j++) for (i=0;i<Nx;i++) { fprintf(foutw," 7,g",D0SE[i] [j] [k]/pfluence); i f <<i % 5)==0) fprintf(foutw,"\n"); fprintf(foutair," 7.g" ,D0SEinh[i] [j] [k]/pf luence); i f ((i '/. 
5)==0) fprintf(foutair,"\n"); fprintf (fCF," c/,g", (DOSEinh [i] [j] [k] +le-30)/ (D0SE[i] [j] [k]+le-30)); i f ((i I 5)==0) fprintf (fCF, , ,\n"); } /* return EXIT.SUCCESS; } FUNCTIONS *****************************/ int intenergy (float e) { int i , j ; for (i=0;i<data;i++) i f (e>=E[i] && e<E[i+l]) return j ; } 144 float dcsdE (float e) •C float hn,tl,t2,t3; hn = hnu[Ihnu]; t l = e/(alpha*(hn-e)); t2 = t l * t l ; t3 = e*e/(hn*(hn-e)); return (3/8.0)*(cso/(alpha*hn))*(2-2*tl+t2+t3); > float dcsdEspt (float e,float hn) •C float t l , t2,t3,alp; alp = hn/0.511; t l = e/(alp*(hn-e)); t2 = t l * t l ; t3 = e*e/(hn*(hn-e)); return (3/8.0)*(cso/(alp*hn))*(2-2*tl+t2+t3); } float Rcsda (float eo) •C int i , j ; float sum.de; sum=0; j=intenergy(eo); de=E[0] ; for (i=0;i<=j;i++) { sum=sum+de/((scolw[i]+sradw[i])*dwater); de=E[i+l]-E[i] ; } return sum; } / ^ ^ ^ ^ ^ ^ ^ ^ i K ^ ^ ^ ^ ^ * * ^ ^ ^ * ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * / float energyair (float z, float eo) { int j ; float zeff,i,dz,sum,Rp,e; 145 sum=0; dz=z/div; j=intenergy(eo); for (i=0;i<z;i=i+dz) sum=sum+dz*(scolair[j]+sradair[j])*dair/((scolw[j]+sradw[j])*dwater); zeff=sum; Rp=-0.11+0.505*eo-0.0003*eo*eo; i f (eo<l) Rp=Rcsda(eo); I e=eo*(l-zeff/Rp); return e; > float energyw (float z, float eo) •C float Rp,e; Rp=-0.11+0.505*eo-0.0003*eo*eo; i f (eo<l) Rp=Rcsda(eo); e=eo*(l-z/Rp); return e; } float Aow (float z, float eo) •C int j ; float i,sum,dz,e; sum=0; dz=z/div; for (i=0;i<z;i=i+dz) •C e=energyw(i,eo); sum=sum+dz*Aw*pow(e,-Bw); } return sum; } float Alw (float z, float eo) •C int j ; 146 float i,sum,dz,e; sum=0; dz=z/div; for (i=0;i<z;i=i+dz) •C e=energyw(i,eo); sum=sum+dz*Aw*pow(e,-Bw)*(z-i); } return sum; > float A2w (float z, float eo) •C int j ; float i,sum,dz,e; sum=0; dz=z/div; for (i=0;i<z;i=i+dz) •C e=energyw(i,eo); sum=sum+dz*Aw*pow(e,-Bw)*(z-i)*(z-i); } return sum; } float Aoair (float z, float eo) •C int j ; float i,sum,dz,e; sum=0; dz=z/div; for (i=0;i<z;i=i+dz) •C e=energyair(i,eo); sum=sum+dz*Aair*pow(e,-Bair); > return sum; } float Alair (float z, float eo) 147 •c int j ; float i,sum,dz,e; sum=0; dz=z/div; for (i=0;i<z;i=i+dz) •C e=energyair(i,eo); sum=sum+dz*Aair*pow(e, -Bair) * (z-i); > return sum; } float A2air (float z, float eo) •C int j ; float i,sum,dz,e; sum=0; dz=z/div; for (i=0;i<z;i=i+dz) { e=energyair(i,eo); sum=sum+dz*Aair*pow(e,-Bair)*(z-i)*(z-i); } return sum; } /*******************************************^ float sigmainh ( float dpt, float eo) •C float t1,t2,t3,g,ewl,ew2,eair,A2,K,L; tl=dpt+dslab-Z; ewl = energyw(tl.eo); i f (Z>=dslab && Z<(dslab+gap)) { t2=Z-dslab; A2=Aow(tl,eo)*t2*t2+Alw(tl,eo)*2*t2+A2w(tl,eo)+A2air(t2,ewl); > i f (Z>=dslab+gap) •C t3=Z-dslab-gap; 148 i f (dpt<(t3+gap)) •C t2=dpt-t3; eair=energyair(t2, eo); A2=Aoair(t2,eo)^3*t3+Alair(t2,eo)*2*t3+A2air(t2,eo)+A2w(t3,ewl); } else •C t2=gap; eair= energyair(gap.ewl); K = Aow(tl,eo)*(t2+t3)*(t2+t3)+Aoair(t2,ewl)*t3*t3; L = Alw(tl,eo)*2*(t2+t3) + Alair(t2,ewl)*2*t3; A2=K+L+ A2w(tl,eo) + A2air(t2,ewl) + A2w(t3,eair); > } return sqrt(A2); } **********************^ float erfs(float x,float y, float sigma, float wt) •C double Ixgauss.Iygauss; Ixgauss=2-erfcc((-xlcut+x)/sigma)-erfcc((x2cut-x)/sigma); Iygauss=2-erfcc((-ylcut+y)/sigma)-erfcc((y2cut-y)/sigma); return 0.25*wt*Ixgauss*Iygauss; } /******************************************^ float erfcc(float x) •C float t.z.ans; z=fabs(x); t=l.0/(1.0+0.5*z); 
ans=t*exp(-z*z-l.26551223+t*(1.00002368+t*(0.37409196+t*(0.09678418+ t*(-0.18628806+t*(0.27886807+t*(-1.13520398+t*(1.48851587+ t*(-0.82215223+t*0.17087277))))))))); return x >= 0.0 ? ans : 2.0-ans; } float exps(float x,float y, float sigma,float wt) •C 149 float I , i , j ,dx,dy; 1=0.0; dx=dy=0.01; for (i=xlcut+dx/2.0;i<=x2cut-dx/2.0+eps;i=i+dx) for (j=ylcut+dy/2.0;j<=y2cut-dy/2.0+eps;j=j+dy) I=I+exp(-((x-i)*(x-i)+(y-j)*(y-j))/(sigma*sigma))*dx*dy*wt; return I/(twopi*sigma*sigma/2.0); } /*********************************^ float A2airC (float z,float eo) •C int j ; float i,dz,zeff,Rp,e,sig2,tl,t2,t3,t4,sum,Zp,dZp; sum=0; dZp=z/div; for (Zp=0;Zp<z;Zp=Zp+dZp) sum=sum+pow(energyair(Zp,eo),-Bair)*(z-Zp)*(z-Zp)*dZp; return Aair*sum; } float A2wC (float z, float eo) •C float Rp,e,sig2,tl,t2,t3,t4,sum,Zp,dZp; i f (eo>=l) Rp=-0.11+0.505*eo-0.0003*eo*eo; else Rp=Rcsda(eo); e=eo*(l-z/Rp); sum=0; dZp=0.01; for (Zp=0;Zp<=z;Zp=Zp+dZp) sum=sum+pow(1-Zp/Rp,-Bw)*(z-Zp)*(z-Zp)*dZp; return Aw*pow(eo,-Bw)*sum; } float scolairC (float e) •C int i ; float a,b,y; 150 for (i=0;i<data;i++) i f (e>E[i] && e<=E[i+l]) i a=(scolair[i]-scolair[i+1])/(E[i]-E[i+1]); b=scolair[i]-a*E[i]; return a*e+b; } > float scolwC (float e) { int i ; float a,b,y; for (i=0;i<data;i++) i f (e>E[i] && e<=E[i+l]) •C a=(scolw [i]-scolw[i+1])/(E[i]-E[i+1]); b=scolw[i]-a*E[i] ; return a*e+b; } } 151 Appendix C The GEANT Monte Carlo usercode DOSE.f ************************************* This program simulates the dose distribution in a water tank. Instruction to run the program: 'nohup rundose < dose.input' Put originally by: D. Axen In present form by: B. Shahine ********************************************************************** 152 program dose * PARAMETER (NMEM0R=6060000) COMMON/PAWC/H(NMEMOR) COMMON/GCBANK/Q(8000000) * * CALL TIMEST(1E6) * CALL GZEBRA(8000000) CALL HLIMIT(-NMEMOR) cal l hplint(O) * Geant initialization CALL UGINIT * Test printings. * CALL GPRINT('PART',0) CALL GPRINT('MATE',0) CALL GPRINTCTMED',0) CALL GPRINTCVOLU',0) * Start events processing CALL GRUN * End of run CALL UGLAST * END C SUBROUTINE GUHADR C. Q. ************************************** c. * * C. * User routine to generate one hadronic interaction * C. * * C. * ==>Called by : GTHADR.GTNEUT * C. * * C. ****************************************************************** c. CALL GHEISH 153 END CDECK ID>, GUKINE. SUBROUTINE GUKINE * ******************************************** * * * GEANT3 user routine to generate Kinematics * * for primary tracks * * * ************************************************************************ * COMMON/GCFLAG/IDEBUG,IDEMIN,IDEMAX,ITEST,IDRUN,IDEVT,IEORUN + ,IE0TRI,IEVENT,ISWIT(10),IFINIT(20),NEVENT,NRNDM(2) C COMMON/GCKINE/IKINE,PKINE(10),ITRA,ISTAK,IVERT,IPART,ITRTYP + ,NAPART(5).AMASS,CHARGE,TLIFE,VERT(3),PVERT(4),IPAOLD C common/flags/flword(20) common/even/nev,ide,ipoint common/source/einc,bin,enflag Common/runpar/iaxis,matcav,igeom common/posit/xpos,ypos,zpos,xcut,ycut common/tank/xtank,ytank,ztank common/phan/iphan,ilung,zphan,rotang common/cavi/cavpos,xcav,ycav,zcav common/tumor/itumor,rtumor,xtumor,ytumor,ztumor,tratio DIMENSION VERTEX(6),PLAB(3),RNDM(6) * C reset the flagwords cal l vzero(flword,20) C set the incident particle type IK=IKINE C calculate the incident gamma energy if(enflag.eq. 0) then egamma=einc else if(einc.eq. 0.006) then cal l mohan6(egamma) else if(einc.eq. 0.004) then cal l mohan4(egamma) else if(einc.eq. 0.010) then 154 cal l mohanlO(egamma) else if(einc.eq. 
0.015) then cal l mohanl5(egamma) else if(einc.eq. 0.018) then ca l l wagl8(egamma) else if(einc.eq. 0.024) then call mohan24(egamma) end i f cal l hfilKl,egamma,0. ,1.) CALL VZERO(VERTEX,6) C determine i f point source or parallel or pencil source C ipoint=l —> point source chosen if(ipoint.eq.l) then vertex(1) = xpos vertex(2) = ypos vertex(3) = -l.*(zpos - cavpos - zcav) thmax = atan(sqrt( xcut**2 + ycut**2)/zpos) 29 continue CALL GRNDM(RNDM,2) ar = 1. - cos(thmax) theta = acos(l. - ar*rndm(l)) trakln = zpos/cos(theta) phi = rndm(2)*2.*3.14159 x = trakln*sin(theta)*cos(phi) if(abs(x).gt.xcut) go to 29 y = trakln*sin(theta)*sin(phi) if(abs(y).gt.ycut) go to 29 z = zpos flen = sqrt( x**2 + y**2 + z**2 ) end i f if(ipoint.eq.2) then CALL GRNDM(RNDM,2) x=2*xcut*(rndm(l)-O.5) y=2*ycut*(rndm(2)-0.5) z = zpos flen = z vertex(1) = x vertex(2) = y vertex(3) = -l.*(zpos - cavpos - zcav) 155 x=0 y=o end i f if(ipoint.eq.3) then x = xpos y = ypos z = zpos flen = z vertex(1) = x vertex(2) = y vertex(3) = -l.*(zpos - cavpos - zcav) end i f * PLAB(l) = egamma*x/flen PLAB(2) = egamma*y/flen PLAB(3) = egamma*z/flen * CALL GSVERT(VERTEX,0,0,0,0,NVERT) CALL GSKINE(PLAB,IK,NVERT,0,0,NT) * * Kinematic debug (controled by ISWIT(l)) * IF(IDEBUG.EQ.1.AND.ISWIT(1).EQ.l) THEN CALL GPRINT('VERT',0) CALL GPRINTCKINE',0) ENDIF * END CDECK ID>, GUOUT. SUBROUTINE GUOUT * ************************************ * * * GEANT3 user routine called at the end of each event. * * * ************************************************************************ * * COMMON/GCUNIT/LIN,LOUT,NUNITS,LUNITS(5) INTEGER LIN,LOUT,NUNITS,LUNITS 156 COMMON/GCMAIL/CHMAIL CHARACTER*132 CHMAIL C COMMON/GCFLAG/IDEBUG,IDEMIN,IDEMAX,ITEST,IDRUN,IDEVT,IEORUN + ,IEOTRI,IEVENT,ISWIT(10),IFINIT(20),NEVENT,NRNDM(2) C C COMMON/GCKINE/IKINE,PKINE(10),ITRA,ISTAK,IVERT,IPART,ITRTYP + ,NAPART(5).AMASS,CHARGE,TLIFE,VERT(3),PVERT(4),IPAOLD C * common/flags/f1word(20) common/runpar/iaxis.matcav,igeom common/posit/xpos,ypos,zpos,xcut,ycut common/phan/iphan,ilung,zphan,rotang dimension rndm(4) * Logic for displaying the event IF(IDEBUG.EQ.l) THEN IF(ISWIT(2).EQ.l) CALL GPRINT('JXYZ', 0) IF(ISWIT(3).EQ.l) THEN CALL GDSHOW(l) CALL GDXYZ (0) * print * , ' print any key to show event' * read (5,'(al)') CALL ICLRWK(O.O) ENDIF ENDIF 40 continue 30 continue * end of display logic END * CDECK ID>, GUPHAD. SUBROUTINE GUPHAD C. Q _ * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * c. * * C. * User routine to compute Hadron. inter, probabilities * C. * * C. * ==>Called by : GTHADR.GTNEUT * 157 c. c c. c. ********************************************* c. COMMON/GCPHYS/IPAIR,SPAIR,SLPAIR,ZINTPA,STEPPA + ,ICOMP,SCOMP,SLCOMP,ZINTCO,STEPCO + ,IPHOT,SPHOT,SLPHOT,ZINTPH,STEPPH + ,IPFIS,SPFIS,SLPFIS,ZINTPF,STEPPF + ,IDRAY,SDRAY,SLDRAY,ZINTDR,STEPDR + ,IANNI,SANNI,SLANNI,ZINTAN,STEPAN + ,IBREM,SBREM,SLBREM,ZINTBR,STEPBR + ,IHADR,SHADR,SLHADR,ZINTHA,STEPHA + ,IMUNU,SMUNU,SLMUNU,ZINTMU,STEPMU + ,IDCAY,SDCAY,SLIFE .SUMLIF.DPHYSl + ,ILOSS,SLOSS,SOLOSS,STLOSS,DPHYS2 + ,IMULS,SMULS,S0MULS,STMULS,DPHYS3 + ,IRAYL,SRAYL,SLRAYL,ZINTRA,STEPRA COMMON/GCPHLT/ILABS,SLABS,SLLABS,ZINTLA,STEPLA + ,ISYNC + ,ISTRA * C IF (IHADR.NE.4) THEN CALL GPGHEI ELSE CALL FLDIST ENDIF END CDECK ID>, GUSTEP. 
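*
*     GUSTEP (below) is called by GEANT at the end of every tracking
*     step.  Secondaries of particle type 1-4 produced in the step are
*     pushed onto the stack with GSKING, and, whenever the step
*     position VECT lies inside the water-tank half lengths
*     (xtank,ytank,ztank), the energy deposited in the step, DESTEP,
*     scaled by 1000 (GeV to MeV, presumably), is accumulated in the
*     3-D grid xyzdose(i,j,k) using the bin factors fbinx, fbiny and
*     fbinz set in UGINIT.
*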
SUBROUTINE GUSTEP * ************************************************************************ * * * GEANT3 user routine called at the end of each tracking step * * * ************************************************************************ * PARAMETER (KWBANK=69000,KWW0RK=5200) 158 COMMON/GCBANK/NZEBRA,GVERSN,ZVERSN,IXSTOR,IXDIV,IXCONS,FENDQ(16) + ,LMAIN,LRI,WS(KWBANK) DIMENSION IQ(2),Q(2),LQ(8000),IWS(2) EQUIVALENCE (Q(l) ,IQ(1) ,LQ(9)), (LQ(1) ,LMAIN) , (IWS(l) ,WS(D) EQUIVALENCE (JCG.JGSTAT) COMMON/GCLINK/JDIGI ,JDRAW ,JHEAD .JHITS ,JKINE ,JMATE ,JPART + ,JROTM ,JRUNG ,JSET ,JSTAK ,JGSTAT,JTMED ,JTRACK,JVERTX + .JVOLUM.JXYZ ,JGPAR ,JGPAR2,JSKLT C COMMON/GCTMED/NUMED,NATMED(5),ISVOL,IFIELD,FIELDM,TMAXFD,STEMAX + ,DEEMAX,EPSIL,STMIN,CFIELD,PREC,IUPD,ISTPAR,NUMOLD COMMON/GCTLIT/THRIND,PMIN,DP,DNDL,JMIN,ITCKOV,IMCKOV,NPCKOV C INTEGER MXGKIN PARAMETER (MXGKIN=100) COMMON/GCKING/KCASE,NGKINE,GKIN(5,MXGKIN), + TOFD(MXGKIN),IFLGK(MXGKIN) INTEGER KCASE,NGKINE ,IFLGK,MXPHOT,NGPHOT REAL GKIN,TOFD,XPHOT C PARAMETER (MXPH0T=800) C0MM0N/GCKIN2/NGPH0T,XPHOT(11,MXPHOT) C C0MM0N/GCKIN3/GP0S(3,MXGKIN) REAL GPOS C COMMON/GCFLAG/IDEBUG,IDEMIN,IDEMAX,ITEST,IDRUN,IDEVT,IEORUN + ,IEOTRI,IEVENT,ISWIT(10),IFINIT(20),NEVENT,NRNDM(2) C COMMON/GCKINE/IKINE,PKINE(10),ITRA,ISTAK,IVERT,IPART,ITRTYP + ,NAPART(5),AMASS,CHARGE,TLIFE,VERT(3),PVERT(4),IPAOLD C PARAMETER (MAXMEC=30) C0MM0N/GCTRAK/VECT(7),GET0T,GEKIN,V0UT(7),NMEC,LMEC(MAXMEC) + ,NAMEC(MAXMEC),NSTEP ,MAXNST,DESTEP,DESTEL,SAFETY,SLENG + ,STEP ,SNEXT ,SFIELD,TOFG ,GEKRAT,UPWGHT,IGNEXT,INWVOL + .ISTOP ,IGAUTO,IEKBIN, ILOSL, IMULL,INGOTO,NLDOWN,NLEVIN + ,NLVSAV,ISTORY PARAMETER (MAXME1=30) C0MM0N/GCTP0L/P0LAR(3), NAMECl(MAXMEl) 159 c COMMON/GCVOLU/NLEVEL,NAMES(15),NUMBER(15), +LV0LUM(15),LINDEX(15),INFROM,NLEVMX,NLDEV(15),LINMX(15), +GTRAN(3,15),GRMAT(10,15),G0NLY(15),GLX(3) character*4 names * c ommon/flags/flword(20) common/phan/iphan,ilung,zphan,rotang common/slice/slipos,slithk,nx,nz common/tank/xtank,ytank,ztank common/posit/xpos,ypos,zpos,xcut,ycut common/cavi/cavpos,xcav,ycav,zcav common/even/nev,ide,ipoint * 3Dhistogram common/xyzv/nbx,nby,nbz,fbinx,fbiny,fbinz common/xyzdos/xyzdose(90,90,100) common/count/counter * Something generated ? IF(NGKINE.GT.O) THEN DO 5 I=1,NGKINE ITYPA = GKIN(5,I) IF(ITYPA.LE.4) CALL GSKING(I) 5 CONTINUE * histogram the number of mechanism at this step ENDIF * is the tracking medium water tank? * 3D energy grid i f (abs(vect(D) .le.xtank) then if(abs(vect(2)).le.ytank) then if(abs(vect(3)).le.ztank) then wt = 1000.*destep i=INT((vect(l)+xtank)*fbinx+1) j =INT((vect(2)+ytank)*fbiny+1) k=INT((vect(3)+ztank)*fbinz+1) xyzdosed, j ,k)=xyzdose(i, j ,k)+wt end i f end i f end i f * Debug/plot event 160 CALL GDEBUG * END CDECK ID>, GUTREV. SUBROUTINE GUTREV * **************************************** * * * GEANT3 user routine to control tracking of one event * * * * Called by GRUN * * * ************************************************************************ * COMMON/GCFLAG/IDEBUG,IDEMIN,IDEMAX,ITEST,IDRUN,IDEVT,IEORUN + ,IE0TRI,IEVENT,ISWIT(10),IFINIT(20),NEVENT,NRNDM(2) COMMON/GCFLAX/BATCH, NOLOG LOGICAL BATCH, NOLOG * character*1,input CALL GTREVE * Debug and plot tracks. * * IF(IDEBUG.EQ.l) THEN * if(ninterl.gt.0) then * IF(ISWIT(2).EQ.l) CALL GPRINT('JXYZ', 0) * IF(ISWIT(3).EQ.l) THEN * CALL GDSHOW(l) * CALL GDXYZ (0) * print * , ' print any key to show event' * read (5,'(al)') * CALL ICLRWK(O.O) * ENDIF * end i f * ENDIF * END CDECK ID>, UFILES. 
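*
*     UFILES (below) opens the files used by the run: the formatted
*     card file dose.dat on unit 4, the PostScript metafile dose.ps on
*     unit 20, the text output dose.out on unit 6, and the HBOOK
*     direct-access file dose.hbook via HROPEN.
*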
SUBROUTINE UFILES ************************************************************************ 161 * to open fi les * ********************************************* * To open FFREAD and HBOOK files * CHARACTER*(*) FILNAM, FSTAT dimension r(2) PARAMETER (FILNAM='dose.dat') * PARAMETER (FSTAT='OLD') OPEN(UNIT=4,FILE=FILNAM,STATUS=FSTAT, + F0RM='FORMATTED') * * open postscript f i l e open(unit = 20,file='dose.ps',form='formatted',status='unknown') cal l igmeta(20,-112) cal l igrng(20.,15.) * open output data f i le open(unit=6,File='dose.out',form='formatted',status='unknown') * Open a HBOOK direct access f i le * CALL HR0PEN(34,'HBOOK','dose.hbook','N',1024,ISTAT) END SUBROUTINE UGEOM * ************************************************************************ * * * Routine to define the geometry of the set-up. * * * ************************************************************************ C COMMON/GCKINE/IKINE,PKINE(10),ITRA,ISTAK,IVERT,IPART,ITRTYP + ,NAPART(5).AMASS,CHARGE,TLIFE,VERT(3),PVERT(4),IPAOLD C C COMMON/GCFLAG/IDEBUG,IDEMIN,IDEMAX,ITEST,IDRUN,IDEVT,IEORUN + ,IE0TRI,IEVENT,ISWIT(10),IFINIT(20),NEVENT,NRNDM(2) COMMON/GCFLAX/BATCH, NOLOG 162 LOGICAL BATCH, NOLOG C common/runpar/iaxis.matcav,igeom common/posit/xpos,ypos,zpos,xcut,ycut common/phan/iphan,ilung,zphan,rotang common/tank/xtank,ytank,ztank common/cavi/cavpos,xcav,ycav,zcav common/tmedia/fieldp,ifielp,tmaxfp,dmaxmp,deemap,epsip,stmip character*1,input * material paramters * water paramters dimension awater(2),zwater(2),wwater(2) * muscle paramters dimension amusc(lO),zmusc(10),wmusc(10) * fat paramters dimension afat(5) , zfat(5),wfat(5) * bone paramters dimension abone(8),zbone(8),wbone(8) * polystrene parmaters dimension apoly(2),zpoly(2),wpoly(2) * parmaters for physical dimensions of objects dimension areapr(3),cstlpr(3) dimension phanpr(3) * water phantom par dimension watepr(3) * the back bone paramters dimension bonepr(3) * the water tank parameters dimension slabpr(3) * the davity paramters dimension cavlpr(3) * * basic parameters data awater/1.01, 16./ data zwater/1., 8. / data wwater/2.,1./ c * muscle see Johns page 720 * data amusc/1.008,12.Oil,14.007,16.994,22.99,24.305,30.974, 163 + 32.066,39.098,40.078/ data zmusc/1.,6.,7.,8.,11.,12.,15.,16.,19.,20./ data wmusc/.102,.123,.035,.7289,.0008,.0002,.002, + .005,.003,.00007/ * data afat/1.008,12.Oil,14.007,16.994,32.066/ data zfat/1. ,6. ,7. ,8. ,16./ data wfat/.112,.5732,.011,.3031,.00006/ * data abone/1.008,12.Oil,14.007,16.994,24.305,30.974, + 32.066,40.078/ data zbone/1.,6.,7.,8.,12.,15.,16.,20./ data wbone/.064,.278,.027,.410,.002,.070,.002,.147/ * data apoly/1.008,12.001/ data zpoly/1.,6./ data wpoly/.0774,.9226/ * * vacum paramters VAC = 10**(-16) RVAC = 10**16 * scattering area parameters data areapr/ 40., 40., 40./ data cstlpr/ 25., 20., .625 / data phanpr/ 15., 11.,15./ data watepr/ 0.,10.7,15./ data bonepr/O.,1.25,15./ data cavlpr/2.5,2.5,15./ * the slab of water data slabpr/15.,20.,.5/ * * point source geometry areapr(1) = xtank areapr(2) = ytank areapr(3) = zpos+ztank slabpr(l) = xtank slabpr(2) = ytank slabpr(3) = ztank cavlpr(l) = xcav 164 * BUGS cavlpr(2) = ycav cavlpr(2) = zcav * BUGS cavlpr(3) = zcav cavlpr(3) = ycav defines materials cal l gsmate( 1,'air $' cal l gsmate( 2,'vacum $' cal l gsmate( 3,'aluminum$', 26.98, 13. 
cal l gsmate( 4,'lead $', 207.19, 82 14.6,7.3,0.0012,30423.0,67500.0,0,0) VAC,VAC,VAC,RVAC,0.,0,0) 2.70, 8.9, 0., 0,0) 11.35, .56, 0., 0, 0) cal l gsmixt( 6,'water cal l gsmixt( 7,'muscle cal l gsmixt( 8,'bone cal l gsmixt( 9,'fat $' $' $> $' awater,zwater,l.00,-2,wwater) amuse,zmusc,1.04,10,wmusc) abone,zbone,l.65,8,wbone) afat ,zfat ,.916,5,wfat ) cal l gsmixt(11,'polystr $', apoly,zpoly,1.044,2,wpoly) defines user tracking media parameters fieldm = filedp i f ie ld = if ielp tmaxfd = tmaxfp dmaxms = dmaxmp deemax = deemap epsil = epsip stmin = stmip defines tracking media parameters. cal l gstmed( 1, 'air $ ' , 1, 1, i f ie ld * dmaxms, deemax, epsil , stmin, 0, 0) cal l gstmed( 2, 'vacum $ ' , 2, 0, 0, 0., 0., 500., 0., * 500., 500., 0, 0) cal l gstmed( 3, 'aluminum $ ' , 3, 1, i f ie ld , fieldm,tmaxfd, fieldm,tmaxfd, * dmaxms, deemax, epsil, stmin, 0, 0) cal l gstmed( 4, 'lead $ 4, 1, i f ie ld , fieldm,tmaxfd, * dmaxms, deemax, epsil , stmin, 0, 0) cal l gstmed( 6, 'water $ 6, 1, i f ie ld , fieldm,tmaxfd, * dmaxms, deemax, epsil , stmin, 0, 0) cal l gstmed( 7, 'muse $ 7, 1, i f ie ld , fieldm,tmaxfd, * dmaxms, deemax, epsil, stmin, 0, 0) cal l gstmed( 8, 'bone $ \ 8, 1, i f ie ld , fieldm,tmaxfd, * dmaxms, deemax, epsil, stmin, 0, 0) cal l gstmed( 9, 'fat $ 9, 1, i f ie ld , fie1dm,tmaxfd, * dmaxms, deemax, epsil , stmin, 0, 0) 165 c a l l gstmeddl, 'polystr $ ' , 11, 1, i f i e l d , fieldm.tmaxfd * dmaxrns, deemax, e p s i l , stmin, 0, 0) * define the ro t a t i o n matrix f o r the phantom xang = 90. + rotang yang = 180. + rotang * c a l l gsrotm(100,xang,90.,yang,270.,90.,0.) c a l l gsrotm(100,xang,0.,yang,0.,90.,90.) * defines geometry of the set-up * define the o v e r a l l area c a l l gsvoluCarea' , 'BOX ' , 2, areapr, 3, ivolu) * Cavity i n the water tank c a l l g s v o l u C c a v l ' , 'BOX ',matcav, cavlpr ,3, ivolu) * set defaults f o r igeom=l zphan = 0. x c o l l = 0. x c s t l l = 0. xcol2 = 32.5 - 1.6 xcstl2 = 32.5 + .625 zcol2 = zphan zest12 = zphan * the water slab if(iphan.eq . 5 ) then * define the ro t a t i o n angle f o r the water slab xang = 90. + rotang zang = rotang c a l l gsrotm(102,xang,0.,90.,90.,rotang,0.) c a l l g s voluCslab' , 'BOX ' , 6, slabpr, 3, ivolu) * p o s i t i o n the water slab c a l l gspos('slab' , 1 ,'area', 0., 0., 0., 102, 'ONLY') c a l l gsposCcavl' , 1 ,'slab' , 0., O.,cavpos, 100, 'ONLY') end i f * define geometry optimization c a l l gsord('area' ,3) * close geometry banks, mandatory system routine, c a l l ggclos * if(iswit ( 3 ).eq. 1) then * c a l l gdrawcCcoll' , 3 , 0 .0 ,10 . ,10. ,15.00,15.) * p r i n t * , ' p r i n t any key to continue' * read (5, ' ( a l ) ' ) 166 * cal l iclrwk(O.O) fmag = .15 cal l gdrawOarea' ,135. ,135. ,0. , 10. ,10.,fmag,fmag) cal l iclrwk(O.O) * * plot a section cal l gdrawcCarea' ,1,0.0,10. ,10., .200, .2) cal l iclrwk(O.O) * plot a section cal l gdrawcCarea' ,2,0.0,10. ,10., .200, .2) * print *,'print any key to continue' * read (5, '(al)') cal l iclrwk(O.O) end i f END CDECK ID>, UGINIT. 
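*
*     UGINIT (below) performs the one-time initialization: it opens the
*     user files (UFILES), reads the run-parameter cards (readin),
*     initializes GEANT and its data structures (GINIT, GFFGO, GZINIT),
*     sets up the geometry, particle tables and physics (UGEOM, GPART,
*     GPHYSI), books the histograms (UHINIT), zeroes the 3-D dose array
*     xyzdose and computes the grid scale factors fbinx, fbiny, fbinz.
*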
SUBROUTINE UGINIT * ******************************************** * To init ial ise GEANT3 program and read data cards * * * ************************************************************************ * PARAMETER (KWBANK=69000,KWW0RK=5200) COMMON/GCBANK/NZEBRA,GVERSN,ZVERSN,IXSTOR,IXDIV,IXCONS,FENDQ(16) + ,LMAIN,LR1,WS(KWBANK) DIMENSION IQ(2),Q(2),LQ(8000),IWS(2) EQUIVALENCE (Q(l) ,IQ(1) ,LQ(9)) , (LQ(1) ,LMAIN) , (IWS(l) ,WS(D) EQUIVALENCE (JCG,JGSTAT) COMMON/GCLINK/JDIGI ,JDRAW ,JHEAD ,JHITS ,JKINE ,JMATE ,JPART + ,JROTM ,JRUNG ,JSET ,JSTAK ,JGSTAT,JTMED ,JTRACK,JVERTX + ,JVOLUM,JXYZ ,JGPAR ,JGPAR2,JSKLT C COMMON/GCFLAG/IDEBUG,IDEMIN,IDEMAX,ITEST,IDRUN,IDEVT,IEORUN + ,IE0TRI,IEVENT,ISWIT(10),IFINIT(20),NEVENT,NRNDM(2) COMMON/GCFLAX/BATCH, NOLOG LOGICAL BATCH, NOLOG C COMMON/GCUNIT/LIN,LOUT,NUNITS,LUNITS(5) 167 INTEGER LIN,LOUT,NUNITS,LUNITS COMMON/GCMAIL/CHMAIL CHARACTER*132 CHMAIL common/even/nev,ide,ipoint common/tank/xtank,ytank,ztank common/xyzv/nbx,nby,nbz,fbinx,fbiny,fbinz common/xyzdos/xyzdose(90,90,100) common/count/counter Open user fi les CALL UFILES read the run parmaters from f i le 1 cal l readin Initialize GEANT CALL GINIT Prints version number WRITE(LOUT,1000) Define a data card 'GAST' to change gas type CALL GFFGO number of events and number of events to debug nevent = nev idemax = ide itest = nev/10 Initialize GEANT/ZBOOK data structures CALL GZINIT Initialize drawing package IF(ISWIT(7).EQ.0)THEN CALL GDINIT ENDIF Geometry and materials description. CALL UGEOM Particle table definition and energy loss init ial ization. CALL GPART 168 CALL GPHYSI * Create a view bank IF(ISWIT(7).EQ.O)CALL VIEWYZ(l) CALL UHINIT * 1000 FORMAT(/,' DOSE VERSION 2.00.00 ( 02 December 1999 ) ',/) * •Init ial izing arrays do 97 i = l.nbx do 98 j = l.nby do 99 k = l.nbz xyzdosed, j ,k)=0. 99 continue 98 continue 97 continue counter=0 * fbinx=nbx/(2*xtank) fbiny=nby/(2*ytank) fbinz=nbz/(2*ztank) * END CDECK ID>, UGLAST. SUBROUTINE UGLAST * ********************************************* * Termination routine to print histograms and statistics * ************************************************************************ C * common/source/einc,bin,enflag common/phan/iphan,ilung,zphan,rotang common/tank/xtank,ytank,ztank common/voxel/dx,dy,dz common/voxell/dxl,dy1,dz1 common/voxel2/dx2,dy2,dz2 common/voxel3/dx3,dy3,dz3 169 common/slice/slipos,slithk,nx,nz common/xyzv/nbx,nby,nbz common/xyzdos/xyzdose(90,90,100) * CALL GLAST C * Print HBOOK histograms CALL HPRINT(O) C Save histograms cal l hplot(0,' ' , ' ',0) C Save histograms CALL HROUT(0,ICYCLE,' ') CALL HRENDOHBOOK') * Close GKS display f i l e * close the postscript fi les cal l igmeta(O.O) cal l iclwk(2) CALL IGEND open(unit=43,File='xyzdose',form='formatted',status='unknown') write(43,*) nbx, nby, nbz do 15 i = 1, 1 write(43,*) (-xtank+j*(2*xtank/nbx), j=0,nbx) 15 continue do 16 i = 1, 1 write(43,*) (-ytank+j*(2*ytank/nby), j=0,nby) 16 continue do 17 i = 1, 1 write(43,*) (j*(2*ztank/nbz), j=0,nbz) 17 continue do 18 k = l.nbz do 19 j = l.nby write(43,*) (xyzdose(i,j,k),i=l,nbx) 19 continue 18 continue close(43) END CDECK ID>, UHINIT. SUBROUTINE UHINIT * **************************************** 170 * To book the user's histograms * ********************************************* C common/source/einc,bin,enflag common/runpar/iaxis,matcav,igeom common/phan/iphan,ilung,zphan,rotang common/tank/xtank,ytank,ztank c ommon/pos it/xpo s,ypos,zpo s,xcut,ycut common/cavi/cavpos,xcav,ycav,zcav * emaxp=l.l*einc CALL HBOOKKl,'Bremsstrahlung spectrum$' ,bin,0. 
,emaxp,0.) END CDECK ID>, VIEWYZ. SUBROUTINE VIEWYZ (IVIEW) C. fj, ****************************************************************** C. * * C. * Draw fu l l set up in 'view bank' mode. * C. * * C. ****************************************************************** C. Common/runpar/iaxis.matcav,igeom common/posit/xpos,ypos,zpos,xcut,ycut common/phan/iphan,ilung,zphan,rotang * CALL GSATT('*','SEEN', 1) C C Create bank for view YZ. CALL GDOPEN(IVIEW) fmag = .15 C if((iaxis.eq.l).or.(iaxis.eq.2))then CALL GDRAWCOarea',iaxis , 0.0, 10.,10.,fmag,fmag) end i f if(iaxis.eq.3) then ca l l gdraw('area',120.,150.,0., 10.,10.,.15,.15) end i f 171 c CALL GDCLOS END subroutine readin Q, *********************************************** C. * * C. * To read in runparams for this run * C. * * Q * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * common/source/einc,bin,enflag common/runpar/iaxis,matcav,igeom common/posit/xpos,ypos,zpos,xcut,ycut common/phan/iphan,ilung,zphan,rotang common/tank/xtank,ytank,ztank common/cavi/cavpos,xcav,ycav,zcav common/tumor/itumor,rtumor,xtumor,ytumor,ztumor,tratio common/even/nev,ide,ipoint common/tmedia/f ieldp,ifielp,tmaxfp,dmaxmp,deemap,epsip,stmip common/xyzv/nbx,nby,nbz * * data card 1 open(1,f ile='runparams',status='OLD') * write(6,*) ' ' write(6,*) 'Parameters for this run:' write(6,*) ' ' irunpr = 0 read(l,*) icard,nev,ide,ipoint write(6,*) ' data card ' , icard write(6,*) ' number of events for this run ',nev write(6,*) ' number of events plotted ' , ide write(6,*) ' point, parallel or pencilsource ' , ipoint write(6,*) ' ' * * data card 2 read(1,*)icard, einc, bin, enflag write(6,*) ' ' 172 write(6,*) ' data card ' , icard write(6,*) ' incident energy in Gev for photons = einc write(6,*) ' number of bins in energy spectrum = ' , bin * write(6,*) ' atomic number of production target = ' , ztar write(6,*) ' energy flag (0 for monoenergetic else Mohan)=',enflag write(6,*) ' ' * * data card 3 read(l,*)icard, xtank,ytank,ztank write(6,*) ' ' write(6,*) ' data card ' , icard write(6,*) ' water tank dimensions' write(6,*) ' x half length = '.xtank write(6,*) ' y half length = ',ytank write(6,*) ' z half length = '.ztank write(6,*) ' ' * data card 4 read(l,*)icard, cavpos,xcav,ycav,zcav write(6,*) ' ' write(6,*) ' data card ' , icard write(6,*) ' cavity position and cavity half lengths' write(6,*) ' cavity position = ' , cavpos write(6,*) ' cavity x half length = ' , xcav write(6,*) ' cavity y half length = ' , ycav write(6,*) ' cavity z half length = ' , zcav write(6,*) ' ' * * data card 5 read(1,*)icard,f ieldp,if ielp,tmaxfp,dmaxmp,deemap,eps ip,stmip write(6,*) > > write(6,*) ' data card ' , icard write(6,*) ' tracking media parameters ' write(6,*) ' fieldm = ' , fieldp write(6,*) ' i f ie ld = ' , i f ie lp write(6,*) ' tmaxfd = ' , tmaxfp write(6,*) ' dmaxms = ' , dmaxmp write(6,*) ' deemax = ' , deemap write(6,*) ' epsil = ' , epsip write(6,*) ' stmin = ' , stmip write(6,*) > > 1 7 3 * data card 6 read(l,*)icard, xpos.ypos.zpos write(6,*) ' ' write(6,*) write(6,*) write(6,*) data card ' , icard x and y position of the point source ,xpos,ypos * data card 7 read(1,*)icard,xcut,ycut write(6,*) ' ' write(6,*) ' data card write(6,*) ' +ve x and y write(6,*) ' ' icard acceptance for point source',xcut,ycut * data card 8 read(l,*)icard,nbx.nby,nbz write(6,*) ' ' write(6,*) write(6,*) write(6,*) data card ' , icard # of cells along X, Y & Z',nbx,nby,nbz * data card 10 read(l,*) write(6,* 
write(6,* write(6,* write(6,* write(6,* icard, matcav > > ' data card ' , icard ' idenfification number for material in the cavity write(6,*) ' 1 air ' write(6,*) ' 2 Nal ' write(6,*) ' 3 Aluminum ' write(6,*) ' 4 lead ' write(6,*) ' 5 glass ' write(6,*) ' 6 water ' write(6,*) ' 7 muscle ' write(6,*) ' 8 bone ' write(6,*) ' 9 fat ' write(6,*) '10 lucite ' write(6,*) '11 polystyrene ' 'cavity material number used = > > matcav 174 * data card 15 read(l,*) icard, iphan ,ilung,rotang write(6,*) data card ' , icard 5 = water tank ' 'phantom number = ',iphan write(6,*) write(6,*) write(6,*) write(6,*) write(6,*) write(6,*) ilung = ' , ilung, rotation angle = ( 1 for lungs in phantom 4)' rotang * data card 20 read(l,*) icard, iaxis.igeom write(6,*) write(6,*) write(6,*) write(6,*) write(6,*) write(6,*) write(6,*) data card ' , icard 1 = area viewed from positive x axis' 2 = area viewd from postive y axis' 3 = isometric view ' a x i s , l a x i s chosen' * data card 21 read(1,*) icard,itumor,rtumor,xtumor,ytumor,ztumor,tratio write(6,*) ' ' write(6,*) ' data card ' , icard write(6,*) ' itumor = 1, i f tumor is included ' write(6,*) ' tumor radius ' , rtumor write(6,*) ' tumor position ' , xtumor, ytumor, ztumor write(6,*) ' relative strenth of tumor ' , tratio write(6,*) ' ' close(l) write(6,*) ' ' write(6,*) 'End of reading run paramters card.' write(6,*) ' ' * check that both collimator and simulation are not specified * simultaneously end *********************************** * Varian linac spectra (Mohan) * 175 * * subroutine mohan4(egamma) dimension area(16),x(17),sumar(16) dimension rndm(2) data x/ + 0.0,0.25,0.5,0.75,1.,1.25,1.5,1.75,2.,2.25,2.5,2.75,3.,3.25, + 3.5,3.75,4./ data area/ + 0.0000273753,0.0446637,0.177792,0.133375,0.132117,0.138924, + 0.0855541,0.0646403,0.0614342,0.0418522,0.0345028,0.0173501, + 0.0217844,0.0217252,0.0217967,0.00246181/ data suinar/ + 0.00002738,0.04469112,0.22248290,0.35585757,0.48797445, + 0.62689818,0.71245231,0.77709265,0.83852688,0.88037910, + 0.91488190,0.93223195,0.95401632,0.97574149,0.99753819,1. 
/ cal l grndm(rndm,2) do 10 i = 1,16 if(rndm(l).le.sumar(i)) then egamma = rndm(2)*(x(i+l)-x(i))+x(i) egamma = egamma*.001 return end i f 10 continue end *********************************************** * subroutine mohan6(egamma) dimension area(24),x(25),sumar(24) dimension rndm(2) 176 data x/ +0.0,0.25,0.5,0.75,1,1-25,1.5,1.75,2,2.25,2.5,2.75, +3,3.25,3.5,3.75,4,4.25,4.5,4.75,5,5.25,5.5,5.75,6./ data area/ + 0.00102054,0.03217721,0.11450595,0.11450595,0.11020442, + 0.10148627,0.10148627,0.06202282,0.05882261,0.04638703, + 0.03435115,0.03240995,0.03687929,0.02359230,0.03096995, + 0.02428053,0.02209113,0.01280682,0.01039471,0.01193327, + 0.00403651,0.00674605,0.00297958,0.00390971/ data sumar/ + 0.00102054,0.03319775,0.14770370,0.26220965,0.37241407, + 0.47390034,0.57538660,0.63740943,0.69623203,0.74261907, + 0.77697021,0.80938016,0.84625944,0.86985175,0.90082169, + 0.92510222,0.94719335,0.96000017,0.97039488,0.98232815, + 0.98636466,0.99311070,0.99609029,1./ cal l grndm(rndm,2) do 10 i = 1,24 if(rndm(l).le.sumar(i)) then egamma = rndm(2)*(x(i+l)-x(i))+x(i) egamma = egamma*.001 return end i f 10 continue end ********************************************** * subroutine mohanlO(egamma) dimension area(20),x(21),sumar(20) dimension rndm(2) data x/ + 0.0,0.5,1,1.5,2,2.5,3,3.5,4,4.5,5,5.5, + 6,6.5,7,7.5,8,8.5,9,9.5,10./ 177 data area/ + 0.10898780,0.11349740,0.09533682,0.11917857,0.09407601, + 0.09597514,0.06894617,0.05077439,0.04720343,0.04002970, + 0.04509900,0.02095725,0.02748013,0.01950877,0.01950877, + 0.01103236,0.01103236,0.00733260,0.00358300,0.00046034/ data sumar/ + 0.10898780,0.22248520,0.31782202,0.43700059,0.53107660, + 0.62705174,0.69599791,0.74677229,0.79397572,0.83400542, + 0.87910441,0.90006167,0.92754180,0.94705056,0.96655933, + 0.97759169,0.98862406,0.99595666,0.99953966,1./ cal l grndm(rndm,2) do 10 i = 1,20 if(rndm(l).le.sumar(i)) then egamma = rndm(2)*(x(i+l)-x(i))+x(i) egamma = egamma*.001 return end i f 10 continue end ********************************************** * subroutine mohanl5(egamma) dimension area(58),x(59),sumar(58) dimension mdm(2) data x/ + 0.0,0.25,0.5,0.75,1,1.25,1.5,1.75,2,2.25,2.5,2.75, + 3,3.25,3.5,3.75,4,4.25,4.5,4.75,5,5.25, + 5.5,5.75,6,6.25,6.5,6.75,7,7.25,7.5,7.75, + 8,8.25,8.5,8.75,9,9.25,9.5,9.75,10,10.25, + 10.5,10.75,11,11.25,11.5,11.75,12,12.25,12.5,12.75, + 13,13.25,13.5,13.75,14,14.25,14.5/ data area/ + 0.00279546,0.00330203,0.02583623,0.05108165,0.06093008, 178 + 0.06093008,0.05641301,0.05641301,0.05020961,0.05020961, + 0.03801635,0.03801635,0.03533825,0.03533825,0.02552386, + 0.02552386,0.02552386,0.02552386,0.01929309,0.01929309, + 0.01929309,0.01929309,0.01279252,0.01279252,0.01279252, + 0.01279252,0.01279252,0.01279252,0.01279252,0.01484022, + 0.01142497,0.01142497,0.01142497,0.00945127,0.00945127, + 0.00945127,0.00945127,0.00466090,0.00466090,0.00706059, + 0.00706059,0.00489676,0.00489676,0.00401299,0.00401299, + 0.00295336,0.00295336,0.00295336,0.00271848,0.00472148, + 0.00472148,0.00472148,0.00472148,0.00270465,0.00270465, + 0.00100643,0.00100643,0.00028533/ data sumar/ + 0.00279546,0.00609749,0.03193372,0.08301537,0.14394545, + 0.20487552,0.26128853,0.31770153,0.36791114,0.41812074, + 0.45613710,0.49415345,0.52949169,0.56482994,0.59035380, + 0.61587766,0.64140152,0.66692538,0.68621848,0.70551157, + 0.72480466,0.74409775,0.75689027,0.76968279,0.78247531, + 0.79526783,0.80806035,0.82085287,0.83364539,0.84848561, + 0.85991058,0.87133555,0.88276052,0.89221179,0.90166306, + 0.91111432,0.92056559,0.92522649,0.92988739,0.93694798, + 
0.94400858,0.94890534,0.95380210,0.95781508,0.96182807, + 0.96478142,0.96773478,0.97068814,0.97340662,0.97812809, + 0.98284957,0.98757104,0.99229252,0.99499717,0.99770182, + 0.99870824,0.99971467,1./ cal l grndm(rndm,2) do 10 i = 1,58 if(rndm(l).le.sumar(i)) then egamma = rndm(2)*(x(i+l)-x(i))+x(i) egamma = egamma*.001 return end i f 10 continue end ********************************************** * 179 subroutine wag!8(egamma) dimension area(90),x(91),sumar(90) dimension rndm(2) data x/ +0.0,0.2,0.4,0.6,0.8,1,1.2,1.4,1.6,1.8,2,2.2, +2.4,2.6,2.8,3,3.2,3.4,3.6,3.8,4,4.2,4.4,4.6,4.8,5,5.2, +5.4,5.6,5.8,6,6.2,6.4,6.6,6.8,7,7.2,7.4,7.6,7.8,8,8.2, + 8.4,8.6,8.8,9,9.2,9.4,9.6,9.8,10,10.2, +10.4,10.6,10.8,11,11.2,11.4,11.6,11.8,12,12.2, +12.4,12.6,12.8,13,13.2,13.4,13.6,13.8,14,14.2, + 14.4,14.6,14.8,15,15.2,15.4,15.6,15.8,16,16.2, +16.4,16.6,16.8,17,17.2,17.4,17.6,17.8,18./ data area/ + 0.00250000,0.00560000,0.00680000,0.00820000, + 0.01000000,0.01175000,0.01385000,0.01600000, + 0.01825000,0.02075000,0.02290000,0.02510000, + 0.02670000,0.02830000,0.02900000,0.02960000, + 0.02820000,0.02570000,0.02460000,0.02350000, + 0.02310000,0.02150000,0.02130000,0.02110000, + 0.02075000,0.02060000,0.02050000,0.02040000, + 0.01940000,0.01875000,0.01860000,0.01860000, + 0.01825000,0.01670000,0.01650000,0.01640000, + 0.01475000,0.01470000,0.01375000,0.01375000, + 0.01350000,0.01325000,0.01320000,0.01260000, + 0.01190000,0.01180000,0.01060000,0.01050000, + 0.00950000,0.00925000,0.00825000,0.00800000, + 0.00750000,0.00700000,0.00650000,0.00640000, + 0.00625000,0.00580000,0.00560000,0.00550000, + 0.00510000,0.00480000,0.00460000,0.00450000, + 0.00440000,0.00430000,0.00425000,0.00380000, + 0.00370000,0.00330000,0.00330000,0.00310000, + 0.00270000,0.00260000,0.00230000,0.00200000, + 0.00180000,0.00160000,0.00140000,0.00120000, + 0.00100000,0.00090000,0.00080000,0.00070000, + 0.00060000,0.00050000,0.00040000,0.00030000, + 0.00025000,0.00020000/ 180 data sumar/ + 0.00250000,0.00810000,0.01490000,0.02310000,0.03310000, + 0.04485000,0.05870000,0.07470000,0.09295000,0.11370000, + 0.13660000,0.16170000,0.18840000,0.21670000,0.24570000, + 0.27530000,0.30350000,0.32920000,0.35380000,0.37730000, + 0.40040000,0.42190000,0.44320000,0.46430000,0.48505000, + 0.50565000,0.52615000,0.54655000,0.56595000,0.58470000, + 0.60330000,0.62190000,0.64015000,0.65685000,0.67335000, + 0.68975000,0.70450000,0.71920000,0.73295000,0.74670000, + 0.76020000,0.77345000,0.78665000,0.79925000,0.81115000, + 0.82295000,0.83355000,0.84405000,0.85355000,0.86280000, + 0.87105000,0.87905000,0.88655000,0.89355000,0.90005000, + 0.90645000,0.91270000,0.91850000,0.92410000,0.92960000, + 0.93470000,0.93950000,0.94410000,0.94860000,0.95300000, + 0.95730000,0.96155000,0.96535000,0.96905000,0.97235000, + 0.97565000,0.97875000,0.98145000,0.98405000,0.98635000, + 0.98835000,0.99015000,0.99175000,0.99315000,0.99435000, + 0.99535000,0.99625000,0.99705000,0.99775000,0.99835000, + 0.99885000,0.99925000,0.99955000,0.99980000,1./ cal l grndm(rndm,2) do 10 i = 1,90 if(rndm(l).le.sumar(i)) then egamma = rndm(2)*(x(i+l)-x(i))+x(i) egamma = egamma*.001 return end i f 10 continue end * subroutine mohan24(egamma) dimension area(48),x(49),sumar(48) dimension rndm(2) data x/ + 0.0,0.5,1,1.5,2,2.5,3,3.5,4,4.5,5,5.5, 181 + 6,6.5,7,7.5,8,8.5,9,9.5,10,10.5,11,11-5,12,12.5,13, + 13.5,14,14.5,15,15.5,16,16.5,17,17.5,18, + 18.5,19,19.5,20,20.5,21,21.5,22,22.5,23,23.5,24./ data area/ + 0.03397082,0.03400165,0.04259799,0.05822012,0.06330694, + 
0.06949286,0.06012567,0.04966026,0.04966026,0.04966026, + 0.04012049,0.03495509,0.03495509,0.03122967,0.02335256, + 0.02204429,0.02429327,0.02479524,0.02494953,0.01956055, + 0.01956055,0.01602799,0.01465543,0.01714823,0.01472746, + 0.01472746,0.01051517,0.01015987,0.01060481,0.00849799, + 0.00898573,0.00615922,0.00727301,0.00546521,0.00636704, + 0.00600969,0.00328375,0.00234633,0.00488280,0.00396287, + 0.00204737,0.00497451,0.00361919,0.00297819,0.00100143, + 0.00195447,0.00047097,0.00064068/ data sumar/ + 0.03397082,0.06797246,0.11057045,0.16879057,0.23209751, + 0.30159037,0.36171604,0.41137630,0.46103656,0.51069682, + 0.55081731,0.58577239,0.62072748,0.65195715,0.67530971, + 0.69735400,0.72164727,0.74644251,0.77139204,0.79095259, + 0.81051313,0.82654112,0.84119656,0.85834478,0.87307224, + 0.88779970,0.89831487,0.90847475,0.91907956,0.92757755, + 0.93656328,0.94272249,0.94999550,0.95546071,0.96182775, + 0.96783744,0.97112120,0.97346752,0.97835032,0.98231319, + 0.98436056,0.98933506,0.99295426,0.99593245,0.99693388, + 0.99888835,0.99935932,1./ cal l grndm(rndm,2) do 10 i = 1,48 if(rndm(l).le.sumar(i)) then egamma = rndm(2)*(x(i+l)-x(i))+x(i) egamma = egamma*.001 return end i f 10 continue end 182 subroutine moh24(egamma) dimension fac(47),area(47),fkl(47),x(48),sumar(47) dimension rndm(2) data area/ 0.0345847,0.0389743,0.0512968,0.0618336,0.0675692, 0.0659506,0.0558597, 0.0505348,0.0505348,0.0456809,0.0381988,0.0355707, 0.0336752,0.0277717, 0.0230982,0.0235768,0.0249765,0.0253104,0.022647, 0.019905,0.0181076, 0.0156119,0.0161819,0.0162185,0.0149868,0.0128436, 0.0105196,0.0105652, 0.00971961,0.00889581,0.00770583,0.00683439, 0.00648127,0.00602031, 0.00629735,0.00472855,0.00286461,0.00367822, 0.00450072,0.00305804, 0.00357277,0.00437252,0.00335678,0.00202485, 0.00150398,0.00123407, 0.000565614/ data fkl / 7968.53,28.5789,15.726,48.2961,39.715,-26.227,-23.4748,1., 1.,-25.7526,-47.5614,1.,-65.9453,-31.1883,-187.786, 109.238,489.42, 1592.25,-45.5881,1.,-69.5456,-178.99,98.5534,-101.486, 1.,-58.3231,-691.454, 552.153,-116.609,503.697,-86.9175,220.574,-135.896, 272.418,-687.502,-90.1243,-262.074,96.8566,-267.057, -128.256, 83.9296,-181.267,-383.265,-124.281,257.781,-165.604, 1447.56/ data fac/ 550.429,0.977682,-0.136615,3.72265, 2.61702,-6.70938,-6.37259,-3.89226,-4.39226,-7.6028,-9.38358, -5.92416,-11.1914,-8.9823,-16.425,-3.09904,15.698, 71.3512,-11.8149,-9.95756,-13.2686,-16.8387,-8.56044, -15.5419,-12.468,-14.7482,-28.2976,-2.58281,-17.0168, 183 + -6.28841,-17.0895,-13.235,-18.5116,-13.9699,-26.4089, + -19.1023,-20.2515,-18.5375,-22.1539,-21.0344,-20.1503, + -22.8352,-24.3231,-22.7533,-21.9746,-23.6587,-22.1125/ data x/ +0.5.1-,1.5,2.,2.5,3.,3.5,4.,4.5,5.,5.5,6.,6.5,7.,7.5, + 8.,8.5,9.,9.5,10.,10.5, + 11. ,11.5,12.,12.5,13.,13.5,14.,14.5,15.,15.5,16., + 16.5,17.,17.5,18.,18.5,19., + 19.5,20.,20.5,21.,21.5,22.,22.5,23.,23.5,24./ data sumar/ + 0.0345847,0.073559,0.124856,0.186689,0.254259,0.320209, + 0.376069,0.426604, + 0.477139,0.522819,0.561018,0.596589,0.630264,0.658036,0.681134, + 0.704711, + 0.729687,0.754998,0.777645,0.79755,0.815657,0.831269,0.847451, + 0.86367, + 0.878656,0.8915,0.90202,0.912585,0.922304,0.9312,0.938906, + 0.94574,0.952222, + 0.958242,0.964539,0.969268,0.972132,0.975811,0.980311, + 0.983369,0.986942, + 0.991315,0.994671,0.996696,0.9982,0.999434,1./ cal l grndm(rndm,2) do 10 i = 1,47 if(rndm(l).le.sumar(i)) then i f (fkl(i) .ne.l.)then s i = 1. 
s = sign(si,fkl(i)) egamma = -fac(i) + s *sqrt( (fac(i) + x(i))**2 + + 2.*fkl(i)*area(i)*radm(2) ) egamma = egamma*.001 return else egamma = (x(i) + .5*rndm(2))*.001 return end i f 184 end i f 10 c o n t i n u e end ******************************************* * SUBROUTINE GTGAMA C . Q _ ****************************************************************** C. * * C . * P h o t o n t r a c k . Computes s t e p s i z e and p r o p a g a t e s p a r t i c l e * C . * t h r o u g h s t e p . * C . * * C . * ==>Ca l l ed by : GTRACK * C . * A u t h o r s R . B r u n , F . B r u y a n t L . U r b a n * * * * * * * * * C . * * Q ****************************************************************** c. PARAMETER (KWBANK=69000,KWW0RK=5200) COMMON/GCBANK/NZEBRA,GVERSN,ZVERSN,IXSTOR,IXDIV, IXCONS,FENDQ(16) + ,LMAIN,LRl.WS(KWBANK) DIMENSION I Q ( 2 ) , Q ( 2 ) , L Q ( 8 0 0 0 ) , I W S ( 2 ) EQUIVALENCE ( Q ( l ) , IQ(1) , LQ(9 ) ) , (LQ(1) ,LMAIN) , ( IWS( l ) ,WS(D) EQUIVALENCE ( JCG, JGSTAT) COMMON/GCLINK/JDIGI ,JDRAW ,JHEAD , JHITS , JK INE , JMATE , JPART + ,JROTM ,JRUNG , J SET , JSTAK , JGSTAT, JTMED , JTRACK, JVERTX + , JVOLUM,JXYZ ,JGPAR , JGPAR2, JSKLT C COMMON/GCCUTS/CUTGAM,CUTELE,CUTNEU,CUTHAD,CUTMUO,BCUTE,BCUTM + ,DCUTE ,DCUTM ,PPCUTM,T0FMAX,GCUTS(5) C COMMON/GCJLOC/NJLOC(2) , JTM, JMA, JLOSS, JPROB, JMIXT, JPHOT, JANNI + , JCOMP, JBREM, JPA IR , JDRAY, JPF IS , JMUNU, JRAYL + , JMULOF, JCOEF, JRANG C INTEGER NJLOC , JTM, JMA, J LOSS , J PROB , JM IXT , J PHOT, JANNI + , JCOMP, JBREM, JPA IR , JDRAY, JPF IS , JMUNU, JRAYL + , JMULOF, JCOEF, JRANG C COMMON/GCJLCK/NJLCK(2 ) , JTCKOV, JABSCO, JEFF IC , J INDEX, JCURIN 185 , JPOLAR,JTSTRA,JTSTCO,JTSTEN,JTASHO EQUIVALENCE (JLASTV,JTSTEN) INTEGER NJLCK,JTCKOV,JABSCO,JEFFIC,JINDEX,JCURIN H ,JPOLAR,JLASTV,JTSTRA,JTSTCO,JTSTEN H ,JTASHO DOUBLE PRECISION PI,TWOPI,PIBY2,DEGRAD.RADDEG,CLIGHT,BIG,EMASS DOUBLE PRECISION EMMU.PMASS.AVO PARAMETER (PI=3.14159265358979324D0) PARAMETER (TW0PI=6.28318530717958648D0) PARAMETER (PIBY2=1.57079632679489662D0) PARAMETER (DEGRAD=0.0174532925199432958D0) PARAMETER (RADDEG=57.2957795130823209D0) PARAMETER (CLIGHT=29979245800.D0) PARAMETER (BIG=10000000000.DO) PARAMETER (EMASS=0.0005109990615D0) PARAMETER (EMMU=0.105658387D0) PARAMETER (PMASS=0.9382723128D0) PARAMETER (AVO=0.60221367D0) COMMON/GCPHYS/IPAIR H ,ICOMP h ,IPHOT H ,IPFIS H ,IDRAY h ,IANNI h ,IBREM h , IHADR H ,IMUNU r- , IDCAY i- .ILOSS ^ .IMULS r ,IRAYL COMMON/GCPHLT/ILABS H .ISYNC r .ISTRA , SPAIR,SLPAIR,ZINTPA,STEPPA , SCOMP,SLCOMP,ZINTCO,STEPCO , SPHOT,SLPHOT,ZINTPH,STEPPH , SPFIS,SLPFIS,ZINTPF,STEPPF , SDRAY,SLDRAY,ZINTDR,STEPDR ,SANNI,SLANNI,ZINTAN,STEPAN , SBREM,SLBREM,ZINTBR,STEPBR , SHADR,SLHADR,ZINTHA,STEPHA , SMUNU,SLMUNU,ZINTMU,STEPMU ,SDCAY,SLIFE .SUMLIF.DPHYSl , SLOSS,SOLOSS,STLOSS,DPHYS2 , SMULS,SOMULS,STMULS,DPHYS3 , SRAYL,SLRAYL,ZINTRA,STEPRA , SLABS,SLLABS,ZINTLA,STEPLA 186 PARAMETER (NWSTAK=12,NWINT=ll,NWREAL=12,NWTRAC=NWINT+NWREAL+5) COMMON /GCSTAK/ NJTMAX, NJTMIN, NTSTKP, NTSTKS, NDBOOK, NDPUSH, + NJFREE, NJGARB, NJINVO, LINSAV(15), LMXSAV(15) EQUIVALENCE (ISTORD,NJTMIN) C COMMON/GCTMED/NUMED,NATMED(5),ISVOL,IFIELD,FIELDM,TMAXFD,STEMAX + ,DEEMAX,EPSIL,STMIN,CFIELD,PREC,IUPD,ISTPAR,NUMOLD COMMON/GCTLIT/THRIND,PMIN,DP,DNDL,JMIN,ITCKOV,IMCKOV,NPCKOV C COMMON/GCMULO/SINMUL(101).COSMUL(lOl),SQRMUL(101),OMCMOL,CHCMOL + ,EKMIN,EKMAX,NEKBIN,NEK1,EKINV,GEKA,GEKB,EKBIN(200),EL0W(200) C REAL SINMUL,COSMUL,SQRMUL,OMCMOL,CHCMOL,EKMIN,EKMAX,ELOW,EKINV REAL GEKA,GEKB,EKBIN INTEGER NEKBIN,NEK1 C PARAMETER (MAXMEC=30) 
      COMMON/GCTRAK/VECT(7),GETOT,GEKIN,VOUT(7),NMEC,LMEC(MAXMEC)
     + ,NAMEC(MAXMEC),NSTEP ,MAXNST,DESTEP,DESTEL,SAFETY,SLENG
     + ,STEP  ,SNEXT ,SFIELD,TOFG  ,GEKRAT,UPWGHT,IGNEXT,INWVOL
     + ,ISTOP ,IGAUTO,IEKBIN, ILOSL, IMULL,INGOTO,NLDOWN,NLEVIN
     + ,NLVSAV,ISTORY
      PARAMETER (MAXME1=30)
      COMMON/GCTPOL/POLAR(3), NAMEC1(MAXME1)
C
      PARAMETER (EPSMAC=1.E-6)
      DOUBLE PRECISION ONE,XCOEF1,XCOEF2,XCOEF3,ZERO
      PARAMETER (ONE=1,ZERO=0)
      PARAMETER (EPCUT=1.022E-3)
C.
C.
*
*  *** Particle below energy threshold ? Short circuit
      IF (GEKIN.LE.CUTGAM) GOTO 998
*
*  *** Update local pointers if medium has changed
*
      IF(IUPD.EQ.0)THEN
         IUPD  = 1
         JPHOT = LQ(JMA-6)
         JCOMP = LQ(JMA-8)
         JPAIR = LQ(JMA-10)
         JPFIS = LQ(JMA-12)
         JRAYL = LQ(JMA-13)
      ENDIF
*
*  *** Compute current step size
*
      IPROC  = 103
      STEP   = STEMAX
      GEKRT1 = 1.-GEKRAT
*
*  ** Step limitation due to pair production ?
*
      IF (GETOT.GT.EPCUT) THEN
         IF (IPAIR.GT.0) THEN
            STEPPA = GEKRT1*Q(JPAIR+IEKBIN) +GEKRAT*Q(JPAIR+IEKBIN+1)
            SPAIR  = STEPPA*ZINTPA
            IF (SPAIR.LT.STEP) THEN
               STEP  = SPAIR
               IPROC = 6
            ENDIF
         ENDIF
      ENDIF
*
*  ** Step limitation due to Compton scattering ?
*
      IF (ICOMP.GT.0) THEN
         STEPCO = GEKRT1*Q(JCOMP+IEKBIN) +GEKRAT*Q(JCOMP+IEKBIN+1)
         SCOMP  = STEPCO*ZINTCO
         IF (SCOMP.LT.STEP) THEN
            STEP  = SCOMP
            IPROC = 7
         ENDIF
      ENDIF
*
*  ** Step limitation due to photo-electric effect ?
*
      IF (GEKIN.LT.0.4) THEN
         IF (IPHOT.GT.0) THEN
            STEPPH = GEKRT1*Q(JPHOT+IEKBIN) +GEKRAT*Q(JPHOT+IEKBIN+1)
            SPHOT  = STEPPH*ZINTPH
            IF (SPHOT.LT.STEP) THEN
               STEP  = SPHOT
               IPROC = 8
            ENDIF
         ENDIF
      ENDIF
*
*  ** Step limitation due to photo-fission ?
*
      IF (JPFIS.GT.0) THEN
         STEPPF = GEKRT1*Q(JPFIS+IEKBIN) +GEKRAT*Q(JPFIS+IEKBIN+1)
         SPFIS  = STEPPF*ZINTPF
         IF (SPFIS.LT.STEP) THEN
            STEP  = SPFIS
            IPROC = 23
         ENDIF
      ENDIF
*
*  ** Step limitation due to Rayleigh scattering ?
*
      IF (IRAYL.GT.0) THEN
         IF (GEKIN.LT.0.01) THEN
            STEPRA = GEKRT1*Q(JRAYL+IEKBIN) +GEKRAT*Q(JRAYL+IEKBIN+1)
            SRAYL  = STEPRA*ZINTRA
            IF (SRAYL.LT.STEP) THEN
               STEP  = SRAYL
               IPROC = 25
            ENDIF
         ENDIF
      ENDIF
      IF (STEP.LT.0.) STEP = 0.
*
*  ** Step limitation due to geometry ?
*
      IF (STEP.GE.SAFETY) THEN
         CALL GTNEXT
         IF (IGNEXT.NE.0) THEN
            STEP  = SNEXT + PREC
            INWVOL= 2
            IPROC = 0
            NMEC  = 1
            LMEC(1)=1
         ENDIF
*
*        Update SAFETY in stack companions, if any
*
         IF (IQ(JSTAK+3).NE.0) THEN
            DO 10 IST = IQ(JSTAK+3),IQ(JSTAK+1)
               JST = JSTAK +3 +(IST-1)*NWSTAK
               Q(JST+11) = SAFETY
   10       CONTINUE
            IQ(JSTAK+3) = 0
         ENDIF
*
      ELSE
         IQ(JSTAK+3) = 0
      ENDIF
*
*  *** Linear transport
*
      IF (INWVOL.EQ.2) THEN
         DO 20 I = 1,3
            VECTMP = VECT(I) +STEP*VECT(I+3)
            IF(VECTMP.EQ.VECT(I)) THEN
*
*  *** Correct for machine precision
*
               IF(VECT(I+3).NE.0.) THEN
                  VECTMP = VECT(I)+ABS(VECT(I))*SIGN(1.,VECT(I+3))*
     +                     EPSMAC
                  IF(NMEC.GT.0) THEN
                     IF(LMEC(NMEC).EQ.104) NMEC=NMEC-1
                  ENDIF
                  NMEC=NMEC+1
                  LMEC(NMEC)=104
               ENDIF
            ENDIF
            VECT(I) = VECTMP
   20    CONTINUE
      ELSE
         DO 30 I = 1,3
            VECT(I) = VECT(I) +STEP*VECT(I+3)
   30    CONTINUE
      ENDIF
      SLENG = SLENG +STEP
*
*  *** Update time of flight
*
      TOFG = TOFG +STEP/CLIGHT
*
*  *** Update interaction probabilities
*
      IF (GETOT.GT.EPCUT) THEN
         IF (IPAIR.GT.0) ZINTPA = ZINTPA -STEP/STEPPA
      ENDIF
      IF (ICOMP.GT.0) ZINTCO = ZINTCO -STEP/STEPCO
      IF (GEKIN.LT.0.4) THEN
         IF (IPHOT.GT.0) ZINTPH = ZINTPH -STEP/STEPPH
      ENDIF
      IF (JPFIS.GT.0) ZINTPF = ZINTPF -STEP/STEPPF
      IF (IRAYL.GT.0) THEN
         IF (GEKIN.LT.0.01) ZINTRA = ZINTRA -STEP/STEPRA
      ENDIF
*
      IF (IPROC.EQ.0) GO TO 999
      NMEC = 1
      LMEC(1) = IPROC
*
*  ** Pair production ?
*
      IF (IPROC.EQ.6) THEN
         CALL GPAIRG
*
*  ** Compton scattering ?
*
      ELSE IF (IPROC.EQ.7) THEN
         CALL GCOMP
*
*  ** Photo-electric effect ?
*
      ELSE IF (IPROC.EQ.8) THEN
*        Calculate range of the photoelectron ( with kin.
*        energy Ephot)
*
CDA      IF(GEKIN.LE.0.001) THEN
         IF(GEKIN.LE.0.00001) THEN
            JCOEF = LQ(JMA-17)
            IF(GEKRAT.LT.0.7) THEN
               I1 = MAX(IEKBIN-1,1)
            ELSE
               I1 = MIN(IEKBIN,NEKBIN-1)
            ENDIF
            I1 = 3*(I1-1)+1
            XCOEF1 = Q(JCOEF+I1)
            XCOEF2 = Q(JCOEF+I1+1)
            XCOEF3 = Q(JCOEF+I1+2)
            IF(XCOEF1.NE.0.) THEN
               STOPMX = -XCOEF2+SIGN(ONE,XCOEF1)*SQRT(XCOEF2**2 - (XCOEF3-
     +                  GEKIN/XCOEF1))
            ELSE
               STOPMX = - (XCOEF3-GEKIN)/XCOEF2
            ENDIF
*
*        DO NOT call GPHOT if this (overestimated) range is smaller
*        than SAFETY
*
            IF (STOPMX.LE.SAFETY) GOTO 998
         ENDIF
         CALL GPHOT
*
*  ** Rayleigh effect ?
*
      ELSE IF (IPROC.EQ.25) THEN
         CALL GRAYL
*
*  ** Photo-fission ?
*
      ELSE IF (IPROC.EQ.23) THEN
         CALL GPFIS
*
      ENDIF
*
      GOTO 999
*
  998 DESTEP = GEKIN
* 998 continue
      GEKIN  = 0.
      GETOT  = 0.
      VECT(7)= 0.
      ISTOP  = 2
      NMEC   = 1
      LMEC(1)= 30
  999 END
************************************************
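The energy-sampling subroutines listed above draw a photon energy from a tabulated accelerator spectrum by inverse-transform sampling: one uniform random number from GEANT's grndm generator selects a spectral bin by comparison with the cumulative array sumar, and a second random number places the energy within that bin, either uniformly over the 0.5 MeV bin width or, in moh24, through the fac and fkl coefficients, which appear to invert a linearly varying density inside the bin. The factor of .001 converts the tabulated MeV values to the GeV units used internally by GEANT. The GTGAMA routine that follows is GEANT's standard photon-tracking routine reproduced with the author's modifications; the CDA-prefixed line appears to preserve the original photoelectron-range threshold as a comment. As a reading aid only, the sketch below restates the bin-selection and uniform in-bin interpolation step; the names esampl, edges, cdf, r1 and r2 are illustrative and are not part of GEANT or of the thesis code.

*     Minimal sketch (not part of the thesis code): inverse-transform
*     sampling of a tabulated spectrum.  edges(n+1) holds the bin
*     edges in MeV, cdf(n) the cumulative bin probabilities, and
*     r1, r2 are uniform random numbers in (0,1).  The result is
*     returned in GeV, mirroring the *.001 conversion above.
      real function esampl(edges,cdf,n,r1,r2)
      integer n,i
      real edges(n+1),cdf(n),r1,r2
      do 10 i = 1,n
        if (r1.le.cdf(i)) then
*         uniform interpolation inside the selected bin
          esampl = (edges(i) + r2*(edges(i+1)-edges(i)))*0.001
          return
        end if
   10 continue
*     guard against round-off leaving r1 just above cdf(n)
      esampl = edges(n+1)*0.001
      end

moh24 refines this scheme by replacing the uniform in-bin step with the quadratic expression built from fac(i) and fkl(i) whenever fkl(i) is not flagged as 1., which retains the slope of the spectrum across each 0.5 MeV bin.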
