Open Collections

UBC Theses and Dissertations

A measurement of the branching ratio for the weak radiative decay of the lambda hyperon Noble, Anthony James 1990

Full Text

A MEASUREMENT OF THE BRANCHING RATIO FOR THE WEAK RADIATIVE DECAY OF THE LAMBDA HYPERON

By ANTHONY JAMES NOBLE
B.Sc., University of New Brunswick, 1983
M.Sc., University of British Columbia, 1986

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES, Department of Physics

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
December 1990
© Anthony James Noble, 1990

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Physics
The University of British Columbia
Vancouver, Canada

Abstract

In a measurement performed at the Brookhaven National Laboratory we have measured the branching ratio for the weak radiative decay of the Λ hyperon. Our result,

    rate(Λ → n + γ) / rate(Λ → anything) = (1.78 ± 0.24) × 10⁻³,

is the second, but most precise, measurement of this quantity. It is in slight disagreement with the earlier measurement.

Using a thick copper degrader, kaons of initial momentum 680 MeV/c were brought to rest in a liquid hydrogen target. Absorption of these kaons on target protons produced a Λ, either directly via the reaction K⁻p → Λ + π⁰, or indirectly via the reaction K⁻p → Σ⁰ + π⁰ with subsequent decay, Σ⁰ → Λ + γ. The occurrence of each of these processes was established by observing the π⁰ decay photons in a modular NaI detector, the Crystal Box.
This detector was used to measure the energy and position of the final state particles, and hence allowed the topology of the event to be reconstructed. A wide range of theoretical techniques have been employed to estimate hyperon weak radiative decay amplitudes, including pole models, symmetry arguments, PCAC, and direct quark level calculations. These calculations must combine facets of the strong, weak and electromagnetic interactions. Hence, hyperon decays provide a testing ground for theorists to measure their ability to bring together these components in a relatively simple system. We discuss the implications of our result for a representative selection of theoretical estimates.

Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgements
1 Introduction.
  1.1 Historical Perspective
  1.2 The E811 Experimental Program
    1.2.1 Overview of the Experiment: E811-II
    1.2.2 Overview of E811-I
  1.3 Thesis Outline
2 Theory and Motivation.
  2.1 Introduction
  2.2 The Standard Model, a Particle Physics Overview
  2.3 Quark Models
    2.3.1 Nonrelativistic Potential Quark Models
    2.3.2 Bag Models
  2.4 Hyperon Decays
    2.4.1 Classification
    2.4.2 Relative Phase and Magnitude of S- and P-Waves
    2.4.3 The |ΔI| = 1/2 Rule
  2.5 Hyperon Weak Radiative Decay Theories
    2.5.1 Baryon Pole Models
    2.5.2 Quark Level Calculations
    2.5.3 The PCAC and Current Algebra Approach
    2.5.4 The Vector Dominance Symmetry Approach
  2.6 Summary of Experimental and Theoretical Results
3 Experimental Method.
  3.1 The AGS and Beam Line C6/C8
  3.2 The Detectors and Associated Apparatus
    3.2.1 The Crystal Box
    3.2.2 Beam Line Detectors
    3.2.3 The Hydrogen Target and Octagon Veto Assembly
    3.2.4 Charged Particle Rejection
    3.2.5 Shielding
    3.2.6 The Flasher System and Small NaI
    3.2.7 Temperature Control
  3.3 Electronics and Data Acquisition
    3.3.1 Triggers
    3.3.2 The Fast Electronics
    3.3.3 Data Acquisition
4 Analysis Preliminaries.
  4.1 Clump Finding Algorithms
    4.1.1 Energy Determination
    4.1.2 Position Determination
    4.1.3 Clump Timing
  4.2 The Monte Carlo Routines
    4.2.1 Introduction
    4.2.2 The GEANT Monte Carlo
  4.3 Energy Calibration
    4.3.1 Pedestals
    4.3.2 Non-Linearities
    4.3.3 Balancing the Crystal Box Gains
    4.3.4 The Monte Carlo Resolution
    4.3.5 Calibration Adjustments Based on Crystal Location
  4.4 Treatment of Neutrons
    4.4.1 Neutron Detection Efficiency
    4.4.2 Particle Identification
    4.4.3 Monte Carlo Energy Spectrum for Neutrons
  4.5 Pile-up Studies
    4.5.1 The Pile-up FERA System
  4.6 Degrader Tests and Target Size
  4.7 The First Pass Skimming Procedure
    4.7.1 Basic Cuts: Data Events
    4.7.2 Other Event Types
    4.7.3 Skimming the Monte Carlo Data
5 Data Analysis.
  5.1 Considerations in the Selection of Data
  5.2 General Analysis Techniques & AIDA
  5.3 The Fitting Routine
  5.4 New Basic Cuts Common to the 3-γ and 4-γ Analyses
  5.5 Analysis of the 3-γ Data
    5.5.1 Event Reconstruction
    5.5.2 The Principal Channels Contributing to the 3-γ Events
    5.5.3 Suppression of Backgrounds
    5.5.4 Fitting the Data with the Monte Carlo
    5.5.5 Analysis of Systematic Errors
  5.6 Analysis of the 4-γ Data
    5.6.1 Event Reconstruction
    5.6.2 The 4-γ Analysis and Data Reduction
    5.6.3 Fitting the Data with the Monte Carlo in the 4-γ Analysis
    5.6.4 Estimation of Systematic Errors
6 Discussion and Conclusions.
  6.1 A Critique of E811-II
  6.2 Conclusions
  6.3 Closing Remarks
Bibliography

List of Tables

2.1 Theoretical and experimental branching ratios
2.2 Theoretical and experimental asymmetries
3.1 Properties of the beam line elements
3.2 Typical event rates
4.1 Processes considered for K⁻p decays occurring at rest
4.2 Processes considered for K⁻p inflight interactions
4.3 Resolution at 129.4 MeV using various forms to apply the nonlinear corrections
4.4 Pile-up in random crystals at low energy
5.1 Data taken during the short target running period
5.2 Inflight contribution as a function of ENS4 and degrader thickness

List of Figures

1.1 Endpoint energy spectrum for the process K⁻ + p → Λ + γ
2.1 Nonleptonic and radiative hyperon decays
2.2 Some observed meson and baryon multiplets and their quark assignments
2.3 Quark currents in the decay Λ → p + π⁻
2.4 Penguin diagram
2.5 Some gluon radiative correction terms
2.6 Contributions from the quark sea
2.7 Pole diagrams
2.8 Common quark transition mechanisms
2.9 Processes considered for nonleptonic decays
3.1 The AGS accelerator
3.2 The C6/C8 beam line
3.3 Schematic drawing of the apparatus
3.4 Phototube assembly
3.5 Components of the hydrogen target
3.6 Data acquisition flow chart
3.7 NaI analogue signal processing
3.8 NaI hardware sums
3.9 Beam logic
3.10 Schematic of the charged particle detection logic
3.11 Schematic of small NaI event generation
3.12 Schematic of the generation of events 5, 7, 9 and 10
3.13 Schematic of the handling of event triggers
4.1 Clump configurations
4.2 Fits to the π⁰ opening angle to determine the level of hodoscope smearing required
4.3 Comparison of various non-linear correction techniques
4.4 Fit of Monte Carlo to data for pion runs
4.5 Energy spectra from the K⁻p → Σ±π∓ reactions
4.6 Comparison of the neutron energy spectra
4.7 Response curves of the pile-up amplifiers
4.8 Scatterplot of clump energy: pile-up versus main FERAs
4.9 Fraction of events eliminated by pile-up cuts
4.10 Pile-up in the Crystal Box
4.11 π⁰ energy spectrum when two π⁰s were found
4.12 Opening angle of one π⁰ when two π⁰s were found
5.1 Relative timing between counters S3 and S4
5.2 The placement of the dE/dx cuts
5.3 Kinematic variables describing the 3-γ process
5.4 The main processes contributing to the 3-γ analysis
5.5 The π⁰ energy spectrum when a π⁰-π⁰ pair has been found
5.6 A fit to the π⁰ energy spectrum when a π⁰-π⁰ pair has been found
5.7 The θΛγ spectrum from the Monte Carlo
5.8 The results of the θΛγ cut on the Doppler corrected energy spectrum
5.9 R2 and R5 Monte Carlo spectra with all cuts applied
5.10 The final fit to the data
5.11 The data with all fitted backgrounds subtracted
5.12 Monte Carlo fits to other variables
5.13 The branching ratio as a function of the pile-up cuts
5.14 Kinematic variables describing the 4-γ process
5.15 The Monte Carlo predicted π⁰ goodness of fit test for misassociated photon pairs
5.16 The Doppler corrected energy spectra after basic cuts
5.17 The spectrum for photons in the signal region
5.18 The final fit to the data for the 4-γ analysis
5.19 Monte Carlo predictions of other spectra in the 4-γ analysis

Acknowledgements

The success of this experiment was the result of contributions from about 25 physicists and I would like to express my sincere thanks to each and every one of them. I would like to give special thanks to my supervisor, David Measday, for introducing me to this experiment, for allowing me to gain invaluable experience, and for injecting insight and common sense into the proceedings at appropriate times. I would also like to thank him for the prompt and meticulous reading of this thesis.

I would also like to extend my thanks to the "E811 Hard-Core"¹, Bernd Bassalleck, Dean Larson, and Jim Lowe, for their fanatical devotion to the analysis of E811, for their friendship, for enlightening discussions, and in particular for their willingness to bet beer on hopeless causes. I must also make special mention of Tim Warner, who stood above the other summer students in his eagerness to help. In fact he seemed to enjoy donating skin and blood while wrestling with cables and "hungers".

I would like to make peace with Steve Chan and Chris Stevens in the TRIUMF detector facility. Despite my unrealistic and unreasonable requests they always produced superb work.
Likewise I would like to thank the many staff at BNL who provided the hydrogen cryogenics and aided in installing this experiment while being pressured from all sides by physicists yet to appreciate the difference between in theory and in practice.

I wish to thank my wife Shirley for enduring the Gulag, the months away, and the constant night shifts, and for her support and understanding over the many years of this project.

Finally I would like to dedicate this dissertation to my parents, for their unwavering encouragement, and for nurturing my sense of curiosity and my interest in science.

¹This has nothing to do with the Twin Towers - explain that to Kay, Dean.

Chapter 1

Introduction.

1.1 Historical Perspective.

The first unambiguous observation of Λ particles, by Rochester and Butler in 1947 [1], heralded the beginning of a new era in nuclear and particle physics. These particles exhibited a most perplexing property. They were being produced in prolific quantities, indicative of the presence of strong or electromagnetic processes, yet they decayed with a long lifetime reminiscent of the weak interaction. It was many years before a satisfactory explanation for this peculiar feature was developed, and its solution required the introduction of a new quantum number, aptly named "strangeness". The theory assumed that strangeness could be violated in weak processes, but that it was a conserved quantity in the strong and electromagnetic interactions. In the latter instances it was expected to behave much like the regular electric charge. Hence from strangeness the notion of hypercharge was developed, and baryons with strange content became known as hyperons. The ease with which these particles were produced was explained by the observation that they were produced in pairs with no net change in strangeness. This was coined the principle of associated production.
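The bookkeeping behind associated production can be illustrated with a short script (illustrative only; the quark assignments are the standard ones, and the production reaction chosen is a typical textbook example rather than one from this experiment):

```python
# Strangeness bookkeeping: an s quark carries S = -1, an anti-s quark S = +1.
QUARK_S = {"u": 0, "d": 0, "s": -1, "ubar": 0, "dbar": 0, "sbar": +1}

# Standard valence-quark content of a few hadrons.
CONTENT = {
    "pi-":    ["d", "ubar"],
    "p":      ["u", "u", "d"],
    "K0":     ["d", "sbar"],
    "Lambda": ["u", "d", "s"],
}

def strangeness(particle):
    return sum(QUARK_S[q] for q in CONTENT[particle])

def delta_S(initial, final):
    """Net change in strangeness from the initial to the final state."""
    return sum(map(strangeness, final)) - sum(map(strangeness, initial))

# Associated production: the strange pair appears with Delta S = 0,
# so the reaction may proceed strongly.
print(delta_S(["pi-", "p"], ["K0", "Lambda"]))   # 0

# The decay Lambda -> p + pi- changes strangeness by one unit,
# so only the weak interaction can drive it.
print(delta_S(["Lambda"], ["p", "pi-"]))         # 1
```

The same function applied to any of the capture channels discussed later in this chapter shows them to be strangeness conserving, consistent with their strong-interaction rates.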
The discovery of strangeness led to the development of a new classification scheme based on the existence of a fundamental symmetry existing between the most basic building blocks of nature. It was noticed that the mesons, hyperons, and other baryons could all be neatly classified by using group theoretic techniques, and by assuming that they were composed from a set of hypothetical "quarks", of which there were postulated to be three types, or flavours. The subsequent discovery of the Ω⁻, firmly predicted by these symmetry arguments, was a most impressive and convincing confirmation of quark flavour as an approximate symmetry in nature.

The quarks were initially assumed to be fictitious mathematical particles. However, experiments conducted in the late 1960's were able to observe scattering of electrons off point-like objects within a proton, suggesting that quarks were real physical particles within the nucleon. The importance of these experiments was recently recognized by the awarding of the 1990 Nobel Prize in Physics to Friedman, Kendall and Taylor for their achievements [2,3].

While our understanding of the fundamental interactions between non-strange particles is still far from complete, the hyperons have given us the opportunity to explore new avenues, and to get a different perspective on the many outstanding problems in nuclear and particle physics. In the earliest experiments, a limited supply of hyperons made direct hyperon-nucleon interactions very difficult to study. Hence the nuclear properties of hyperons were investigated through their binding in nuclei. This opened up the rich and active field of hypernuclear physics. In the present age, we have at our disposal intense beams of kaons which may be used to produce hyperons. A new generation of experiments has become feasible, and fresh ideas are being stimulated. A cornucopia of information is being released through the study of hyperons and hypernuclear physics.
Hyperon weak radiative decays are an example of one such area currently under investigation. These decays are weak by virtue of being strangeness non-conserving, and are accompanied by the radiative emission of photons. In addition to these weak and electromagnetic processes, the presence of hadrons implies the existence of strong interaction effects as well. Hence a complete understanding of hyperon weak radiative decay processes requires an insight into the marriage of the weak, electromagnetic and strong interactions. As a result, these processes have been studied theoretically and experimentally, with varying degrees of success, for over 20 years. Despite their relatively low branching ratios, resulting in a paucity of data, they have been particularly attractive to theorists as there are no final state interactions complicating the calculations.

1.2 The E811 Experimental Program.

This thesis reports on a measurement of the branching ratio for the weak radiative decay process;

    Λ → n + γ    (1.1)

Prior to our measurement, this decay process had been observed, with poor statistical accuracy, in just one other experiment, which obtained a branching ratio of (1.02 ± 0.33) × 10⁻³ [4]. Our goal was to design an experiment capable of making a substantial improvement in the accuracy and precision of this branching ratio.

This measurement represented the culmination of an ambitious program to study a number of relatively rare processes which followed the capture of negative kaons on hydrogen and deuterium nuclei. The second half of this program, E811-II, was dedicated to studying the Λ weak radiative decay process. In the first phase of the experiment, hereafter referred to as E811-I, the following branching ratios were determined;

1. Radiative kaon capture on hydrogen: [5,6]

    K⁻ + p → Λ + γ     BR = (0.86 ± …) × 10⁻³    (1.2)
    K⁻ + p → Σ⁰ + γ    BR = (1.44 ± 0.23) × 10⁻³    (1.3)

2. Radiative kaon capture on deuterium: [7,8]

    K⁻ + d → Λ + n + γ    BR = (1.94 ± 0.12 ± 0.28) × 10⁻³    (1.4)

where the errors given are the statistical and systematic uncertainties respectively.

3. Weak radiative decay of the Σ⁺: [9,10]

    Σ⁺ → p + γ    BR = (1.45 ± …) × 10⁻³    (1.5)

A common feature of all the reactions studied in E811 was the emission of high energy photons. The strength of E811 was its ability to detect these photons with either extremely good energy resolution, or by being able to discern the individual photons of high multiplicity events.

1.2.1 Overview of the Experiment: E811-II.

The purpose of E811-II was to measure the branching ratio of the decay Λ → n + γ. The Λ particles have a mean lifetime of 2.6 × 10⁻¹⁰ seconds, and are neutral. Hence they are not readily transported and must be produced at the core of the detector assembly. Each of the particles Λ, Σ⁰ and K⁻ has strangeness S = −1, so a Λ can be created through the strong or electromagnetic interactions by stopping a negative kaon on a proton in a liquid hydrogen target. Specifically the principal production channels are:

    K⁻ + p → Λ + π⁰            6.7%    (1.6)
    K⁻ + p → Σ⁰ + π⁰          27.3%    (1.7)
              ↪ Λ + γ        100.0%

A detailed account of the mechanism by which the kaons are produced and transported to the experimental area will be presented in a later section.

Our detector was chosen for its ability to discern individual photons in events with multiple decay products. Since a π⁰ decays with a high probability to two
Included in this selection will be events from the decay (1.8), which is the logical choice to provide the normalization. The kinematics, and hence the analysis approach, are quite different depending upon the mode through which the A was produced. Unfortunately, the detector was sensitive to less than 50% of the total solid angle. A consequence of this was that although producing a A via (1.7) appears to have an advantage statistically, this was a significantly diminished effect; the result of having to detect four photons in the final state rather than three. In addition, having only three photons, and a monoenergetic A makes mode (1.6) a much more attractive channel from the point of view of being able to reconstruct the event. Both modes were examined in the analysis. The basic experimental procedure consisted of stopping kaons in a liquid hydrogen target and recording various items of information for each candidate event. For an event to have been considered as a candidate it must have satisfied our hardware trigger. The trigger used can be divided into 3 distinct conditions. These are: • The incoming particle must have been identified as being a kaon, and it should have stopped in the liquid hydrogen. This was accomplished by having a 5  sequence of counters in the beam flight path designed to track the particle's progress and investigate its nature as it proceeded towards the target. • There must have been a sufficient amount of energy deposited in the principal detector, the Crystal Box, a Nal(Tl) calorimeter surrounding the target. To fulfill this requirement a hardware sum of the total energy deposited in the Crystal Box was formed and compared against a lower level threshold. • All particles in the final state must have been neutral. This was realized with an array of scintillators positioned around the target and on the faces of the Crystal Box. 
The information gathered for candidate events was digitized, recorded on magnetic tape, and then filed away for subsequent analysis. One important aspect of the experiment involved calibrating the Crystal Box. A source of monoenergetic photons with an energy comparable to those from A WRD came from stopping pions on protons through radiative capture; TT~+p  -*  n+  40%  7  (1.10)  with E-y = 129.4 MeV. Other photons are also produced by charge exchange of stopping pions; Tv~+p  -»  n + 7T°  60%  «-» 7 + 7  (1.11)  98.5%  but these photons are limited to a maximum of about 83 MeV. To calibrate the Crystal Box the beam line was tuned to a low momentum suitable for stopping pions. Pion calibration data were collected at regular intervals throughout the run. Although E811-II was originally proposed as early as 1985, the period from inception to completion spanned only 2 years, beginning in 1987. 6  Very little of  this time was actually used for data collection, being subject to the availability of high energy protons at the Brookhaven Synchrotron. The course of E811-II can be broken into five distinct phases. These are; Construction. During the months from April 1987 to January 1988 we were actively involved in major construction projects. These included transporting the Crystal Box to BNL; installing it in the experimental hall; installing all the other detectors; assembling and connecting the electronics; and finally, testing all the various components as thoroughly as possible without beam. Engineering R u n . From February through May of 1988 we collected our first data. The primary objectives were to debug the apparatus and to perform a battery of tests designed to optimize the experiment for a high statistics production run. Despite frequently changing conditions we were able to acquire a reasonable amount of useful data. Modifications. 
Based on experiences gained during the engineering run, and from preliminary analyses thereafter, we used the period from June 1988 to January 1989 to make various hardware and software upgrades. The modifications improved the collection efficiency and data quality during the subsequent running period. Production R u n . With the experiment optimized we were now in full data taking mode and from January to April 1989 we concentrated on steady conditions, quality data and high statistics. Data Analysis. At the completion of a data taking run, the tapes were replayed and analyzed. This thesis concerns itself with an analysis of data obtained during the engineering run. The second data set, from the production run, is being analyzed independently at the University of New Mexico.  1.2.2  Overview o f E811-I.  While the apparatus, and much of the scientific motivation, was quite different for the first phase of E811, it is worth giving a brief description of these experiments since many of the lessons learned had direct applications to the second phase. The radiative capture measurements were very simple conceptually. Negative kaons were stopped in a liquid hydrogen or deuterium target and the resulting energy spectrum was measured by a single detector sitting to one side of the target. The detector was the Boston University Nal crystal (BUNI). It was designed to have the best energy resolution of any Nal crystal operating near our energies, and it achieved that goal. This excellent resolution allowed the monoenergetic capture peaks and the 7r° decay photons to be resolved. The latter were produced at much higher rates through the strong interaction processes listed below; K'+p  A + TT°  (1.12)  K'+p  -*  E ° + 7r°  (1.13)  K'+d  -•  A + n + 7T°  (1.14)  The photons from the reaction K~ + p —• A + 7 are the highest in energy and have a unique energy, and these properties may be used to identify the channel. 
The endpoint of the K⁻ + p single photon energy spectrum is shown in figure 1.1. The resulting good fit required that the kinematics of the K⁻ + p system be well understood, and that all backgrounds be at an identifiable and manageable level. This was very instructive when designing the experimental setup for E811-II.

The Σ⁺ weak radiative decay was measured concurrently with the radiative capture processes. The Σ⁺ was identified through its production channel;

    K⁻ + p → Σ⁺ + π⁻    (1.15)

The monoenergetic π⁻ was tracked through a set of wire chambers positioned above and below the target, and its energy was determined by measuring its range in a stack of scintillators just outside these. The π⁻ direction determined the Σ⁺ direction. The photon was detected in an array of 49 crystals, the "49er", which was the prototype for the Crystal Box.

Figure 1.1: Endpoint energy spectrum for the process K⁻ + p → Λ + γ. The solid line shows the Monte Carlo fit to the data which determined the branching ratio.

Knowing the Σ⁺ and photon directions allowed us to make a Lorentz transformation to the rest frame of the Σ⁺. In this frame, the weak radiative decay photon, and photons from the process;

    Σ⁺ → p + π⁰    (1.16)

are separated by 14 MeV. However the relatively poor resolution of the detector, coupled with a high beam rate, spread the π⁰ background into the signal region. The signal was still extractable, but the measurement was not as precise as had been originally anticipated.

Our experience with the 49er was extremely helpful in estimating the expected performance of the Crystal Box. In addition, being able to track the pions back to their interaction vertex determined many properties of the beam line and measured our ability to predict the vertex location. These were useful ingredients in the design and analysis of E811-II.

1.3 Thesis Outline.
The next chapter, the theory chapter, has three main objectives. These are: (1) to discuss the physics which motivated this experiment; (2) to outline the current theoretical understanding of hyperon weak radiative decay processes, including a representative sample of the many calculational techniques; and (3) to review the experimental status of these processes prior to this measurement.

The third chapter is devoted entirely to a description of the experiment. This includes the running procedure as well as the hardware and software elements.

Most of the basic analysis procedures are discussed in Chapter 4. This includes accounts of the energy, position and timing calibration, along with a demonstration of the effects of pile-up and inflight interactions. This chapter also contains a detailed description of the Monte Carlo routines developed to simulate the data. The problem of neutrons in the Crystal Box, and the analysis of these with respect to the Monte Carlo and data, is also discussed.

Having laid the groundwork in Chapter 4, Chapter 5 concerns itself with the specific analysis of the data with a view to extracting the branching ratio. The fitting procedure and the error analysis are also contained in this chapter.

The conclusions and discussion are reserved for Chapter 6, in which our results are compared with the theoretical predictions and the previous measurement. Finally, drawing on our experience, a critique of E811 is presented, with suggestions towards improving future measurements.

Chapter 2

Theory and Motivation.

2.1 Introduction.

Particle physics involves the study of the interactions and attributes of the most fundamental constituents of matter. Currently there is a model in place, called the Standard Model, which is an elegant description of the physics of the strong, electromagnetic and weak interactions.
This model has triumphed both experimentally and theoretically, with no present experimental results appearing to be incompatible with it. While its successes have been great, and it will most likely remain the particle physics theory at energy scales below ≈100 GeV, there is widespread belief that it is incomplete. Such arguments are based on a conception of the ideal theory as one which describes all interactions in one highly symmetric, unified framework, with a minimal number of fundamental particles and the constants associated with them. Such a theory should be able to explain the origin of various empirically observed phenomena which are not precluded by the Standard Model, but for which there is little or no understanding. Hence experiments are continuously probing the very essence of the model, searching for indications of new physics, accumulating new data, and attaining an ever clearer view of particle physics.

The nonleptonic part of weak interactions, (i.e. weak interactions between hadrons only), may be investigated by studying various weak decay modes of strange baryons, the hyperons. In principle weak radiative decays of the form;

    B_i → B_f + γ
Nonetheless it was the hope of isolating the nonleptonic weak interaction vertex which originally motivated studies of the weak radiative decays of hyperons. These decays have also generated theoretical interest as they are among the simplest processes combining facets of the weak, electromagnetic, and strong interaction physics. It is essential that these decays are fully understood if we are to have any success and confidence in calculations of more complex systems. Some of the present theoretical predictions are in qualitative agreement with the available data. Of the six experimentally accessible branching ratios only the decay E  +  —* p + 7 is known to better than ~ 2 0 % . Better experimental data on 13  other decay modes will tighten the constraints on free parameters in the theories. This will permit a more critical appraisal of the various theoretical approaches, and of the processes which they consider. The following section contains a brief overview of the Standard Model which will lay the foundations for a discussion on quark models and issues in hyperon decays. Then in section 2.5 the different theoretical techniques and results will be presented, and in section 2.6 the current experimental results will be compared with representative theoretical predictions.  2.2  The Standard Model, a Particle Physics Overview.  In the Standard Model, the most basic, fundamental, constituents of matter are the leptons and quarks. These are half-integral spin objects, fermions, and to the limit of current experimental capabilities, appear to be pointlike. The leptons and quarks interact with one another by exchanging various integral spin particles, bosons. There are 6 flavours of quark, grouped into three families;  Each quark on the top row has charge | while the rest have charge — | . The top quark has not been found experimentally yet. The masses of the quarks increase from left to right. 
Matter all around us is formed from the up and down quarks, since particles with quark content from the other two families are unstable to decay. Some transitions between different quark types (flavours) are allowed in weak interactions. These flavour changing transitions may take place between quarks of different charge (flavour changing charged currents) by an exchange of W⁺ or W⁻ bosons. In the naive model, such transitions are only allowed between family members. However, transitions between different families do occur due to a mixing of the d, s, and b mass states. On the other hand, the flavour changing neutral currents (with Z⁰ exchange) are forbidden in the Standard Model.

There is an equivalent family structure present in the lepton sector:

\begin{pmatrix} \nu_e \\ e \end{pmatrix} \quad \begin{pmatrix} \nu_\mu \\ \mu \end{pmatrix} \quad \begin{pmatrix} \nu_\tau \\ \tau \end{pmatrix}

where e, μ, and τ each have charge −1 and their associated neutrinos are neutral. There is no evidence of any mixing between families in the lepton sector. This is easily understood if the neutrinos are massless, although this condition is not required by the Standard Model.

There are four known forces. Of these, the gravitational force is so weak relative to the other forces that it is only a very minor factor at the microscopic level with presently attainable, or foreseeable, lab energies. The three interactions important on the microscopic scale are the weak, the electromagnetic and the strong. The strong force is a powerful, short range force between quarks. It is responsible for holding the quarks together within a hadron, and for describing the nature of the hadron-hadron interactions. The electromagnetic force is long range and accounts for the atomic, molecular and other extranuclear phenomena observed in nature. Prototypical weak interactions are the very slow β-decays observed in various nuclei. The cleanest examples of weak interactions are the neutrino-electron scattering processes. Other examples are muon decay and strangeness changing s-quark decays.
Although these three interactions appear in nature as three disparate forces, there is good reason to believe that they may actually be different aspects of a common unified theory. There is already a theory describing an electroweak unification, and its authenticity has been confirmed in numerous experiments. In electroweak theory the field quanta are the massless photon and the massive W⁺, W⁻ and Z⁰. In the unified theory the weak and electromagnetic interactions have the same coupling, but the effective coupling in weak interactions is much smaller, the result of having massive mediators rather than the massless photon.

Symmetry principles have been paramount in the development of Standard Model theories. Each interaction is associated with a special unitary symmetry group, with U(1), SU(2), and SU(3) representing the electromagnetic, the weak and the strong parts respectively. That each interaction can be described using SU(n) symmetries is one example of the similarities of the underlying theories. The most important symmetry imposed upon the theories is that of local gauge invariance. Gauge invariance requires that the laws of physics remain the same at all locations in space regardless of arbitrary phase conventions chosen at each location. The field which conveys the convention chosen by nature is known as the gauge field.

Applying the principle of local gauge invariance is very powerful. For instance, requiring local gauge invariance of the free electron Lagrangian leads to an interacting Lagrangian with a massless gauge particle, the photon, describing the electromagnetic interactions. Applying local gauge invariance to weak interactions forced the unification of the weak and electromagnetic interactions. However, in the process it generated a Lagrangian with 4 massless vector bosons. Symmetry breaking is the mechanism which leads to the acquisition of mass by the vector bosons.
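As a numerical aside (a sketch using representative modern values, g ≈ 0.65 and M_W ≈ 80.4 GeV, which are assumptions not taken from this chapter), the tree-level matching G_F/√2 = g²/(8M_W²) illustrates the point above: a weak coupling of roughly electromagnetic strength still yields a tiny effective four-fermion constant once the heavy W mass enters.

```python
import math

# Tree-level matching of the effective Fermi constant to the gauge coupling:
#   G_F / sqrt(2) = g^2 / (8 M_W^2)
# Assumed reference values (not from this thesis): g ~ 0.65, M_W ~ 80.4 GeV.
g = 0.653          # SU(2) gauge coupling
M_W = 80.4         # W boson mass in GeV

G_F = math.sqrt(2) * g**2 / (8 * M_W**2)   # effective coupling in GeV^-2
print(f"G_F ~ {G_F:.3e} GeV^-2")           # compare to the measured 1.166e-5 GeV^-2
```

The smallness of G_F is thus entirely due to the 1/M_W² suppression, not to a small coupling.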
It was not enough to simply add mass terms to the interaction Lagrangian, as this led to divergent integrals which could not be removed by the process of renormalization. The widely accepted symmetry breaking method is the minimal Higgs mechanism, which requires the addition of a scalar isospin doublet to the electroweak Lagrangian. Transforming the field variables hides the symmetry of the original Lagrangian, and perturbing about the new ground state allows the vector bosons to become massive. However, such a mechanism predicts the existence of a physical scalar particle, the Higgs. The mass of the Higgs is not predicted by the theory and it has, so far, eluded discovery by experimentalists. The Higgs mechanism can also be used to generate masses for the fermions, however only in terms of arbitrary parameters. The empirical masses must be put in by hand, an unsavory solution to the aesthetically minded. The Higgs sector is the least well understood part of the electroweak theory.

The theory for electromagnetic interactions, QED (quantum electrodynamics), has proven to be extremely successful. It has been thoroughly checked and, so far, has shown no deviations from expected values at levels as low as 1 part in 10⁹. QCD (quantum chromodynamics), the theory for strong interactions, has been patterned after QED, and appears to be the correct theory. The mediators of the strong force between quarks are 8 massless vector bosons, the gluons.

Hadrons are made up of quarks. The baryons are formed from 3 quarks, and the mesons from quark-antiquark pairs. Each quark may come in one of 3 colours, additional quantum numbers which were required to maintain a totally antisymmetric wavefunction for states like the Δ⁺⁺, which would otherwise have 3 identical up quarks, all in an s-state, violating Fermi statistics. Leptons do not have colour and so do not partake in the strong interaction.
All observable hadrons are colour neutral, but the gluons carry colour, and hence may interact with one another. This has very important and far reaching consequences. It makes the strong coupling constant α_s get larger at large distances, or small momentum transfers. It is believed, though not proven, that such a condition may force the quarks to be confined within hadrons, explaining the experimental null result in free quark searches. Also, as the coupling strength α_s is of order 1, only high momentum transfer interactions can be calculated perturbatively, where α_s has become sufficiently small. At low q², in the nonperturbative regime, one must rely on models and numerical simulations. Such is the case for the hyperon decays discussed in this dissertation.

2.3 Quark Models.

An understanding of hadronic weak interactions should develop from insights into the structure of hadrons and the properties of their constituent quarks. In principle QCD theory provides this knowledge, but as hadron structure is dominated by low q² effects, perturbative calculations cannot be made. Hence one must rely on various models or Monte Carlo calculations.

Any discussion of quark models begins with the constituent quark model. In this model no assumption is made as to the nature of the confining potential. The hadrons are built from quarks, combined as qqq for baryons, and as q q̄ pairs for mesons. For low mass hadrons (< 3 GeV) it is sufficient to consider only the u, d and s quark flavours. These are arranged to form the basis of the SU(3)_f group, of which the mesons and baryons are irreducible representations. The f denotes flavour, to distinguish it from the SU(3)_c colour group.
Using SU(3)_f and group theory techniques, the baryons can be broken down into multiplets,

3 \otimes 3 \otimes 3 = 10 \oplus 8_S \oplus 8_A \oplus 1   (2.1)

and the mesons into

3 \otimes \bar 3 = 8 \oplus 1   (2.2)

Indeed the known mesons and baryons are found to occupy corresponding locations in octets and decuplets, the most familiar of which are shown in figure 2.2.

Figure 2.2: Some observed meson and baryon multiplets and their quark assignments. The third component of isospin, I₃, is plotted along the horizontal axis, while the vertical axis represents the particle hypercharge, Y = B + S. The strangeness S is by convention −1 for strange quarks, and the baryon number B is 1/3 for all quarks. Antiquarks have the opposite quantum numbers.

As each quark has two possible spin orientations, spin may be incorporated through the SU(2) group, which when combined with SU(3)_f yields the standard SU(6) quark model. The SU(6) combinations lead to 216 possible baryon states which may be arranged into irreducible multiplets through the decomposition shown schematically as

6 \otimes 6 \otimes 6 = 20 \oplus 70 \oplus 70 \oplus 56   (2.3)

The low mass states in the 56-plet are the J^P = 1/2⁺ octet states and 3/2⁺ decuplet states with zero orbital angular momentum. The spin states may be isolated and expressed as

56 = {}^{4}10 \oplus {}^{2}8   (2.4)

where the superscript refers to the 2S+1 possible spin orientations of triplet and singlet states. Likewise the 70-plet decomposes as

70 = {}^{2}10 \oplus {}^{4}8 \oplus {}^{2}8 \oplus {}^{2}1   (2.5)

with the low mass states being the l = 1, J^P = 1/2⁻ and 3/2⁻ baryon states. Other orbital angular momentum excitations form different J^P states in each of the 56- and 70-plets, but for our purposes only the lowest level J^P = 1/2⁺, 3/2⁺ 56-plet and 1/2⁻, 3/2⁻ 70-plet states will be important.
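The dimension counting behind the decompositions above can be verified mechanically (a trivial bookkeeping sketch, not part of the thesis):

```python
# Check that the SU(3)_f and SU(6) multiplet dimensions quoted in
# eqs. (2.1)-(2.5) account for every product state.

# 3 x 3 x 3 quark flavour combinations -> 27 baryon flavour states
flavour_states = 3 ** 3
assert flavour_states == 10 + 8 + 8 + 1          # 10 + 8_S + 8_A + 1

# meson quark-antiquark combinations: 3 x 3bar -> 9 = 8 + 1
assert 3 * 3 == 8 + 1

# SU(6) (flavour x spin): 6^3 = 216 states in 20 + 70 + 70 + 56
spin_flavour_states = 6 ** 3
assert spin_flavour_states == 20 + 70 + 70 + 56

# spin content of the 56-plet: the spin-3/2 decuplet and spin-1/2 octet
assert 4 * 10 + 2 * 8 == 56
# and of the 70-plet: 2*10 + 4*8 + 2*8 + 2*1
assert 2 * 10 + 4 * 8 + 2 * 8 + 2 * 1 == 70

print("all multiplet dimensions consistent")
```

Each superscript 2S+1 multiplies the flavour dimension, so the spin-flavour content saturates the 56 and 70 exactly.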
The one exception to this will be when calculating decays of the Ω⁻, which is a J^P = 3/2⁺ baryon. For this reason the Ω⁻ is not considered in many theoretical calculations.

To go beyond predicting masses and magnetic moments, one must add more physics to the model. This can be done by considering a potential characteristic of the strong interaction, as is done in nonrelativistic potential models, or by confining the quarks to a finite region of space, as is done in relativistic bag models. These will be described in more detail below.

2.3.1 Nonrelativistic Potential Quark Models.

The most well known nonrelativistic potential quark model was developed by Isgur and Karl [11], and it is their model which will be outlined below. An extensive review of this and other potential models can be found in a report by Hey and Kelly [12]. Potential models assume a specific form for the confining potential, guided by observed features of QCD. Isgur and Karl use a Hamiltonian

H = H_0 + H_{hf}   (2.6)

where H₀ contains a long range universal confining potential which is spin independent. H_hf is treated as a perturbation about H₀ and represents "hyperfine" splitting terms which arise from a spin-spin interaction. To get mass splittings within SU(6) multiplets they allow the flavour symmetry to be broken by taking the quark masses to be

m_u = m_d \neq m_s   (2.7)

For H₀ they assume a harmonic oscillator potential,

H_0 = \sum_{i=1}^{3} \left( m_i + \frac{p_i^2}{2m_i} \right) + \sum_{i<j} \frac{1}{2} K r_{ij}^2   (2.8)

where K is the harmonic oscillator spring constant, and i and j are quark flavour indices. For H_hf they assume the short range part is spin dependent and follow the lead of de Rujula et al. [13], who conjecture that the spin dependent terms arise from one gluon exchange diagrams. In analogy with the magnetic-dipole-magnetic-dipole interactions of atomic spectroscopy they take the spin dependent terms to be in the form of the Breit interaction Hamiltonian [14].
Isgur and Karl drop the spin-orbit terms and only consider the hyperfine part,

H_{hf} = \sum_{i<j} \frac{2\alpha_s'}{3 m_i m_j} \left[ \frac{8\pi}{3}\, \delta^3(\vec r_{ij})\, \vec S_i \cdot \vec S_j + \frac{1}{r_{ij}^3} \left( 3\, \vec S_i \cdot \hat r_{ij}\; \vec S_j \cdot \hat r_{ij} - \vec S_i \cdot \vec S_j \right) \right]   (2.9)

where S⃗_i is the spin of the ith quark and r_ij the quark pair separation. The parameter α′_s would be the strong quark-gluon coupling constant had the spin-orbit terms been included. Hence there are effectively four parameters in the theory: m_u, m_s, K and α′_s.

The first term in H_hf, known as the contact term, operates only on quark pairs in a relative s-wave state, while the second term, called the tensor term, vanishes for s-wave pairs and acts on p-wave pairs. Hence the negative parity states, with 2 quarks in an l = 1 state and the third in a relative l = 0 state, provide a particularly sensitive test of their model. The model has been remarkably successful, which seems to indicate that the spin-orbit forces are not that important, even though they appear explicitly in the Breit interaction. Since the model has been so successful phenomenologically, it has been suggested that the absence of spin-orbit forces might be due to some real physical processes [15]. A possible partial explanation has been put forth which suggests that the long-range confining force may produce an effective spin-orbit term in H₀ of opposite sign to the omitted terms in H_hf through a Thomas precession.

2.3.2 Bag Models.

The simplest version of the bag model was developed in pioneering work by Bogolioubov [16]. Later developments have mostly centred around the MIT bag model [17] and its extensions. For a complete review of bag models see the article by Thomas [15]. In the model by Bogolioubov, free quarks were artificially confined to a volume in space by introducing an infinite square potential well. The quarks were assumed to be point-like and essentially massless, the latter quality giving rise to the relativistic nature of the model. These early assumptions led to many problems, and the bag model has evolved as solutions were developed.
These are discussed below.

• The following correction terms were found to be required by the MIT bag model to get successful phenomenological fits of the baryon masses, magnetic moments, rms charge radius and the axial coupling parameter g_A.

1. In the original bag models energy-momentum was not conserved at the bag walls. Quarks exert a force on the bag wall, but there is nothing to push back. Hence a positive energy density term, a bag pressure, had to be introduced into the Lagrangian. This energy-density parameter B is assumed to be a constant for all hadrons.

2. To break the degeneracy between hyperon and nucleon masses a small mass had to be given to the strange quark. This mass is a free parameter of the model, with the best fits suggesting a value of m_s ≈ 300 MeV.

3. In the naive bag model the N and Δ are degenerate in mass. To break this symmetry a spin-spin hyperfine interaction term had to be added. This was postulated to arise from one gluon exchange terms. This introduced another parameter to the fit, the strong coupling constant α_s.

4. In quantum field theory there are infinite renormalization terms which arise from the vacuum energy. However, in a confined bag these terms will be finite, though incalculable. Hence a final correction term Z is introduced as a parameter to account for this zero-point energy.

• The MIT bag model is not chirally symmetric as, when quarks reflect from the bag wall, their helicity is flipped. Hence left- and right-handed parts of the theory get mixed. However, chiral symmetry is believed to be a reasonably good hadron symmetry. As a result, chirally symmetric extensions of the MIT bag model have been developed. Of these the cloudy bag model is the most successful. In the cloudy bag model the original MIT bag model Lagrangian is forced to be chirally symmetric through the addition of new terms.
These terms give rise to a massless Goldstone boson, which is associated with the massive pion through symmetry breaking, and is consistent with the notion of the partially conserved axial current (PCAC). This forces the inclusion of pion fields into the model. Hence the picture of the cloudy bag model is one with three free quarks and a sea of q q̄ pairs in and around the bag.

• Another concern with the MIT bag was the radius, which at about 1.0 fm suggested considerable overlapping of adjacent bags for the nucleons inside the nucleus. With such overlaps it was hard to conceive how the independent particle shell model of the nucleus could be successful. The cloudy bag model was an improvement, as the pion fields outside the bag decreased its radius.

2.4 Hyperon Decays.

2.4.1 Classification.

In standard texts on the subject, the hyperon decays are considered to belong to one of two classes. According to this classification they are either semileptonic or nonleptonic in nature. Examples of the former are:

Σ⁻ → Λ + e⁻ + ν̄_e
Λ → p + μ⁻ + ν̄_μ
Ω⁻ → Ξ⁰ + e⁻ + ν̄_e

These are well described in terms of a hadronic current, which may or may not be strangeness changing, and a leptonic current. Examples of nonleptonic decays are:

Λ → n + π⁰
Λ → p + π⁻
Σ⁺ → p + π⁰

These decays have proven to be much more difficult to calculate. They can be described in a current formulation only by considering individual quark lines. To this classification scheme we may add the weak radiative decays, common examples of which are:

Λ → n + γ
Σ⁺ → p + γ
Ξ⁻ → Σ⁻ + γ

In the following sections the dynamics of hyperon decays will be outlined, illustrating the basic problems which have historically hampered an understanding of nonleptonic and weak radiative decays. These problems have motivated much of the theoretical interest in this field.

2.4.2 Relative Phase and Magnitude of S- and P-Waves.
The most general Lorentz invariant transition matrix for the radiative process

B_i(P_i) \to B_f(P_f) + \gamma(k)   (2.10)

is given by

M = \bar U(P_f) \left[ (a + b\gamma_5)\, \sigma_{\mu\nu} k^\nu \epsilon^\mu \right] U(P_i)   (2.11)

In this expression ε is the photon polarization, and a and b are the parity-conserving (p-wave) and parity-violating (s-wave) amplitudes respectively. For initially polarized hyperons there will be an asymmetry in the decay described by the parameter α. The differential decay rate is given by

\frac{d\Gamma}{d\cos\theta} = \frac{1}{2\pi} \left( \frac{m_i^2 - m_f^2}{2 m_i} \right)^3 \left( |a|^2 + |b|^2 \right) (1 + \alpha \cos\theta)   (2.12)

where θ is the angle between the outgoing particle and the initial spin direction. The experimentally determinable asymmetry is

\alpha = \frac{2\,\Re e\,(a^* b)}{|a|^2 + |b|^2}   (2.13)

The decay rate is given by an integration over cos θ as

\Gamma = \frac{1}{\pi} \left( \frac{m_i^2 - m_f^2}{2 m_i} \right)^3 \left( |a|^2 + |b|^2 \right)   (2.14)

The asymmetry for the decay Σ⁺ → p + γ has the measured value α = −0.83 ± 0.12 [18]. Such a large asymmetry has confronted the theories, as in the limit of exact SU(3)_f a theorem by Hara [19] predicts α to be zero. The Hara theorem assumes CP invariance, left-handed currents, and a specific current-current form of the weak and electromagnetic interactions. The last ingredients essentially ensure that the transitions will observe the |ΔI| = 1/2 rule. With these assumptions he was able to show that b_if = −b_fi for any initial or final state baryons. In the limit of SU(3)_f, U-spin doublets are degenerate, hence i = f and the parity violating amplitude, and α, must vanish.

Of course SU(3)_f is not an exact symmetry. However, other experimental results, for instance the good agreement of the hadron masses with SU(3) predictions, indicate that the symmetry breaking parameter λ is small, and according to the Ademollo-Gatto theorem [20], deviations in the amplitudes from the SU(3) limit are expected to be of order λ².
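The kinematics of eqs. (2.12)-(2.14) can be sketched numerically (the amplitudes a and b below are purely illustrative, not fitted values): compute the asymmetry of eq. (2.13) and check that integrating the angular distribution (2.12) over cos θ reproduces the total rate (2.14).

```python
import numpy as np

def asymmetry(a, b):
    # alpha = 2 Re(a* b) / (|a|^2 + |b|^2), eq. (2.13)
    return 2.0 * (np.conj(a) * b).real / (abs(a)**2 + abs(b)**2)

def total_rate(a, b, m_i, m_f):
    # Gamma = (1/pi) ((m_i^2 - m_f^2)/(2 m_i))^3 (|a|^2 + |b|^2), eq. (2.14)
    k = (m_i**2 - m_f**2) / (2.0 * m_i)      # photon momentum
    return k**3 * (abs(a)**2 + abs(b)**2) / np.pi

a, b = 1.0 + 0j, -0.5 + 0j                   # hypothetical p- and s-wave amplitudes
m_i, m_f = 1.1157, 0.9396                    # Lambda and neutron masses (GeV)
alpha = asymmetry(a, b)                       # -0.8 for these amplitudes

# trapezoidal integration of dGamma/dcos(theta), eq. (2.12), over [-1, 1]
c = np.linspace(-1.0, 1.0, 2001)
k = (m_i**2 - m_f**2) / (2.0 * m_i)
dG = k**3 * (abs(a)**2 + abs(b)**2) * (1.0 + alpha * c) / (2.0 * np.pi)
integral = np.sum(0.5 * (dG[1:] + dG[:-1]) * np.diff(c))

assert abs(integral - total_rate(a, b, m_i, m_f)) < 1e-12
```

Since the α cos θ term integrates to zero over the symmetric interval, the asymmetry affects only the angular shape, not the total rate, which is why Γ in eq. (2.14) depends only on |a|² + |b|².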
Traditionally, the assumptions of the Hara theorem were believed to be valid, and theories have struggled to accommodate such large asymmetries. However, recent results from quark model calculations seem to indicate that the Hara theorem is still violated in the limit of SU(3). It remains an open question in the literature as to which of Hara's assumptions may be invalid.

Semileptonic hyperon decays are quite well understood. Parameters which can be related to differences between the parity-violating and parity-conserving amplitudes may be determined in such decays. The normal choice is to evaluate D and F, which are independent couplings of the parity-conserving part of the weak Hamiltonian. Numerical values for F and D are normally taken from fits to known semileptonic decays, as the fits are consistent between the various channels and in reasonable agreement with quark model calculations. As F and D are frequently used to parameterize the parity-conserving amplitudes, their origin is outlined below.

Semileptonic decays of the form B → B′ + l + ν̄_l are described by the most general amplitude [21]

M = \frac{G}{\sqrt 2} \binom{\cos\theta_c}{\sin\theta_c} \bar U(B') \left[ f_1 \gamma_\lambda + \frac{i f_2 \sigma_{\lambda\nu} q^\nu}{m_B} - g_1 \gamma_\lambda \gamma_5 - \frac{g_3 q_\lambda \gamma_5}{m_B} \right] U(B)\; \bar u_l \gamma^\lambda (1 - \gamma_5) v_\nu   (2.15)

In this expression ū_l γ^λ(1−γ₅)v_ν is the leptonic current, and cos θ_c or sin θ_c represent |ΔS| = 0 or Cabibbo suppressed |ΔS| = 1 transitions respectively.

The terms f₁, f₂, g₁, and g₃ are form factors and are in general a function of the momentum transfer q². Factors f₂ and g₃ are known to be small and can in most cases be either ignored or corrected for.
In this case the hadronic portion becomes the more familiar

M_\lambda = \binom{\cos\theta_c}{\sin\theta_c} \bar U(B') \left[ C_V \gamma_\lambda - C_A \gamma_\lambda \gamma_5 \right] U(B)   (2.16)

In the framework of the quark model it is convenient to describe the baryons in terms of their octet as the matrix

B = \begin{pmatrix} \frac{\Sigma^0}{\sqrt 2} + \frac{\Lambda}{\sqrt 6} & \Sigma^+ & p \\ \Sigma^- & -\frac{\Sigma^0}{\sqrt 2} + \frac{\Lambda}{\sqrt 6} & n \\ \Xi^- & \Xi^0 & -\frac{2\Lambda}{\sqrt 6} \end{pmatrix}   (2.17)

Then we can write, in current algebra notation,

M_\lambda^{\Delta S = 0} = (\cos\theta_c)\, \langle B' |\, (j_1 \pm i j_2)_\lambda + (g_1 \pm i g_2)_\lambda\, | B \rangle   (2.18)

M_\lambda^{\Delta S = 1} = (\sin\theta_c)\, \langle B' |\, (j_4 \pm i j_5)_\lambda + (g_4 \pm i g_5)_\lambda\, | B \rangle   (2.19)

The currents j₁ ± ij₂ are associated with the quark current D̄γ_λU and its conjugate, whereas j₄ ± ij₅ are associated with currents like S̄γ_λU. Similar relations hold for the g currents in terms of axial currents, with j → g and γ_λ → γ_λγ₅.

The tensor product of two baryon octets may be written as

8 \otimes 8 = 27 \oplus 10 \oplus \overline{10} \oplus 8_S \oplus 8_A \oplus 1   (2.20)

It can be shown that the currents j_i and g_i may be expressed in terms of linear combinations of the members of 8_S and 8_A. Hence we may write

M_\lambda^{\Delta S = 0} = (\cos\theta_c)\, \bar U(B') \left[ a_{12} \gamma_\lambda - (D s_{12} + F a_{12}) \gamma_\lambda \gamma_5 \right] U(B)   (2.21)

M_\lambda^{\Delta S = 1} = (\sin\theta_c)\, \bar U(B') \left[ a_{13} \gamma_\lambda - (D s_{13} + F a_{13}) \gamma_\lambda \gamma_5 \right] U(B)   (2.22)

where the a and s factors are coefficients related to 8_A and 8_S; they are determined from the matrix B and depend on the transition being considered. D and F are the constants to be determined. Comparing equations 2.21 and 2.22 with 2.16 shows how D and F are related to C_A and C_V within the framework of the quark model.

A similar procedure may be developed when considering pseudoscalar meson emission from a baryon-baryon current. This gives rise to two new constants characterizing these processes. These are usually labeled f and d in analogy with the above.

For nonleptonic decays the quark model predicts D/F = −1. The p-wave data are in reasonable agreement with this; however, a theory dependent interpretation of the s-wave data results in a value D/F = −0.4.
Likewise, if one uses the p-wave magnitudes as normalization, then the s-wave amplitudes are too high by a factor of about 2 [22]. Hence it appears that some mechanism must be invoked in the theories to account for the s-wave discrepancies. Gronau [23] suggested a K* pole term (figure 2.8d), and was able to get sensible fits to both s- and p-wave amplitudes by fitting this contribution. However, Le Yaouanc et al. [22] point out that an explicit calculation of this term results in a value an order of magnitude too small, and that a large K* pole term would introduce a sizeable |ΔI| = 3/2 contribution. This, as we shall see in the next section, is contrary to experimental results. They argue that the inclusion of low lying states from the 70-plet is sufficient to enhance the s-wave contribution to sensible values. Including these states has improved the theoretical situation substantially.

2.4.3 The |ΔI| = 1/2 Rule.

The |ΔI| = 1/2 condition is an empirically observed selection rule which appears to hold relatively well in the weak interactions. Consider figure 2.3, which is one possible Feynman diagram for the nonleptonic decay Λ → p + π⁻. This decay is characterized by ŪS currents, which transform like an isospinor (|ΔI| = 1/2), and by ŪD currents, which transform as a |ΔI| = 1 isovector.

Figure 2.3: Quark currents in the decay Λ → p + π⁻.

Consequently the amplitude
For example, in K° decays to either  TT~TT  or  +  7r°7r°,  the amplitude of the two com-  ponents are found to be in the ratio [21] A(\AI\ — -) ,,' ' = 0.045 ± 0.005 A(\AI\ = |)  (2.23) v  showing a suppression of the | A / | = | component by a factor of about 20. Likewise in A decay, a comparison of the two channels A — > n + ir° and A —> p + 7T indicates [24] that for s-wave amplitudes _  A  )\  A  r  \  fi  =  0.027±0.008  ^(|A/| = |) 29  (2.24  and for the parity conserving portion;  B  ^  A  I  \  =  = 0-030 ± 0.037  (2.25)  again indicating a strong | A / | = | enhancement. From the decomposition (eqn. 2.20) of two octets it can be shown that only the 27 and 8 5 contribute to |A5"| = 1 transitions. As the 8 5 contains only | A / | = | parts, this rule is commonly referred to as octet dominance. The | A / | = I puzzle for K decays appears to have a satisfactory explanation in terms of the so called tadpole diagrams [25]. However this mechanism in not applicable for baryon decays suggesting that there may not be a universal mechanism responsible for the | A J | = | rule. There have been several suggestions put forth as possible mechanisms to enhance the | A / | = | components, but present estimates indicate that these contributions are too small, by about an order of magnitude, to explain the effect. The most likely of these suggestions are based on strong Q C D effects giving rise to a | A / | = | enhancement. These are discussed below. Other exotic suggestions outside the minimal Standard Model have also been suggested as alternatives, the most popular of these being the presence of right handed currents.  • Penguin diagrams [26], an example of which is shown in figure 2.4 only contribute to the | A / | = I amplitude. It has been stressed [27,28] that penguin diagrams may dominate in weak radiative decays of the Q,~. Hence accurate measurements of this decay could determine the importance of penguin diagrams. 
Some support for this claim comes from the relatively large | A J | = | contributions from £l~ nonleptonic decays. It is also worth noting that Langacker and Sathiapalan [29] have investigated the possibility that the | A J | = | component is enhanced by the inclusion of  30  w d  S  u  u  q  q  q  q  Figure 2.4: Penguin diagram.  supersymmetric particles as intermediates in penguin-like graphs. While such contributions depend critically on the supersymmetry breaking parameters it appears that such diagrams could dominate in many such models. • The Pati-Woo theorem [30] states that, in the limit of SU(3)/, the amplitude will be purely octet when the weak interaction is between valence quarks, since the ground state baryons are members of the octet. The asymmetric octet combination 8A of the decomposition (eqn. 2.20) are automatically excluded as only symmetric combinations can combine with the totally asymmetric colour wavefunctions to create an overall asymmetric state. Hence one gets a | A J | = | enhancement from the valence diagrams. • Hard gluon corrections of the form shown in figure 2.5 have been shown to enhance the |Ai"| = \ amplitudes in small amounts [31,32]. Soft gluon, low q , contributions could be very important, however there are no estimates of 2  their amplitude due to the calculational difficulties. • Weak interactions with qq pairs in the quark sea have also been suggested as a 31  Figure 2.5: Some gluon radiative correction terms.  mechanism to enhance the | A J | = | contribution for baryons. Calculations by Donoghue and Golowich [33] show sizable contributions from uu sea quarks. The dominant mechanism is shown in figure 2.6a where the sea quarks are completely internal. In this case the weak current is of the form DS and hence is purely | A / | = | . Contributions from figure 2.6b contain both | A / | = | and | A J | = | components, and estimates show that they are relatively suppressed.  Figure 2.6: Contributions from the quark sea.  
2.5 Hyperon Weak Radiative Decay Theories.

This section discusses the theoretical calculations specific to weak radiative decays. For the purpose of discussion the theories have been subdivided into categories based on the methods used in the calculations. This division is quite arbitrary and there are often significant overlaps between the various techniques. The earliest attempts were based on baryon pole models and were largely unsuccessful until it was realized that the low lying members of the 70-plet should be included. However, it was still possible to use the unitarity principle to set lower limits on branching ratios. With the inclusion of more intermediate states, modern pole models have proven to be quite successful. With the advent of the electroweak theory there have been many calculations at the quark-lepton level, with varying degrees of success. Others have employed current algebra techniques and PCAC, or have argued from symmetry principles alone.

2.5.1 Baryon Pole Models.

Baryon pole models assume that the hyperon decay processes are factorizable into a weak and an electromagnetic part, with some intermediate baryon state acting as a propagator. This allows the class of diagrams shown schematically in figures 2.7a and 2.7b, but excludes those graphs where the photon is emitted from an internal quark line (fig. 2.7c). Close and Rubinstein [34] concluded that contributions from the short distance interactions of figure 2.7c were an order of magnitude smaller than the leading terms of the long distance diagrams, and hence were negligible within the accuracy of most calculations. Also not considered are diagrams like those in figure 2.7d, where a strange vector (axial-vector) meson is emitted with subsequent parity-conserving (parity-violating) decay to a vector meson. These vector mesons then couple to the photon. However, as pointed out earlier, these K* pole terms are believed to be small.

Figure 2.7: Pole diagrams.
Baryon pole models begin with the most general matrix element for the process B_i → B_f + γ, which is written in the form

M_{B_i \to B_f \gamma} = \bar U(B_f) \left[ (a_1 + a_2 \gamma_5)\, \sigma_{\mu\nu} k^\nu \epsilon^\mu \right] U(B_i)   (2.26)

where a₁ and a₂ are the parity-conserving and parity-violating amplitudes and ε is the photon polarization vector. The matrix element for the weak interaction part of figure 2.7a can be expressed as

M_{B_i \to B'} = \bar U(B') (b_1 + b_2 \gamma_5) U(B_i)   (2.27)

The general form for the electromagnetic vertex is given [35] for interactions between J^P = 1/2⁺ states as

M_{B' \to B_f \gamma} = \bar U(B_f) \left[ F_1 \gamma_\mu + F_2\, i \sigma_{\mu\nu} k^\nu \right] \epsilon^\mu U(B')   (2.28)
This leads to the very simple relationship

    M = \bar{U}(B_f)\, F_2\, \frac{b_1}{M_{B_i} - M_{B'}}\, i\sigma_{\mu\nu} k^{\nu} \epsilon^{\mu}\, U(B_i)        (2.33)

Comparing equations 2.26 and 2.32 shows that the total amplitudes a_1 and a_2 may be expressed in terms of the nonleptonic amplitudes b_1 and b_2, the electromagnetic form factor F_2 and the baryon masses. The estimation of these parameters for each intermediate state considered is the last task to perform.

The earliest pole model calculations did not consider the negative-parity states. In a calculation by Farrar [36] the imaginary part of the amplitude, corresponding to on-shell intermediate states, was considered. She then chose for intermediate states the known Nπ states. As her choice dictated that these be on-shell, the amplitudes for this process were readily determined in terms of measured hyperon nonleptonic decays. The amplitudes for the subsequent process Nπ → N′γ were extracted from experimentally known inverse photoproduction amplitudes.

The most important result from this paper was a lower bound on the branching ratio, obtained by imposing a unitarity condition on the S-matrix. Relating the S-matrix to the forward scattering amplitude T and applying the unitarity condition

    \sum_n S^{*}_{nf} S_{ni} = \delta_{fi}        (2.34)

results in a term like [37]

    T_{ii} - T^{*}_{ii} = 2i\,\mathrm{Im}(T) \propto \sigma_{TOT}        (2.35)

Hence by writing the full matrix element M in terms of its imaginary (absorptive) and real (dispersive) parts, a calculation of the imaginary part will give a lower bound on the total amplitude. Farrar calculated a lower limit for the Λ weak radiative decay branching ratio (BR), obtaining:

    BR(\Lambda \to n + \gamma) > 8.5 \times 10^{-4}        (2.36)

Encouraged by a relatively high lower bound, Farrar attempted a full calculation using dispersion relations. In dispersion techniques, the requirement that the S-matrix be mathematically well behaved (analytic) allows one to equate the real part of the scattering amplitude at a particular energy to the imaginary part integrated over all energies.
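The logic of the bound can be made concrete: since the rate is proportional to |M|² = (Re M)² + (Im M)², the absorptive part alone always bounds the rate from below, whatever the dispersive part turns out to be. A toy numerical sketch (the amplitude values are invented for illustration, not Farrar's):

```python
def rate(re_m, im_m):
    # Decay rate is proportional to |M|^2 = (Re M)^2 + (Im M)^2.
    return re_m ** 2 + im_m ** 2

def lower_bound(im_m):
    # Keeping only the absorptive (imaginary) part, which unitarity fixes
    # through on-shell intermediate states, bounds the rate from below.
    return im_m ** 2

# The bound holds for any value of the unknown dispersive (real) part:
for re_m in (-2.0, 0.0, 0.7, 3.1):
    assert rate(re_m, 1.5) >= lower_bound(1.5)
```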
The usual form of the dispersion relation is [37]

    \mathrm{Re}\, f(\omega) = \frac{1}{\pi}\, P \int_{-\infty}^{\infty} \frac{\mathrm{Im}\, f(\omega')}{\omega' - \omega}\, d\omega' + C        (2.37)

where P denotes the principal value of the complex integral with a pole at ω′ = ω. The constant term does not, in general, vanish, but may be suppressed by a procedure of subtractions. If lim_{ω→∞} f(ω) = 0 then the constant will vanish and the process is said to obey an unsubtracted dispersion relation. Such relations were employed by Farrar. One still has the problem of understanding the off-shell behavior of the imaginary parts. To do this Farrar was guided by experience with general scattering problems. Her results are summarized in tables 2.1 and 2.2 at the end of this chapter.

In a modern pole model calculation by Gavela et al. [38], a sum was performed over both positive- and negative-parity low-mass baryons in the 56- and 70-plets respectively. In principle, s-wave transitions may proceed through either 1/2⁺ or 1/2⁻ intermediate states, but the 1/2⁺ transitions are forbidden in the limit of SU(3)_f and were consequently ignored. Likewise the contributions from the 1/2⁻ transitions were omitted from the p-wave amplitudes. This amounts to setting b_2 = 0 in equation 2.32. As not all the masses are that well known, and, in some instances, the mass states are considerably mixed, the masses of the intermediate states were determined by parameterizing them in terms of δm and ω. Here δm is a measure of the mass splitting between the SU(3) isospin multiplets and ω is the mass separation between the l = 0, 1/2⁺ and l = 1, 1/2⁻ baryons.

The nonleptonic amplitudes b_1 for 1/2⁺ transitions were determined from fits to known nonleptonic decay rates. For the 1/2⁻ transitions b_1 cannot be calculated in a model-independent fashion. They related these to the measured nonleptonic decay rates via a harmonic oscillator potential model and used standard values for the free parameters.
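The unsubtracted dispersion relation of equation 2.37 can be checked numerically for a toy amplitude f(ω) = 1/(ω₀ − ω − iη), which is analytic in the upper half plane and vanishes at infinity, so the constant term drops out. A sketch (the grid spacing, cutoff and η are arbitrary illustrative choices, unrelated to any physical amplitude):

```python
import numpy as np

def re_from_im(omega, omega0=2.0, eta=0.5, cutoff=200.0, dw=0.001):
    """Re f(omega) from (1/pi) P-integral of Im f(w')/(w' - omega) dw',
    for the toy amplitude f(w) = 1/(omega0 - w - i*eta)."""
    w = np.arange(-cutoff, cutoff, dw)
    im_f = eta / ((omega0 - w) ** 2 + eta ** 2)
    d = w - omega
    # Principal value: zero out the grid point straddling the pole; the
    # neighbouring contributions cancel pairwise on a symmetric grid.
    safe = np.where(np.abs(d) < dw / 2.0, 1.0, d)
    integrand = np.where(np.abs(d) < dw / 2.0, 0.0, im_f / safe)
    return integrand.sum() * dw / np.pi

# Analytic check: Re f(omega) = (omega0 - omega)/((omega0 - omega)^2 + eta^2)
omega = 1.0
exact = (2.0 - omega) / ((2.0 - omega) ** 2 + 0.25)
assert abs(re_from_im(omega) - exact) < 1e-2
```

The same numerical machinery, applied to measured absorptive parts, is what a dispersive calculation like Farrar's amounts to in practice.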
The electromagnetic form factor for real photons is directly proportional to the baryon anomalous magnetic moment κ,

    F_2(k^2 = 0) = \frac{\kappa_B}{2 M_B}        (2.38)

These are known for most 1/2⁺ baryons, and the others are reliably predicted by SU(6) relations. To calculate F_2 for the 1/2⁻ states, another nonrelativistic harmonic oscillator potential model, developed by Copley et al. [39], was utilized. Using the nonleptonic decay amplitudes and typical values for the quark masses, δm, and ω, allowed them to determine the results listed in tables 2.1 and 2.2.

2.5.2  Quark Level Calculations.

With the development of the electroweak theory, theoretical interest was generated in trying to understand the radiative decays in terms of the baryon constituent quarks. The principal diagrams are the single-, two-, and three-quark transitions depicted in figure 2.8.

Single Quark Transitions.

Early theoretical attempts to understand the radiative decays at the quark level restricted themselves to the single-quark processes. Such a calculation by Gilman and Wise [40] began with the now familiar matrix element of equation 2.26. Employing quark-model wavefunctions they were able to write the partial width as

    \Gamma = \frac{|k|^3\, M_{B_f}}{\pi\, (2J_i + 1)\, M_{B_i}}\; C^2\; \left| (a_1 \pm a_2)\, F(k) \right|^2        (2.39)

where |k|³ M_{B_f} / π(2J_i + 1)M_{B_i} is a kinematic term of the baryon masses and initial baryon spin J_i. The Clebsch-Gordan-like spin-dependent factor C² is determined from the quark model, and details of the weak and electromagnetic vertices are contained in the form factor F(k) and are not dealt with explicitly.

[Figure 2.8: Common quark transition mechanisms: a, b, c) single-quark, d) two-quark, e) three-quark transition diagrams, f) a two-quark transition with internal radiation. All permutations of these graphs must be included.]
They assumed that the products (a_1 ± a_2)F(k) varied slowly as a function of typical photon momenta, and that their magnitude was essentially the same for all ground-state baryons in the initial and final states. This, they argued, was reasonable, as closely related calculations have proven to be successful. They fixed these factors by normalizing to the Σ⁺ → p + γ branching ratio and were then able to make predictions of other branching ratios. Their results (tables 2.1 and 2.2) suggest branching ratios far above established limits, particularly for Λ → n + γ and Ξ⁻ → Σ⁻ + γ. Hence they concluded that single-quark transitions cannot account for the amplitudes and that other transitions need to be considered. Kogan and Shifman [41] echoed these sentiments in a modern dispersion theory calculation, determining that single-quark transitions "cannot play an important role in weak radiative decays".

However, neither the Ω⁻ nor Ξ⁻ decays may proceed via the two- or three-quark transitions, as they have no u quark, and in a calculation by Bergstrom and Singer [42] a large single-quark contribution is predicted for the Ω⁻ → Ξ⁻ + γ decay. This conflicts with the prediction by Eeg [28] of a penguin-dominated Ω⁻ decay mechanism. However, a measurement of the asymmetry in this decay may resolve the issue, as penguin calculations predict a large negative asymmetry, whereas Bergstrom and Singer predict it to be large and positive.

In a totally different approach to the single-quark transitions, Goldman and Escobar [43] estimated the amplitudes in terms of the QCD sum rules. QCD sum rules have the aesthetic value of beginning from the first principles of full perturbative QCD theory. Correction terms arising from an operator expansion into the nonperturbative region have had their values fixed by phenomenological fits to low-lying resonance states. Results from these calculations suggested that earlier calculations of the single-quark process were underestimated by about an order of magnitude. However, their values were still too small to account for the experimental results.
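For scale, the two-body kinematics behind the |k|³ factor of equation 2.39 are simple to evaluate. A sketch (masses in MeV from standard compilations; the Λ lifetime τ ≈ 2.63 × 10⁻¹⁰ s is the standard value, and the branching ratio 1.78 × 10⁻³ is this experiment's result, used here only to convert into a partial width):

```python
HBAR_MEV_S = 6.582e-22  # hbar in MeV*s

def photon_momentum(m_i, m_f):
    # Two-body decay B_i -> B_f + gamma: |k| = (M_i^2 - M_f^2) / (2 M_i)
    return (m_i ** 2 - m_f ** 2) / (2.0 * m_i)

def partial_width(br, lifetime_s):
    # Gamma_partial = BR * Gamma_total, with Gamma_total = hbar / tau
    return br * HBAR_MEV_S / lifetime_s

k = photon_momentum(1115.7, 939.6)            # Lambda -> n + gamma, ~162 MeV/c
gamma_rad = partial_width(1.78e-3, 2.63e-10)  # radiative width in MeV
```

The ~162 MeV photon sets the energy scale that the NaI detector described later must resolve.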
Results from these calculations suggested that earlier calculations of the single-quark process were underestimated by about an order of magnitude. However their values were still too small to account for the experimental results. 40  Two and Three Quark Transitions. The formalism for a complete analysis of the radiative decays in terms of all the diagrams in figure 2.8 was developed by Chong-Huah [44].  He claimed that a  proper analysis should be performed in a fully relativistic model complete with a confinement process.  Hence he used the MIT bag models to describe baryon  states. An important consequence of this paper was the recovery of the baryon pole model results from a quark model calculation, giving further credibility to the phenomenological pole model approach. Normalizing to the branching ratio, his estimate for the S  +  — > p+j asymmetry  was much too small and he surmised that some right-handed currents may need to be invoked.  A subsequent calculation by Gaillard et al. [45] also failed within  the framework of the MIT bag model, but they believed that their results were inconclusive and that the failure may have been due to accidental cancellation. Hence it may be that the MIT bag model is not suitable for such calculations. On the other hand, Picek [46] was able to obtain satisfactory results in a bag model calculation of the parity violating contribution from the J  p  = |  members of the 70-  plet. However only these contributions were considered, rather than a full analysis, so the relevance of the bag model remains an open question. Kamal and Verma [47] made a systematic study of the one-, two-, and threequark transitions. For the two-quark components they were able to determine the parity-conserving and parity-violating amplitudes in terms of £ and an overall scale factor c, where £ is a function of the quark masses, m, — m  i =—  u  2.40  They then parameterized the single quark transitions in terms of two free parameters a and b. 
Although they developed the formalism for the three-quark transitions in terms of one free parameter, they did not consider them further, as the paucity of data made a four-parameter fit nonsensical. At that time only the Σ⁺ → p + γ branching ratio and asymmetry were known, with some upper limits on other decays. They used the Σ⁺ data and took the upper bound from the Ξ⁻ → Σ⁻γ branching ratio. Their expressions were quadratic in the parameters a, b and c, so they obtained two solutions when fits to the data were made. These are presented as solutions A and B in tables 2.1 and 2.2. They concluded that a three-parameter fit gave a consistent description of the data and found no need to include the three-quark transition amplitudes.

As the branching ratio Ξ⁻ → Σ⁻γ has now been measured, and is a factor of about five lower than the upper limit they used, a reanalysis of this method is required. As an example of the sensitivity of the results to this limit, a re-evaluation using the new branching ratio has been calculated by the author. One of the two solutions is shown in tables 2.1 and 2.2 as solution C.

The most recent calculation along these lines was a reanalysis of all the hyperon decays by Verma and Sharma [48]. They again parameterized the single-quark transitions in terms of a and b but obtained a parameter-free description of the two-quark process. They also considered the internal radiation diagram for two-quark processes, figure 2.8f, but found it to be highly suppressed. Contributions from the three-quark process were determined to be small and to contribute mainly to the parity-conserving amplitude.

For the single-quark transitions they tried a variety of approaches to determine a and b. A straightforward estimate of the standard diagrams in figure 2.8a yielded a single-quark contribution many orders of magnitude lower than the two-quark transitions. They then included hard gluon corrections and long-distance QCD effects.
They did not include any such correction terms for the two-quark calculations, or penguins, even though these are known to enhance the amplitudes. Their results showed that there were some contributions from single-quark processes, but that the results were quite dependent on the ratio b/a, which in turn was very sensitive to which effects they included. By and large their predictions were consistent with the available data, and future measurements of decay asymmetries should indicate which of the above processes are important in their model.

2.5.3  The PCAC and Current Algebra Approach.

The electromagnetic vector current is a conserved quantity. This is essentially a statement of conservation of charge. Likewise the vector portion of the hadronic current is believed to be conserved, and this is the basis of the conserved vector current hypothesis. No such condition occurs for the axial current A_μ. Indeed, if such a condition existed there would be a term of the form

    0 = \partial^{\mu} \langle 0 | A_{\mu} | \Pi \rangle = f_{\pi} m_{\pi}^2        (2.41)

where Π is the pion field. This would then require either a massless pion, or f_π = 0, which forbids pion decay. However, the partial conservation of the axial current (PCAC) condition is that in the limit m_π → 0 the axial current is conserved. This is useful for many processes where a zero-mass pion is a suitable approximation. Hence many intermediate energy processes may be considered in this soft-pion limit.

Using PCAC, the amplitude for a nonleptonic decay process B_i → B_f + π^i(q²) may be written as [49]

    R(B_i \to B_f \pi^i(q^2)) = -\frac{i}{f_{\pi}} \langle B_f |\, [F_5^i, H_w(0)]\, | B_i \rangle - \lim_{q \to 0} \frac{i\, q^{\mu}}{f_{\pi}}\, M_{\mu}^i        (2.42)

where in the commutator F_5^i describes an axial "charge"

    F_5^i = -i \int A_0^i\; d^3x        (2.43)

and the superscript i refers to the isospin of the pion. The last expression is zero when taken in the limit unless M_μ^i is singular at q = 0. The term M_μ^i may be expanded as

    M_{\mu}^i = \sum_n \left[ \frac{\langle B_f |\, H_w(0)\, | n \rangle \langle n |\, A_{\mu}^i(0)\, | B_i \rangle}{E_i - E_n}\, \delta^3(\vec{P}_i - \vec{P}_n - \vec{q}) + \cdots \right]        (2.44)

which shows that M_μ^i is only singular for those intermediate states n which are degenerate in mass with either the initial or final state baryon. Hence the last term is either zero, or a sum over single-particle intermediate baryon states.

The commutator term is known from current algebra techniques. In the SU(3) limit it vanishes for p-waves. For s-waves it is the main contributor, although there are some contributions from the pole terms. Hence, by employing PCAC and current algebra, the process B_i → B_f + π may be expressed in terms of a B_i → B_f part, plus calculable pole terms.

Calculations of this nature, combined with fits to the known nonleptonic decay amplitudes, have determined the B_i → B_f parts. Hence a similar treatment for weak radiative decays results in terms which will include the now-fixed hadronic contributions plus additional electromagnetic terms which are easily determined.

This method is effectively demonstrated by Scadron and Visinescu [50]. They fix the weak hadronic parts in terms of phenomenological fits to the nonleptonic decays and are able to successfully describe 14 nonleptonic decay amplitudes, the Ω⁻ and Ξ⁻ decay amplitudes, as well as a number of kaon decay parameters. A particularly pleasing aspect of such an approach was that the p- and s-wave amplitudes were determined to have the correct relative magnitudes, and the Σ⁺ asymmetry was naturally large and negative. This they claimed was due to the inclusion of the 1/2⁻ intermediate states from the 70-plet [51].

They did not investigate other weak radiative decay modes; however, Brown and Paschos [52] have used a similar approach in making estimates of other branching ratios and asymmetry magnitudes. As there are no decays of the form Ξ⁰ → Σ⁰ + π, the amplitude for the equivalent radiative process had to be related to other nonleptonic decays using SU(3) symmetry relations, and was dependent on the values used for the f/d ratio.

2.5.4  The Vector Dominance Symmetry Approach.

A novel approach to predicting weak radiative decay amplitudes was recently put forth by Zenczykowski [53]. The basic idea was to make a connection between the nonleptonic and radiative decays by symmetry arguments. To accomplish this the similarity between the photon and vector meson quantum numbers was exploited. Formally the connection was made by making substitutions of the Lagrangian field variables of the form

    V_{\mu} \;\to\; \frac{e}{f_{VNN}}\, A_{\mu}        (2.45)

Here V_μ and A_μ represent the vector meson and photon fields, and f_VNN is a form factor describing the coupling of the vector meson to the hadron current. The connection between the vector meson sector and the pseudoscalar mesons is then established on the basis of SU(6) and the quark model, noting that both sectors have the same octet structure.

Zenczykowski described the interactions in terms of the so-called spurion language. In this language the process B_i → B_f + π may be conveniently expressed in terms of the process B_i + S → B_f + π, where S is a fictitious massless spurion. This proves to be useful, as choosing S to be either of the SU(3) generators λ_6 or λ_7 guarantees that the process will observe the |ΔI| = 1/2 rule for the parity-violating and parity-conserving amplitudes respectively. By expressing the baryons and mesons in terms of their octet matrices, as in
B y expressing the baryons and mesons in terms of their octet matrices, as in 45  (2.17) he obtained nine possible SU(3) invariant traces J, which are given by [54]; Ji = Tr(SBfMBi),  J = Tr(SBiBfM),  J =  J = Tr(SBiMBf),  J = Tr(SMB Bi),  J = Tr(SBi)Tr(B M),  J = Tr(SB BiM),  J = Tr(SMBiBf),  J =  2  3  s  4  5  7  f  Tr(SB )Tr(BiM), f  8  6  (2.46)  f  9  Tr(SM)Tr(BiB ) f  The parity-violating and parity-conserving amplitudes for the process Bi + S —+ Bf + 7r may be expressed in terms of the sums J2k AkJk and J2k BkJk respectively. For parity-violating amplitudes he calculated the constants Ak for each process in terms of the possible diagrams shown in figure 2.9. He separated the parityviolating Hamiltonian into contributions from longitudinal and transverse currents; H  pv  = x H[  v  L  + xH  pv  T  (2.47)  where the xj_, and X? are scale factors a^^bi... depending on the diagram (a,b or c), under consideration. Quark model predictions give relationships between these scale factors of  = | « r - —a; bj = — bi, = b and CT = c with no CL component.  The Ak were then determined easily in terms of the three parameters a,b and c, with different results for the vector and pseudoscalar mesons. For the parity-conserving amplitudes he chose to revert to the pole model as this has proven to be successful at calculating the parity-conserving amplitudes of nonleptonic decays. He was then able to express the Bk in terms of the ratios f/d and F/D and an overall scale factor C. By a process similar to that for the parityviolating amplitudes, the parity-conserving amplitudes were related to f/d,  F/D,  and C through the BkFor F/D he accepted the value predicted by the well understood semileptonic decays. Experimental nonleptonic decays were used to determine the parameters b, c, f/d and C. The only remaining parameter, a, has been given a theoretical estimate. Hence SU(6) symmetry has allowed the calculation of vector meson am46  a).  B  f  M  b).  B-  B,  5  c).  
M  M B  B  f  M  B,  }  M  Figure 2.9: Processes considered for nonleptonic decays. Here M is the meson emitted in the process, and B,- and B / are the baryons in the initial and final states.  47  plitudes in terms of parameters taken from known nonleptonic and semileptonic results. Finally the photon coupling is symbolically expressed in terms of the isoscalar vector mesons as 7  - >  0 +  J v r ° - ! "  -  (2 48)  where e « 0.35 and the usual mixing of the vector mesons is given by u° — ^/3u> + &  \/2cj). Hence, for example, the amplitudes for the decay A — > n + j may be expressed as a simple sum involving amplitudes of A — > - n + p°, A —* n + o>, and A —* n + u>°, 8  each of which has a symmetry relation with the pseudoscalar nonleptonic decays. His results are presented in tables 2.1 and 2.2 and show that symmetry predictions with all free parameters fixed by nonleptonic decays give a reasonable agreement with the data.  2.6  Summary, of Experimental and Theoretical Results.  We have shown in the preceding section that there have been numerous attempts to calculate weak radiative decays amplitudes, and that given the limited quantity and quality of the experimental results, many are in reasonable agreement with the data. However it is equally clear that there is little consensus on which processes are the most important, or which theoretical methods are the most suitable. Many of these issues could be resolved by a higher quality data set, but these are difficult measurements. Below we summarize the experimental standing of the various decay channels. The earliest experiments were done in the 1960's. These all investigated the process S  +  —• p + 7 by interacting kaons in a hydrogen bubble chamber. Recently  there have been four groups actively making measurements in hyperon weak radiative decays. Only for E  +  —• p + 7 and, with the present experiment A —• n + 7, 48  have any measurements been duplicated. 
Prior to 1985, with the exception of the £  +  decay, only a few upper limits existed. Since then all experimentally feasible  processes have been measured at least once. There has been no information forthcoming on any weak decays of the E ° , let alone weak radiative decays, as there is an allowed electromagnetic transition E ° —> A + 7 which dominates all other processes. The four experimental programs are discussed in more detail below.  The KEK Program. At K E K , in Japan, the branching ratio and asymmetry for the process E —» p + 7 were measured [55]. Their technique was optimized +  towards making the asymmetry part of the experiment. A high energy pion beam was steered onto a liquid hydrogen target and £  +  particles were created  through the associated production reaction; 7T+ + p The charged E and K +  +  £ + + K+  •  (2.49)  kinematics were measured with magnetic spectrome-  ters positioned downstream of the interaction target. These were used to analyze the K  +  and the secondary proton. The photons from the E decay were +  detected in converter and multi wire proportional chamber (MWPC) packages located above and below the beam axis. These were sensitive to photon position, but not energy. They were able to suppress the background from the E  +  —• p + 7T° channel by imposing harsh kinematic constraints on the proton  momentum. The branching ratio was determined to be (1.30 ± 0.15) x 10 , -3  consistent with other measurements. They also confirmed the large negative asymmetry previously observed for this decay with the result a = —0.86±0.14.  The Fermilab Program. The S° has two possible radiative decay channels, to either A+ 7 or E ° + 7. Both of these have been reported on recently by groups working at Fermilab. The analysis in each case differed, but the experimental 49  apparatus was essentially the same. They used a collimated S ° beam created by high energy protons interacting on a lead or tungsten target. 
apparatus was essentially the same. They used a collimated Ξ⁰ beam created by high energy protons interacting on a lead or tungsten target. Any charged particles in the beam were swept away from the experimental area with a large magnet. The Ξ⁰ was then allowed to decay in a long vacuum chamber.

In both experiments the particles in the final state included a Λ. The charged secondaries from the decay Λ → p + π⁻ were momentum analysed using a dipole magnet and a set of MWPCs. The pπ⁻ pair was required to have an invariant mass within 10 MeV of the Λ, and their tracks were required to intersect in the decay region. In both cases the photon was detected at forward angles using a converter and lead-glass array, and the normalization was made to the channel Ξ⁰ → Λ + π⁰. A detailed Monte Carlo was required to understand the detector acceptances and to model the lead-glass response.

One of the main differences between the two measurements was that the process Ξ⁰ → Λ + γ was measured concurrently with an experiment to measure the transition moment μ(Σ⁰–Λ) via the Primakoff effect. Hence they had to contend with a set of intermediate targets which were producing additional background effects that needed to be modeled carefully.

The branching ratio and asymmetry for the decay Ξ⁰ → Λ + γ were deduced on the basis of 116 ± 13 events [56]. The asymmetry, α = 0.43 ± 0.44, was not well determined, and their branching ratio, BR = (1.1 ± 0.2) × 10⁻³, does not agree with any of the current theories at the one standard deviation level.

In the other group, only 85 ± 10 decays of the process Ξ⁰ → Σ⁰ + γ were observed [57]. From these a branching ratio of BR = (3.6 ± 0.4) × 10⁻³ and an asymmetry of α = 0.20 ± 0.32 have been determined. Again the results were inconsistent with all the theoretical models, usually at the three standard deviation level. The results of these experiments are compared against
These were a repeat measurement of E  +  —> p + 7, first  measurements of A —> n + 7 and E ~ —> £ ~ + 7, and an upper limit on the branching ratio for 0~ —+ E ~ + 7. The first three experiments used a C E R N SPS hyperon beam at a mean momentum of 116 G e V / c .  The hyperon beams were momentum dispersed,  so sets of wire chambers, located downstream of the last beam line element, were able to determine the momentum of incident hyperons. A differential Cerenkov counter was used to identify hyperon species. To measure the A —• n + 7 decay process they used an incident beam of 5~ particles. These decay virtually 100% of the time via S  -  —• A + -K~ . The E~  was allowed to decay in a vacuum chamber about 5 m long. The A momenta, and direction were determined from the kinematics of the incident E ~ and the departing 7r~, the latter being momentum analyzed by a magnet and MWPC assembly located downstream of the decay region.  Acceptable triggers re-  quired a E ~ incident on the decay region and a 7r~ track intersecting the E ~ trajectory within the decay volume. Decays A —> p + ir~ were rejected by requiring that there be only one charged particle in the final state. Downstream of the magnetic spectrometer another powerful magnet swept any charged particles away from the detectors. The photons were detected in either a liquid argon detector (LAD), or a leadglass array.  The lead-glass array detected only those photons outside the  solid angle subtended by the L A D . The L A D was centred on the beam axis. The L A D was segmented into a number of different radiators, so that hadron induced events would be distinguishable from photon events by differences in 51  their shower development. The anode collectors were located on strips which crossed the detector alternately horizontally and vertically. This allowed the position of the shower to be determined, and neutrons could be distinguished from photons. 
A significant reduction of the background arising from the process A —> n + 7r° was obtained by requiring that there be exactly one photon and one neutron in the LAD, and that the lead-glass record no events. In order to extract the branching ratio they normalized the number of events to the number of A —>• n + 7T° decays which had one photon go undetected in their apparatus. They had to rely on detailed Monte Carlo simulations of the two channels, including the different acceptances and efficiencies. They did not simulate the shower development in the LAD. They were unable to see a signal per se, but they did find that 42 of the « 13500 events were located in a region predicted by the Monte Carlo to be essentially free from the background channel. Of these they determined 11 to be definite background events, and estimated another 7.3 were likely to be, leaving a total of 23.7 events. Using the Monte Carlo to determine the relative acceptances they produced a branching ratio of BR = (1.02 ± 0.33) x 10~ [4]. 3  They were not sensitive to the asymmetry parameter. In the measurement of the E+ decay, they used a E  +  beam, and much of  the same apparatus. The only real differences between this and the above measurement were the requirements that a proton, rather than a neutron, be detected coincidently with the photon, and that there be only one charged particle in the drift region, the E . They measured a branching ratio of +  (1.27±QJ|) x 1 0  - 3  [58], consistent with all other measurements of this decay.  Likewise the H~ —> E ~ + 7 decay was measured in a very similar fashion. In 52  this case the incident beam was a E~, and the decay £  _  —• rnr~ was used to  tag the E~ decay. They required just the one photon and a neutron in the LAD. With an analysis similar to their other measurements, they found only 9 events and estimated the branching ratio to be (0.23 ± 0.10) x 10  -3  [59].  
In an earlier experiment, investigating many properties of the Ω⁻, this same group had been able to set an upper limit on the branching ratio for the decay Ω⁻ → Ξ⁻ + γ. Their value was BR < 2.2 × 10⁻³ with a 90% confidence level, based on an observed 9 events.

The BNL Program.  We have now measured the two branching ratios Λ → n + γ and Σ⁺ → p + γ. The latter was described earlier in the first chapter, and resulted in a branching ratio consistent with all other recent measurements. This is quite comforting, as the three most recent results have been reached with very different techniques. This indicates that the systematic errors are reasonable. Our technique differed from the rest by virtue of being a stopped kaon experiment. Hence our hyperons were not very energetic, and decayed with photon energies suitable for NaI type measurements. This gave us an improved energy resolution and good event reconstruction ability.

All experimental results, along with the theoretical predictions discussed in the preceding section, have been collected together in tables 2.1 and 2.2.

Table 2.1: Theoretical and experimental branching ratios in units of 10⁻³.

  Theory       Σ⁺→pγ        Λ→nγ        Ξ⁰→Λγ      Ξ⁰→Σ⁰γ     Ξ⁻→Σ⁻γ      Ω⁻→Ξ⁻γ
  [36]         0.34 ± 1.25  1.9 ± 0.8
  [38]         0.92         0.62        3.0        7.2
  [40]         1.24+        22.         4.0        9.1        11.         41.
  [47]A        1.24+        5.97        1.80       1.48       1.2+        0.6
  [47]B        1.24+        1.70        1.36       0.23       1.2+        0.6
  [47]C*       1.24+        2.18        1.37       0.10       0.23+       0.23
  [50]         0.66
  [53]         0.90         3.21        2.06       3.49       0.36
  Data [Ref.]  1.26±0.07    1.02±0.33   1.1±0.2    3.6±0.4    0.23±0.10   < 2.2
               [10]         [4]         [56]       [57]       [59]        [60]

  + These values used as input.
  * Calculated by the author using the model of reference [47].

Table 2.2: Theoretical and experimental asymmetries.

  Theory       Σ⁺→pγ        Λ→nγ       Ξ⁰→Λγ       Ξ⁰→Σ⁰γ      Ξ⁻→Σ⁻γ     Ω⁻→Ξ⁻γ
  [36]         0.8          0.5
  [38]         -0.8         -0.49      -0.78       -0.96
  [47]A        -0.5+        -0.87      -0.96       -0.3        -0.87      -0.87
  [47]B        -0.5+        0.25       -0.45       -0.99       0.56       0.56
  [47]C*       -0.5+        -1.0       -0.97       0.11        -0.56      -0.56
  [50]         -0.38                                           -0.40
  [53]         -0.56        0.75       0.95        -0.97
  Data [Ref.]  -0.83±0.12              0.43±0.44   0.20±0.32
               [18]                    [56]        [57]

  + These values used as input.
  * Calculated by the author using the model of reference [47].

Chapter 3

Experimental Method.

3.1  The AGS and Beam Line C6/C8.

The success of E811 depended upon, among other things, a high quality, relatively pure source of kaons stopping in a liquid hydrogen target. Through meticulous fine tuning, such a beam of kaons was developed at the Alternating Gradient Synchrotron (AGS) at the Brookhaven National Laboratory. These kaons were produced by accelerating protons to 24 GeV, directing them into beam lines, and focusing them onto various targets. Proton interactions within the C′ target produced secondaries, including kaons, which were then transported to the experimental area using the secondary beam line C8. Figure 3.1 shows the layout of the AGS and its beam lines.

The acceleration of protons to 24 GeV was a complex process involving several stages of acceleration. In the preliminary stage, H⁻ ions were accelerated to 750 keV by a Cockcroft-Walton set. These were then passed to a linear accelerator which boosted their momentum to 650 MeV/c. By passing the ions through a thin foil, the electrons were stripped off, and the protons injected into the main AGS ring. The AGS is a synchrotron, meaning that the protons remained in a fixed orbit while the radio frequency fields, which provided the acceleration, and the magnetic fields, which constrained the protons, were increased in unison. The protons were accelerated in the AGS to 24 GeV.

Having attained the desired energy there was a "flat top" period during which time the protons were slowly extracted into the various beam lines. The spill length and cycle time
Figure 3.1: The AGS accelerator components and the various slow-extracted beam lines.

The spill length and cycle time were variable, with nominal values of about 1.6 seconds each for the flat top and acceleration phases. Ideally the extracted proton flux should be constant over the spill duration, but often the microstructure of the spill was such that there were momentarily unacceptably high beam rates. Since high rates may lead to pile-up in the detectors, the electronics were managed in such a way as to accept data only during periods with reasonable rates.

Typically C-line received about 5 × 10¹² protons per spill. The usual mode of operation focused the majority of these on the C target. Protons which passed through the C target undeflected were reunited with protons skipped over the target and redirected into beam lines C1 and C3. Nominal proton fluxes incident on the C target ranged from 1 to 2 × 10¹² per spill. The proton interactions in the C production target produced a mixture of particles, including large numbers of pions and some kaons.

The C6/C8 beam line is depicted in figure 3.2. It consisted of a series of dipole, quadrupole and sextupole magnets, electrostatic separators, mass slits, collimators and diagnostic elements. As the pions and kaons emerging from the C target had a distribution in both space and momentum, the magnets were set to transport only those particles entering C8 with the desired momentum. Dipole magnets have their fields orthogonal to the beam axis and are used to steer charged particles. For a given particle trajectory, the radius of curvature is fixed by the geometry of the beam line, and the magnetic field is proportional to momentum, so selecting the field of the first dipole magnet determined the canonical momentum of the beam line. The quadrupole magnets were used in pairs as magnetic lenses to keep the beam well focused, while the sextupoles were used to correct higher order aberrations.
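The proportionality between dipole field and selected momentum follows from the standard beam-rigidity relation p [GeV/c] = 0.2998 B [T] ρ [m]. A minimal numerical sketch, assuming a hypothetical 2 m bending radius (the actual C8 dipole geometry is not given here):

```python
# Beam rigidity: p [GeV/c] = 0.2998 * B [T] * rho [m].
# The 2.0 m bending radius is an illustrative assumption,
# not the actual C8 dipole geometry.
def dipole_field(p_gev_per_c, rho_m=2.0):
    """Dipole field (tesla) needed to bend momentum p on radius rho."""
    return p_gev_per_c / (0.2998 * rho_m)

for p in (0.680, 0.600):  # engineering- and production-run momenta
    print(f"p = {p:.3f} GeV/c -> B = {dipole_field(p):.3f} T")
```

Scaling the first dipole's field by the ratio of desired momenta thus retunes the whole line's central momentum.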
The last quadrupole-dipole-quadrupole set (Q6-D2-Q7) was rotatable to serve either the C6 or C8 areas without disturbing existing setups.

Figure 3.2: The C6/C8 beam line.

C8 is often referred to as LESB II, being the second Low Energy Separated Beam line built at the AGS. The separation between particle types of different mass but the same momentum was achieved by using separators. These consisted of crossed magnetic and electric fields. By carefully adjusting the ratio of these fields, one could arrange for the kaons to traverse the separator undisturbed while the pions and other contaminants were vertically displaced. Then, by passing the particles through a narrow mass slit centred on the beam axis, most of the pions could be removed.

A typical beam tuning procedure developed along the following lines:

• The proton beam was tuned to provide a small and steady spot on the C′ target with a minimal amount of radiation loss in C3.

• The overall optics were set by turning off both separators and focusing the unseparated beam on the hydrogen target. A position sensitive hodoscope was used as a guide.

• The mass slit was reduced, the first separator was turned on, and its magnetic field was swept. This determined the setting which produced the best π/K ratio while still maintaining a reasonable flux. The π/K ratio was determined by the time of flight method.

• Once satisfied with the first separator, it was left at its best determined set point and the second was optimized in the same manner.

After much work tuning the beam line we were regularly able to achieve beams with a π/K ratio around 6:1, which was better than previous results. An additional factor which contributed to our success in getting a low π/K ratio arose from detailed calculations and Monte Carlo simulations of the beam profile along C8. From this we were able to determine a few locations where the pions and kaons had somewhat different
From this we were able to determine a few locations where the pions and kaons had somewhat different  59  spatial distributions and we installed additional collimators at these points. This work was carried out during phase I of E811,[7] and represented the first time that the C8 line had been operated at its designed capability. For the engineering run, the nominal momentum was selected to be 680 MeV/c. The advantages of having such a high momentum are that there are fewer K~ decays inflight, and hence a better 7r/K ratio, and the pions in the beam are well above the A(1232) resonance which could lead to background problems through interactions in the degrader. However, in order to stop the kaons in the target a considerable amount of degrading material is required. A large thickness of degrader contributes to more background through interactions within its volume. A more serious consequence is the large degrader induced momentum spread. This results in a lower fraction of kaons stopping in the hydrogen, and a higher number of inflight interactions. An examination of the spectra from the engineering run illustrated that the inflight problem was worse than anticipated. Hence the momentum was reduced to 600 MeV/c for the production run.  3.2  The Detectors and Associated Apparatus.  This section describes the physical apparatus. Each of the active components, the detectors, were specialized to perform one or more of a variety of tasks. Detectors in the beam were used to record the position, the energy, or the type, of particles that they encountered. The veto counters were merely required to register the presence of charged particles passing through them, but with a very high efficiency. The Crystal Box was selected for its suitability as a gamma-ray spectrometer. Also described will be the non-active components; the target; degrader; shielding blocks etc. A schematic showing the principal elements of the apparatus is shown infigure3.3. 
Figure 3.3: A schematic drawing of the apparatus showing the beam line detectors and a cutaway view of the Crystal Box surrounding the hydrogen target. The various elements are described in the text.

3.2.1 The Crystal Box.

The Crystal Box is a modular array of 396 separate NaI(Tl) crystals. It was obtained from Los Alamos, where it had been used on a number of rare muon decay experiments; considerable experience of its performance was obtained during that time [61]. It combined a large solid angle coverage with reasonable energy resolution and good timing and position resolution. It was arranged so that there were four faces mounted transverse to one another to form the sides of a box surrounding the beam. Each face consisted of ninety individual crystals in an array ten crystals along the beam axis and nine crystals wide. Each face crystal had a cross sectional area of 6.4 × 6.4 cm² and was 30 cm deep. A 3 × 3 array of crystals filled each corner between adjacent faces. Each corner crystal had the same cross sectional area as the face crystals, but was 76 cm long. The major axis of the corner crystals ran parallel to the beam, as shown in the cutaway view of figure 3.3.

Optical isolation was achieved by wrapping each crystal with a 50 µm layer of optical reflector reinforced with 75 µm of aluminized mylar. Due to the hygroscopic nature of NaI, the entire detector was encapsulated inside a hermetically sealed aluminum container. The front face of this container had to be thin, to reduce the number of interactions in this non-active area, yet strong enough to support the weight of the NaI. It consisted of an aluminum "honeycomb" plate with aluminum skins bonded to either side.

A photomultiplier tube (PMT) was coupled to the rear of each face crystal by a sequence of lucite light guides, plastic elastomers and glass plates, as shown in figure 3.4. A PMT was attached to each of the upstream and downstream ends of the corner crystals in much the same manner.
The light guides were seated inside recessed holes which had a ring of rubber hugging the PMT assembly to prevent light from leaking into the detector.

Figure 3.4: A cutaway view illustrating the phototube assembly mounted on the Crystal Box.

A complication arose from the rather fragile mechanical stability of the phototube with its µ-metal shield and iron support flange. These were separated by mylar from the PMT, which was at a potential of -1400 volts. The mylar was often slightly damaged, or easily dislodged when a phototube was lightly knocked, resulting in conduction to ground. The area in the neighborhood of the PMTs was extremely congested, as each base had two high voltage cables, a signal cable, a fibre optic cable and a large mounting post; and as they were held in place in groups of nine, replacing or repairing a phototube or base was a difficult and time consuming job.

The PMTs used were of the Amperex XP2232B type with 12 stages. They are fast tubes and are generally used when accurate timing is a consideration. The bases, which distribute the high voltage to each stage of the PMT, were designed to handle high counting rates. While high counting rates were not a problem in this experiment, this was still a convenient arrangement due to limitations in the availability of high current, high voltage power supplies. The high voltage chain was divided in such a way that the current in the last 7 dynodes was four times that of the first 5 dynodes. The differing current requirements were satisfied by applying -1400 V to the entire chain with a low current supply, and boosting the last 7 dynodes with a -700 V, high current supply. The high voltages were distributed through a network of power supplies and bus bars.

3.2.2 Beam Line Detectors.

The beam line detectors and other elements are shown in figure 3.3, and their properties are listed in table 3.1. These detectors serve several very important functions.
They provide quick on-line information that there is a particle incident on the hydrogen target, that the particle is most likely a kaon, and that there is a good chance the particle has stopped in the hydrogen. In addition, for suitable events, more detailed information is collected and recorded on magnetic tape for further scrutiny during the off-line analysis.

Table 3.1: Properties of the beam line elements. All dimensions in centimeters.

Element        Cross Sectional Dimensions   Thickness   Material       Gap to next element
Aperture       20 × 10                      61.0        —              2.9
Counter S1     20 × 20                      0.3         Scintillator   2.5
Cerenkov K     30 × 30                      4.8         Lucite         20.0
Cerenkov C     20 × 20                      5.1         Lucite         3.8
Counter S2     20 × 20                      0.3         Scintillator   26.7
Counter S3     15 × 15                      0.3         Scintillator   20.3
Degrader D     15 × 15                      13.0        Copper         0.
dE/dx E1       15 × 15                      1.3         Scintillator   0.
dE/dx E2       15 × 15                      1.3         Scintillator   1.3
Hodoscope-X    6 × (15 × 2.5)               0.3         Scintillator   0.
Hodoscope-Y    6 × (15 × 2.5)               0.3         Scintillator   1.3
Counter S4     15 × 15                      0.16        Scintillator   6.4
Target LH2     20 diameter                  19.4        Hydrogen       —

The time coincidence S1·S2·S3·S4 signaled the presence of a beam particle that had emerged from the collimator and had passed through all the counters and degrader and was now entering the target assembly.

The two Cerenkov counters, K and C, were used to distinguish kaons from pions. If a particle traverses a material at a velocity greater than the speed of light in that material, then it emits a cone of light, Cerenkov radiation. The opening angle of this cone of light in a given medium increases with the velocity of the particle. At our beam momentum, and in lucite, kaons would emit light at about 34° whereas for pions the angle would be approximately 47°. The critical angle for total internal reflection in lucite occurs at 43°. Hence pion light will be totally internally reflected whereas the kaon light will escape from the face of the Cerenkov. The K Cerenkov had six 12.7 cm phototubes positioned to detect the
The K Cerenkov had six 12.7 cm phototubes positioned to detect the 65  kaon light emerging from an aperture on the face of the lucite. The C Cerenkov had 3 phototubes mounted directly on the second block of lucite to detect internally reflected pion light. Hence an appropriate signal for the presence of a kaon consisted of a signal from K in anticoincidence with a signal from C. The degrader was used to slow kaons from a state of initially high momentum to one suitable for stopping kaons in the hydrogen target. The degrader thickness was selected to optimize the number of kaons stopping in the target while also considering the effects of multiple scattering, interactions in the degrader material, and energy straggling. The degrader was positioned as close as possible to the target window to minimize the number of kaon decays inflightand beam divergence effects due to multiple scattering. Immediately downstream of the degrader were two 1.25 cm thick scintillators Ei and E . With such thick counters one reduces Landau broadening, and so, gets a 2  better measure of the energy deposited in the scintillator. Hence these can be used to distinguish between pions and kaons, and to a lesser extent, potential inflight interactions. Following the E counters was a 12 element hodoscope. It had 6 scintillator strips in each of the X and Y directions. The hodoscope gives a measure of the X-Y position of stopping kaons with a position resolution of about 5 cm. This resolution was determined in a previous experiment where we were able to determine the vertex using wire chambers above and below the target.[9] A knowledge of the radial position of stopping kaons aids in the avoidance of events coming from stops in the walls of the aluminum vacuum assembly and is necessary for the reconstruction of events. The hodoscope can also help eliminate some potential pile-up by rejecting events with multiple numbers of incident particles. 
Each scintillator was coupled to a 10 stage Hamamatsu photomultiplier tube, type R1925. Also installed in the beam, before the second dipole, was an upstream scin66  tillation counter. This counter was mounted with two PMTs and was primarily designed for good timing response. It was very useful during the beam tuning process as a measure of the time of flight between it and S3 provided a quick and accurate measure of the 7r/K ratio. The time of flight was recorded for all valid events.  3.2.3  T h e H y d r o g e n Target a n d O c t a g o n V e t o A s s e m b l y  The liquid hydrogen target, its vacuum chamber, and the refrigeration unit were supplied by the BNL cryogenics support group. Various components of the hydrogen target and assembly are depicted in figure 3.5. The liquid hydrogen was contained inside a 0.35 mm mylar shell. The shell was comprised of two hemispherical end caps glued to a cylindrical portion where the fill and empty lines were attached. We used two different versions during the engineering run. The shorter version had a total length of 19.4 cm (figure 3.5a) compared with the longer version at 30.5 cm (figure 3.5b). Both targets were 20 cm in diameter. The mylar vessel was wrapped in "super insulation" and was supported only by the empty and fill lines which were held off from the sides of the container by polystyrene foam spacers. Carbon resistors inside the hydrogen vessel were used to determine the state of the target vessel, either full or empty. The entire vessel was kept inside a vacuum chamber to insulate the hydrogen. The aluminum vacuum assembly was inserted into the Crystal Box aperture and positioned so that the centre of the hydrogen target was centred within the Crystal Box, defining the origin of the coordinate system.  The aluminum walls  immediately surrounding the target were machined much thinner to reduce the number of photon conversions in the walls. 
The 0.4 mm stainless steel entrance window was also as thin as allowed to minimize the number of kaons interacting 67  Figure 3.5: Components of the hydrogen target; a) Configuration with the short target in place; b) Configuration with the large target in place and with the entire lower portion of the vacuum assembly shown as a side view; c) a perspective view of the octagon veto assembly from the downstream end. Further details in the text.  68  outside the hydrogen. Between the target flask and the inner walls of the vacuum chamber were 8 scintillators assembled into the shape of an octagonal cylinder. Each strip was optically decoupled from its neighbor and was 31.8 cm long. Four of the strips also folded over at the downstream end to form a cup around the hydrogen flask. Figure 3.5c shows the scintillators, but not the light guides which followed a tortuous route before being glued to the feedthroughs shown in figures 3.5a and 3.5b. This gave complete charged particle coverage except for the entrance and small holes where the fill and empty lines were fed through. The scintillators, known as the octagon counter, were used to veto any events with charged particles emitted from the target, including some protons which would not ordinarily escape through the aluminum walls to be detected externally. In addition, many particles in the beam halo will be detected by these counters.  3.2.4  Charged Particle Rejection  In addition to the octagon veto counter described in the previous section, each face of the Crystal Box was equipped with a 6.4 mm thick scintillator. These counters had cross-sectional area 83 x 50 cm and were butted against one another to provide 2  complete coverage of the crystal box. Any energy above threshold deposited in these counters caused the rejection of the event. On the upstream side of each face of the Crystal Box there was a 2.5 cm x 46 cm x 46 cm scintillator, mounted with a 12.7 cm photomultiplier tube. 
These counters were known as the guard counters, G1 through G4, and they were used to detect charged particles entering or leaving the Crystal Box. However, they were used to reject events only in the off-line analysis, as there could be shower leakage from the Crystal Box for valid events. The time signals from these counters were still useful in rejecting pile-up from particles entering the Crystal Box from the upstream side
In addition, as many of these particles were likely to be neutrons, several thin wooden crates packed with borax were stacked between the Crystal Box and the direction of the C target. 70  3.2.6  The Flasher System and Small Nal.  By studying the long term behavior of the photomultiplier gains it was determined that some of them had a tendency to drift with time. Others periodically made sudden jumps up or down for no apparent reason. To track such changes, the flasher system was developed. The basic ingredients were a network of fibre optic cables and an xenon flash bulb. The flash bulb was positioned at the centre of a hemispherical dome which was the terminus for the more than 400 fibre optic cables. The other ends of these cables were connected to fibres glued into notches cut into the light guides of each P M T on the Crystal Box.  Additionally, one optical fibre led to a small 5 cm (j> x 5 cm deep Nal crystal. The purpose of this small Nal was to monitor the flasher output so that changes in the flash bulb intensity would not be misinterpreted as gain shifts of the Crystal Box PMTs. It was positioned near a  2 2 8  T h source, well away from the other detectors  and the beam. It was heavily shielded from external radiations. By continuously recording the gamma ray spectrum from thorium decays, an overall normalization of the flasher output was obtained. A photodiode mounted on the hemispherical dome provided an additional check of the flasher intensity.  Unfortunately, it was learnt from the engineering run that the flasher system was inadequate. Many of the fibres were damaged or had poor transmission, while those few that had been replaced had light outputs which exceeded the ADC limits. It was not considered to be of sufficient importance to warrant delaying data acquisition during the engineering run, so an overhaul of the flasher system was postponed until the end of that run. 71  3.2.7  Temperature Control.  
At Brookhaven the variations in ambient temperature during the run, from January to May, were quite severe. Also, the building temperature in the E811 area was not well maintained due to construction in the vicinity. It was important to keep the Crystal Box at a constant temperature, because the photomultiplier gains were known to be temperature dependent. In addition, the crystals could not withstand severe thermal shock, the recommended rate of change being less than 0.1 °C per hour.

To maintain a constant temperature, a tent was built around the Crystal Box into which dehumidified air at constant temperature was circulated. The safety requirements for hydrogen targets demanded a free flow of fresh air past the target assembly and into large venting ducts; hence the temperature control tent was built in the shape of a square toroid. To distribute the massive heat load of the 432 PMTs, there were banks of 20 cm fans on either side of each face drawing air across the tubes. This was the primary source of heat, and the temperature was kept at a constant 29 °C primarily by controlling the venting. An optional heater, an air conditioner and a buffer volume were added later for the production run, with much improved control.

3.3 Electronics and Data Acquisition.

The electronics and the data acquisition system serve several very important functions. These are outlined below.

• Fast logic decisions are made to determine whether the current event warrants further study. If it appears to be a suitable event, the data acquisition system is alerted to collect the information from the event.

• Suitable events have their information digitized and stored on magnetic tape.
Their information is continuously displayed on visual monitors for inspection by the experimenters. • Parts of the electronics may be used for additional tasks such as beam tuning or tests. The subject of the rest of this chapter will centre on the first three items above. A flow chart of the main units in the electronics and data acquisition system are depicted in figure 3.6. Normally the analogue signal from a detector was passed directly to an analogue to digital converter (ADC). For our particular choice of ADC this digitization was based on a charge integration, the result of which was a signal proportional to the energy deposited in the detector. Some analogue signals were converted to logic pulses by passing them through discriminator modules. If the analogue signal was above a certain adjustable threshold the discriminator would generate a standard NIM logic signal, a square pulse with voltage -0.8V and variable width. A logic signal might be sent directly to a time to digital converter (TDC) or other C A M A C  1  module. Some signals were used to form logic decisions to determine whether the event was a suitable candidate. Logic decisions are made by using standard Boolean operations, the presence or absence of a signal indicating the two possible states, and the operators being electronics modules in the form of an AND, OR, or NOT decision. A D C s , T D C s , coincidence registers etc, are collectively known as C A M A C modules, (Computer Automated Measurement A n d Control), as this is the well established standard used in most nuclear and particle physics labs for units which are read out into a computer. 1  73  Analogue Signals  Analogue to Logic V  M M  Logic Decisions  V  V  V  F o r m the Trigger  Enable  CAB  ADC.TDC's  Convert Data to  Digital  ////////  /  /—ZyL^LA^//  Memory Buffers  / / / / / 7-7  MBD  PDP 1 1 / 4 4  Mass Storage  1  Online Analysis  Figure 3.6: Data acquisition flow chart.  
There were several different types of events of interest: data events, diagnostic events, calibration events, etc. Once a valid event, or trigger, had been formed, it signaled the CAB BG (a very fast processor with a 200 ns cycle time) that an event was underway and enabled the CAMAC modules by starting the TDC clocks, sending gates to ADCs and coincidence registers, etc. The event type dictated to the CAB BG which CAMAC modules were to be read. The CAB BG had 4 additional triple port memory units (TPM) attached directly to it, and was configured as a branch driver to control 4 crates of CAMAC modules.

The CAB BG was a slave to the MBD (Microprogrammable Branch Driver). The MBD was connected to the PDP 11/44 unibus and could write directly into PDP memory. It served as the interface between the PDP and the CAMAC crates, handling the readout of the TPMs and other crates on its branch. These data were then passed to the PDP. In the normal mode of operation the data were passed from the PDP directly to magnetic tape after buffering, although a fraction of the data were also analysed on-line.

Under normal running conditions a TPM was filled during the beam-on period, with very few events being read out by the MBD, as the CAB BG was too busy. The TPMs were then completely emptied during the subsequent beam-off period. The MBD was much slower than the CAB BG, so when running at very high rates a TPM would fill quickly and then fill only as fast as the MBD could empty it. The maximum rate was about 130 events per spill, representing just over two cycles through each TPM.

3.3.1 Triggers.

The various trigger types and their purposes are listed below.

Event 10 Highest quality data events. These were candidate events for Λ WRD and were given absolute priority.
E v e n t 7 Flasher events. These events were generated by triggering the xenon flash bulb. They were collected at a rate of about 1 Hz and were used to track gain drifts of individual crystals. E v e n t 5 Sample events. As the requirements for an event 10 were rather stringent, it was useful to have a sample of events with less severe conditions to monitor the entire spectrum. Since the rate for such events is naturally much higher, a random sample was formed by selecting only every 16th of such events using a prescaler. E v e n t 4 Pedestal and small Nal event. This event was triggered by the small Nal detecting disintegrations resulting from decays in a sample of a  2 2 8  T h . As  such it was completely random in time and was used to determine the ADC pedestals. Any signal above pedestal must have been purely accidental and hence these events were also a useful monitor of the pile-up rate. To reduce the event rate to a suitable level, a coincidence with a random pulser was required. E v e n t 8 Error events. For each event the CAB BG performed a number of software consistency checks. In the event of any error, usually an indication that the CAB code had been corrupted or found its way into an abnormal state, it triggered an event 8 which forced an end of run and alerted the experimenters by sounding an alarm and displaying a message on the terminal. E v e n t 1 1 Scaler events. Scalers simply counted the number of occurrences of a specific item, such as the number of counts in a particular scintillator. The 76  scaler modules were independent of the CAB BG, residing in a crate on the MBD branch. They were read and cleared at the end of each beam spill, and again at the end of each beam off period. They were extremely useful in detecting defective or nonfunctional devices and were also used to estimate rates and to determine normalizations. At the end of each run a summary of scaler totals and averages was printed out to be checked for irregularities. 
Event 12 Small Nal events. The total number of type 4 events collected in a run was quite small so that they would not interfere with the collection of event 10 triggers. As the the small Nal was intended to be used as a cross calibration of the flasher peaks a much larger number of events was needed. This was accomplished through the type 12 events. Small Nal events were analyzed using a QVT multi channel analyzer. The analogue signals were charge integrated and stored in QVT memory in the form of a histogram. An event 12 was triggered about every 5 minutes by software in the PDP which caused the memory contents of the QVT to be dumped. In this way a high rate of small Nal events could be handled. Typical rates per beam burst during kaon and pion runs are shown in table 3.2. Table 3.2: Typical event rates. Trigger Event 4 Event 5 Event 7 Event 8 Event 9 Event 10 Event 11 Event 12 Totals  Typical Rate per Beam Burst Kaon Run Pion Run 2.8 1.5 • 6.3 0.2 3.8 2.8 0.0 0.0 — 121. — 65. 2.0 2.0 0.006 0.006 « 128 « 80  77  3.3.2  The Fast Electronics.  In this section the details of the fast electronics will be described. Only the main modules used in the event generation process will be discussed. There were a lot of diagnostics, scalers and other details which will be ignored here for the sake of brevity.  Crystal Box Electronics. The Nal analogue signal was typically about 500 ns long. Integrating such a long signal makes one more susceptible to pile-up problems. To avoid this difficulty the signal was clipped to 250 ns as is shown schematically in figure 3.7. Part of the signal traveled to the end of a long clipping line where it was inverted and partially reflected due to an impedance mismatch. The resistance and delay time were chosen so that the inverted signal added destructively with the tail of the initial wave form, effectively clipping it. 
As the high voltage for each PMT was distributed via bus bars, all at a common voltage, the individual signal amplitudes were controlled by variable amplifiers. An approximate balance of all gains was obtained after several iterations using sources and cosmic rays. The amplified signals were then divided, with equal parts being distributed to the fast electronics, to form a hardware sum, and to the ADCs.

Due to the length of time it took to form a trigger, it was necessary to delay all the analogue signals destined for the ADCs so that they would arrive in coincidence with the event-generated gate. This was accomplished using 400 ns Spiradel fixed delay units [62].

Signals occurring in detectors significantly out of time relative to the beam particle that triggered the event are likely to have arisen from other beam particles which did not trigger an event. Such conditions are referred to as pile-up.

Figure 3.7: NaI analogue signal processing.

With higher instantaneous beam rates, the probability of pile-up increases accordingly. Piled-up signals can be distinguished from regular signals by comparing the ratio of the amount of charge in the leading edge of the pulse to that in the entire pulse. To accomplish this, the delayed signal was split again: the 90% portion went directly to the main ADCs with a 250 ns gate, while the other 10% was amplified further and transmitted to pile-up ADCs which had a 50 ns gate on the leading edge (see figure 3.7).

The amplifiers were suspected of having slight drifts in their DC levels. As the integration period was relatively long for both ADCs, DC levels could drastically compromise the data. To prevent this, the signals were AC coupled to the ADCs. The ADCs used for the NaI signals were Fast Encoding and Readout ADCs (FERA), with 11-bit resolution.
With the gains set to give about 300 MeV full scale, this represented about 0.15 MeV per channel. The FERAs were ideally suited for the Crystal Box, having very fast conversion and readout times. Data were transmitted directly into the TPMs via a front panel port at a rate of 100 ns per word. This was much faster than possible using CAMAC dataways. In addition they were programmable and could be operated in a mode which suppressed pedestals, although this feature was not utilized during the engineering run.

The NaI logic was very simple in that the only requirement was that the full energy deposited in the Crystal Box be above a set threshold. The hardware sum was formed by cascading linear fan-in units as shown in figure 3.8. A block was defined as a set of 30 crystals, 3 rows deep and 10 crystals along the beam axis. Three blocks and a corner comprised a face, and a sum of the four faces produced the full energy sum.

Due to the large uncertainty in the vertex location of a particular event, the amount of additional information to be gained by using a TDC for each individual crystal signal was minimal. This was especially true considering their slow conversion time and the extra 400 words of data per event, which would significantly reduce the data taking rate. Hence only the corner and block sums were analyzed by a TDC.

Figure 3.8: NaI hardware sum of 3 blocks and one corner to form one of the 4 faces which will be summed to give the full energy of the Crystal Box.

Again due to DC drifts, this time from the linear fans, it was necessary to AC couple the inputs to the discriminators of the hardware sums. The discriminators were set to correspond to roughly 300 MeV for the high level discriminator (NaI_high), and about 80 MeV for NaI_low. This varied a bit depending on the location of the event within the Crystal Box, as the hardware balancing was fairly rough.

Beam Logic.

Most of the detectors in the beam line were included in the beam logic.
The exceptions to this were the hodoscope elements, which were just analyzed with ADCs, and the upstream counter, which was only analyzed with a TDC. A beam particle was identified by the coincidence S1·S2·S3·S4. Where applicable, the timing was always defined by the S3 signal. A kaon was identified by requiring that at least 2 of the 6 kaon Cerenkov PMTs fire above threshold. Similarly, the pion condition demanded that at least 2 of 3 of the pion Cerenkov PMTs be above a 125 mV threshold. Alternatively the pion condition could be met if the sum of the three pion Cerenkov counters was above a 450 mV threshold.

The light output from Cerenkov counters is relatively small, and the correct Cerenkov angles are only obtained if the particles entered the lucite at normal incidence. Hence scattered kaons may trigger the pion Cerenkov, and vice versa. Nonetheless the kaon and pion Cerenkovs worked very well. Requiring at least 5 of the kaon tubes misses many kaons (about 70%), but the resulting beam is very pure, even without the pion Cerenkov as a veto. Requiring only 2 kaon tubes detects about 95% of the kaons, but about 25% of the beam is comprised of pions. Adding the pion Cerenkov as a veto rejects a few kaons, but most of the pions. The final beam definition detects about 90% of the available kaons, and the beam is about 99% pure.

The two dE/dx counters had their signals discriminated at a high and a low threshold. This was to enable both kaon and pion modes of running. When tuned to kaons the discriminator levels were set low enough to see some of the pions. This aided in their identification in the off-line analysis. However, when running in pion mode the thresholds were set high, accepting only pions and eliminating the electrons. These discriminators are labeled L and H in figure 3.9. The E counters were also analyzed by TDCs twice, to look for other beam particles which arrived either early or late relative to a valid event.
Events with other particles close in time are likely to be associated with pile-up.

Figure 3.9: Beam logic.

The K-beam definition was formed of the coincidence

    K·beam = (S1 · S2 · S3 · S4) · (kaon) · (E1L · E2L) · ¬(Pion)              (3.1)

A similar coincidence was formed to identify pions when running calibration data:

    π·beam = (S1 · S2 · S3 · S4) · (E1H · E2H) · ¬(Pion)                       (3.2)

Note that when running pions the momentum is much lower, so that the "pion" Cerenkov is now actually detecting electrons and not pions, hence its use as a veto in equation 3.2.

Veto Counter Logic.

The logic diagram for charged particle detection is illustrated in figure 3.10. A charged event was simply an OR of the 8 octagon veto elements (Vs_i, Vb_i) and the 4 face veto counters (V_i). The guard counters were recorded in TDCs and ADCs but were not part of the hardwired trigger. To search for possible piled-up events coming from particles which had entered through the sides of the Crystal Box, there were two TDCs on each guard counter. The first examined the time about 300 ns early relative to the event, and the second looked for a similar length of time after the event. As the bent scintillator elements of the octagon had a complicated spectrum, their threshold was set quite high and the analogue signals put into ADCs for more detailed scrutiny off-line. The face veto counters were also analyzed by ADCs.

Beam Definition.

Among the data words read in for each event were two bits which recorded whether the beam was on or off at that time. Although a beam spill was typically about 1.6 s in duration, the beam on period was defined to be about 1.5 s. This was done to avoid the first 100 ms when the rates were unstable. Naturally there were very few events during the beam off period.
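The beam coincidences of equations 3.1 and 3.2 can be written as boolean predicates; this is an illustrative sketch only, with signal names invented for clarity and the loose 2-of-6 kaon Cerenkov majority taken from the discussion above.

```python
def k_beam(s, kaon_hits, e1_low, e2_low, pion_cerenkov):
    """Equation 3.1: K-beam = S1.S2.S3.S4 . kaon . E1L.E2L . (pion veto).
    s is the tuple of four scintillator bits; kaon_hits is the list of
    six kaon-Cerenkov PMT bits (loose >= 2-of-6 requirement)."""
    kaon = sum(kaon_hits) >= 2
    return all(s) and kaon and e1_low and e2_low and not pion_cerenkov

def pi_beam(s, e1_high, e2_high, pion_cerenkov):
    """Equation 3.2: pi-beam = S1.S2.S3.S4 . E1H.E2H . (pion veto); at
    pion-run momenta the 'pion' Cerenkov tags electrons, hence the veto."""
    return all(s) and e1_high and e2_high and not pion_cerenkov

print(k_beam((1, 1, 1, 1), [1, 0, 1, 0, 0, 0], True, True, False))  # True
print(k_beam((1, 1, 1, 1), [1, 0, 0, 0, 0, 0], True, True, False))  # False
```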
The main purpose of having a beam off bit was so that scaler events could be compared during beam on and beam off, to look for noisy detectors. Hence the beam off window was a subset of the actual beam off time, to keep well clear of the beam on period.

Figure 3.10: Schematic of the charged particle detection logic.

Event triggers.

The event triggers are summarized by figures 3.11 and 3.12. For type 4 triggers the small NaI signal was treated much like the other NaI analogue signals, except that pile-up was not monitored. In addition, the analogue signal and a gate were sent to a QVT. The data were stored in the QVT as an accumulating histogram until an event 12, generated in software, read and cleared the QVT.

Figure 3.11: Schematic of small NaI event generation.

The event 5 triggers were a sample of all events and were a coincidence between (every 16th) NaI_low and K-beam. The best candidates, event type 10, required NaI_high, K-beam, and no charged signal. The type 9 events were similar, requiring NaI_low, π-beam and no charged signal.

Figure 3.12: Schematic of the generation of events 5, 7, 9 and 10.

Event 7s were driven by a random pulser which triggered the xenon flash bulb. The event 7 trigger required the pulser in coincidence with NaI_high.

Event Handling.

It was necessary to apply an inhibit to freeze the rest of the fast logic whenever an event was in progress.
Hence there were two trigger levels, the first being the raw trigger (RT), a non-inhibited trigger, and the second being a master gate (MG), which was the same trigger inhibited by the computer busy; see figure 3.13. All master gates were processed by the data acquisition system, whereas only a fraction of the raw triggers were. The difference between the raw trigger rate and the master gate rate was indicative of the computer dead time.

Figure 3.13: Schematic of the handling of event triggers.

A logical OR of all master gates was formed, the event master gate. This in turn sent gates to the ADCs and coincidence registers, and started the clocks in the TDCs. The CAB BG was alerted by inputting a logic signal into a front panel port, and the event type was recorded by setting the appropriate bits in a coincidence register.

The event master gate also set two gate generators to a latched-on state. This signal was then distributed back to the individual master gates to put them in an inhibited state. After any trigger this inhibit was set until explicitly lifted by the CAB BG issuing a "CAB done" signal. As the gate generator was quite slow to be set, there was a scheme whereby each master gate generated a quick veto which remained in effect until the latched-on gate generator signal arrived.

3.3.3  Data Acquisition.

The data acquisition program was the Q system from LAMPF. It resided in a PDP 11/44 and communicated with the CAMAC crates via a microprogrammable branch driver, the MBD. The MBD was an integral part of the Q system, as was a CAMAC module known as the LAMPF trigger module. The trigger module alerted the MBD that an event was waiting to be transmitted, and the event type was indicated through the use of its front panel inputs.
In order to use this system in conjunction with the CAB BG, it was necessary for the CAB BG to write the event type to the trigger module. This was accomplished by using a NIM output register.

CAB BG Software.

Once an event had occurred, the CAB BG and its software assumed control. The procedure is best described by outlining the software within the CAB BG.

1. Initialization. Performed once at the beginning of each new run.

   • Load data memory with various constants and commands which will be used to address the CAMAC modules on its branch.
   • Clear each crate on the branch, lift inhibits and enable LAMs.
   • Clear all CAMAC modules.
   • Load FERAs with pedestal values, station numbers and control words.

2. Initiate data taking.

   • Lift computer inhibit to allow events to trigger the CAB.
   • Enter idle loop waiting for an event.

3. Event occurs during idle loop. Electronics are inhibited.

   • Find space in the current TPM or start a new one.
   • Position pointer to the address in the TPM where data are to be stored.
   • Enable front panel ECL port of FERAs to initiate transfer of data to the TPM.
   • Read coincidence register bits to determine the event type.
   • Wait for FERAs to complete data transmission.
   • Write word into the data stream indicating the end of FERA data.
   • Read other modules based on the event type.
   • Load header words for the event indicating event type and number of words in the event.
   • Return to step 2 to get the next event.

4. Free time in idle loop: initiate MBD read of the TPM.

   • Using the NIM output register, write bits to a coincidence register informing the MBD which TPM to read.
   • Using the NIM output register, trigger the MBD with the LAMPF trigger module.
   • Return to idle loop.

The MBD and PDP 11/44.
The first task was to read a coincidence register where the address of the T P M to be read was encoded. Then the data words from that T P M were transmitted to the MBD and control was relinquished. The MBD then loaded the event into PDP memory buffers. A full buffer constituted a record. The records were prefixed by a few words added by Q indicating the current time, date, and number of bytes in the record. The record was then spooled to one of two tape drives, being alternately filled, or rewound and remounted. All data were recorded on standard magnetic tapes at the maximum density of 6250 BPI.  On-Line  Analysis.  The analysis program Q had the option of analysing all or any of the events based on three possible flags set for each event type. The possible conditions were; must process; may process; or no process. In may process mode, the event was only analysed if the computer was not otherwise busy. The use of may process still tended to slow data acquisition as once an event had been selected for analysis, it was difficult to abort the process when a new interrupt was received. Events 11 and 8 were always in must process. The others were usually in no process although periodically various events were set to must or may process and the on-line analysis invoked to perform some diagnostic, troubleshooting or general monitoring services. The following programs were used in the on-line analysis; H o d o s c o p e . This program displayed the hodoscope hit pattern as a two dimen91  sional density scatterplot. This was used to ensure that the beam was centred on the target. Scalers. All type 11 events were analysed and at the end of each run a summary of the scaler, totals and averages was printed. In addition a process in the event 11 code triggered an event 12 every 5 minutes during the run. X B P E A K . When events of type 7 were analysed this routine checked that the flasher peaks had not shifted dramatically from previous values for any particular crystal. 
If they had, an alarm was sounded on the terminal.

XBEFF. During the analysis of event types 10 or 5, a record was kept of the number of hits significantly above pedestal incurred by each crystal. This tally was summarized as a histogram and was useful for identifying channels which had stopped working, were inefficient, or were unduly noisy.

General. General histograms were kept to monitor the ADCs and TDCs of all detectors. These were filled periodically by switching to "must process" to check that all elements were still operating satisfactorily.

Chapter 4

Analysis Preliminaries.

4.1  Clump Finding Algorithms.

One of the most important tasks required of the analysis was its ability to reconstruct the geometry of each event. Typical events result in many particles depositing their energy around the Crystal Box. In general, particles interacting in the NaI crystals cause electromagnetic showers to develop. Each shower spreads across many crystals, so that in the reconstruction, particle hits in the detector were identified as a cluster of crystals containing non-zero energy. Hence algorithms were developed to recognize clump patterns, and to determine their energy, their position, and their time of flight.

4.1.1  Energy Determination.

For the moment we will ignore the complications which arise when clumps begin to overlap with one another, and describe the basic clump finding routine. This algorithm searched all elements of the Crystal Box to locate the crystal with the highest energy. Then a group of crystals surrounding the highest was used to define the clump. This procedure was then repeated by searching for the next highest energy in the rest of the detector. The process was continued until there remained no more crystals with at least 5 MeV deposited in them. Each clump was labeled by its high pulse height crystal (HPHC), and the set of crystals defining the clump was referred to as its neighborhood set.
The energy of the clump was taken to be the sum over all crystals in the neighborhood set, although this had to be corrected for the problem of overlapping clumps, as will be described below. The neighborhood sets were defined as follows:

• For clumps found in faces with the HPHC at least 2 rows from the corners, the neighborhood set was quite simply the 5 × 5 array of crystals with the HPHC at its centre. For those clumps located near an upstream or downstream edge of the Crystal Box, the arrays were truncated in the obvious manner. Examples of these are depicted in figure 4.1a.

• When a clump had its HPHC in a face, but within one or two rows of a corner group, the neighborhood set was taken to be the truncated 5 × 5 array on the face, plus all corner elements, plus the 5 corresponding elements in the first row of the adjacent face, as is shown in figure 4.1b.

• For clumps centred in one of the diagonal elements of a corner, the neighborhood set was comprised of all corner elements plus the elements from the first row of each adjacent face, as seen in figure 4.1c.

• When the HPHC was located in a corner off-diagonal element, the choice was the entire corner plus the elements of the first two rows of the nearest adjacent face. This case is illustrated in figure 4.1d.

Modifications to the basic code were needed to handle those cases in which clumps were found to be overlapping. In such cases the energy contained in each overlapping crystal was shared between the two clumps. This division was weighted according to the proximity of the overlapping crystal to each HPHC, and to the energy of the respective clumps. The weight factors were experimentally determined by measuring the average fractional energy deposited in each neighborhood set element for clean single-photon events.
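The greedy search described above (highest remaining crystal first, 5 MeV stopping threshold) can be sketched for a single flat face; this is an illustration only, omitting the corner neighborhood sets and the overlap-sharing weights of the real algorithm.

```python
def find_clumps(energy, threshold=5.0):
    """Greedy clump search on one rectangular face, represented as a dict
    mapping (row, col) -> MeV.  Repeatedly take the highest remaining
    crystal as the HPHC, claim its 5x5 neighborhood set, and stop when no
    unclaimed crystal holds at least `threshold` MeV."""
    remaining = dict(energy)
    clumps = []
    while remaining:
        hphc = max(remaining, key=remaining.get)
        if remaining[hphc] < threshold:
            break
        r0, c0 = hphc
        members = [(r, c) for (r, c) in remaining
                   if abs(r - r0) <= 2 and abs(c - c0) <= 2]
        clumps.append({"hphc": hphc,
                       "energy": sum(remaining[m] for m in members)})
        for m in members:          # claimed crystals leave the search
            del remaining[m]
    return clumps

# Toy face: low-level noise plus one shower core spread over two crystals.
face = {(r, c): 0.1 for r in range(10) for c in range(10)}
face[(4, 4)] = 100.0
face[(4, 5)] = 30.0
clumps = find_clumps(face)
print(len(clumps), clumps[0]["hphc"])   # prints: 1 (4, 4)
```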
In a) two clumps are shown on a typical face. The darkest shading is the HPHC. b) face crystals bordering a corner. The diagram only shows the top right corner of the Crystal Box, looking end on. c) diagonal corner crystals, and d) off-diagonal corner crystals.  95  In addition, there were a set of algorithms which examined clump pairs with a significant amount of energy in their overlap region. Their purpose was to determine whether these should be treated as one or two clumps. The parameters of these algorithms were determined by adjusting them until the program was consistently agreeing with the conclusion reached through a visual inspection of the clump pattern.  4.1.2  Position  Determination.  The position of each clump was determined by taking an energy weighted sum of the position of each crystal within the neighborhood set. This was of the form;  where  was one of the three coordinates of the i  th  crystal, and Ei its energy. The  exponent was determined by Wilson et. al. [61] through a detailed comparison with Monte Carlo simulation. For clumps centred in corner elements, the z-coordinate was determined by examining the energy deposition in the first rows of adjacent faces. If these contained less than 1.0 MeV the z-coordinate was taken to be the centre of the target. A correction factor was also included to account for the fact that the electromagnetic shower would only begin to develop some distance into the Nal. This adds a sizeable correction to measured angles for photons with incident angles far from normal to the crystal faces. This depth correction was energy dependent and was also determined from Monte Carlo studies.  4.1.3  Clump Timing.  Putting each crystal into a T D C would require an additional 400 words of data. This would be costly and would significantly reduce the data acquisition rate. 
As typical distances from the target to the Crystal Box were short and strongly dependent on an ill-determined vertex location, the TDC information was considered to be of marginal use. Hence TDCs were not used for individual crystals. Instead, the crystals were summed in groups of 30, 10 along the beam and 3 rows deep, with a single TDC for each group. This enabled the rejection of events when they were significantly out of time relative to good events.

Although each signal cable was initially cut in length so that the signals from all crystals would arrive at the same time, the subsequent lowering of phototube voltage and other changes required that minor corrections be made. A "time zero" was defined by using the single gamma events when running in pion mode. The timing of an individual element was taken to be the time of the entire block when that crystal was the HPHC in the block. Using this technique the timing resolution was reduced to about 2 ns full width half maximum (FWHM). This was not sufficient to distinguish between neutrons and photons, but could be used to reject some events well out of time. Unfortunately the thresholds on the TDC discriminators were set quite high, and the linear fan-in units summing the groups of 30 proved to be extremely temperature sensitive. This meant that the effective thresholds varied quite a bit, and the timing cuts could be applied only to clumps of relatively high energy, above ≈ 60 MeV.

4.2  The Monte Carlo Routines.

4.2.1  Introduction.

In particle physics, Monte Carlo techniques are often employed to simulate detector responses in situations involving intricate geometries or complex particle kinematics. In the Monte Carlo method, particles are tracked through the detector in a sequence of very small steps. At each stage the code determines which physical processes to employ based on random selections from theoretically or experimentally derived distributions.
For example, in the simple process of a particle decaying in flight, the Monte Carlo would have to determine:

• when the particle would decay, based on an exponential distribution with time constant τ;

• to which daughter products, based on known branching ratios;

• the direction of the daughter products, selected at random from pure phase space, or perhaps a more complex distribution if there were final state interactions involved.

In E811 the weak radiative decay branching ratio could only be extracted with the aid of accurate Monte Carlo simulations. In addition, simulations were vitally important during the design stages to estimate rates and reveal potential backgrounds. Several different Monte Carlo routines were utilized in E811, each designed to perform a specific task. These are listed below.

LOWE-I & NOBLE-I. During the design stage, many questions of experimental feasibility needed to be addressed. To facilitate this, two independent codes were developed. These were used to estimate rates, uncover backgrounds and optimize the experimental configuration. In addition, a comparison of the two codes provided an independent check on much of the basic Fortran source coding which would become common to all the Monte Carlos. These codes were quite simple in their description of the geometry, and they did not include the generation of electromagnetic showers by interacting photons.

EGS. A Monte Carlo program based on the Electron Gamma Shower code (EGS) [63] developed at SLAC was used primarily to simulate the detector response for pions stopping on hydrogen. These simulations were used by fitting routines to determine the gains of the Crystal Box, and also as a check of the main Monte Carlo routine. Although it was limited to a treatment of electrons and photons only, it was quickly available for use during the on-line analysis as the geometric package had already been configured by previous users of the Crystal Box [64].
GEANT. This program, the principal Monte Carlo routine, utilized the CERN-developed GEANT package [65] as a shell. It had a complete description of the geometry and could track all particle types. It will be described in greater detail below.

LOWE-II. An independent Monte Carlo was developed as an extension of the original LOWE-I routine, but with many improvements added. It still did not include the electromagnetic shower development, but had the distinct advantage of being able to generate ample statistics, due to a low CPU overhead. It is being used by the New Mexico group in their analysis, and has been used to compare against GEANT predictions. The results of those comparisons led to some of its parameters being adjusted so that it corresponded more closely to the GEANT code, and presumably reality. Hence it has evolved into a reasonably accurate, fast simulation code.

DEGRADER. A separate Monte Carlo, DEGRADER [66], was used to track particles through the beam line elements. This code was primarily used to determine the beam profile entering the target. This information was used to simulate the stopping distribution, and to estimate the contribution from interactions occurring in flight.
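Two of the random selections listed in section 4.2.1, an exponential decay time and a branching-ratio-weighted channel choice, can be sketched as follows. The π⁰ branching fractions are the ones quoted later in section 4.2.2; the sampling code itself is a modern illustration, not the original Fortran.

```python
import math
import random

def sample_decay_time(tau, rng):
    """Exponential decay-time sampling by the inverse-transform method."""
    return -tau * math.log(1.0 - rng.random())

def sample_channel(branching, rng):
    """Pick a daughter channel from {name: branching ratio} (ratios sum to 1)."""
    u, acc = rng.random(), 0.0
    for name, br in branching.items():
        acc += br
        if u < acc:
            return name
    return name                       # guard against floating-point rounding

rng = random.Random(42)
# pi0 branching fractions used in the simulation (section 4.2.2):
pi0 = {"gamma gamma": 0.98802, "e+ e- gamma": 0.01198}
counts = {"gamma gamma": 0, "e+ e- gamma": 0}
for _ in range(10000):
    counts[sample_channel(pi0, rng)] += 1
print(counts)                         # heavily dominated by gamma gamma
```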
In addition, the 7r° was assumed to decay by either of the following processes; 7r°->  77  7r°->  e+e-  98.802% 1.198%  7  , 1  . }  A special routine was created to monitor the progress of TT~ and £ ~ particles. Those observed to come to rest in the hydrogen target were induced to undergo nuclear capture in the following ratios [68,69]; *~  ~* ° = 1.554 7T~p — > nj  (4.3)  £ - p —> An ^ = 1.38  (4.4)  P  n  n  and n  A similar set of processes were considered for inflight interactions, although we did not simulate those channels observed to have no contribution from the corresponding process at rest. The main new contribution came from the the process R21 which had a production threshold of 90 MeV/c. The inflight processes which were included are listed in table 4.2. In this table P and P 3 were parameters of the fits comparing the Monte 2  Carlo and the data. They determined the amplitudes of the inflight reactions, and 100  Table 4.1: Processes considered for K p decays occurring at rest. Channel Rl  Decay Process  K~p  ATT  0  Branching Ratio 0.067 x P x  •—> n~f R2  K~p->  ATT  0.0239  0  <—+ mr°  R3  K~p->  ATT  R4  K~p->  E°7r°  0.0430  0  0.2764 x P  x  «—* A 7 <—• n 7  R5  K~p  0.0987  E°7r°  • A7 <—* n 7 r °  R6  K~p^>  0.1771  E°7r°  <—> A 7  R7  K~p -»  E + 7T-  K~p-+  E + 7T-  0.0973  <—> p7T°  R8  <—> n 7 r  R9  K~p->  0.0911 +  E+7T-  0.00026  <—» p 7  RIO  K'p^  E-7T+  Rll  K~p-+  E°7  0.4479 0.00144 x Pj  <^-> A 7  R12  K~p^  0.00051  S°7 <^-> A 7  R13  K'p -»  0.00092  S°7 A7  R14  K~p^  0.00086 x P  A7 <—• 7^7  R15  K~p ->  R16  K~p —» A 7  0.00031  A7 <—> n 7 r °  0.00055  101  a  Table 4.2: Processes considered for K p inflight interactions. 
Channel R21  Decay Process K~p K n K°s  R22  K'p  ATT  R23  K~p  ATT  R25  K~p  E°7r°  Amplitude 0.5 x P  A xP  0  3  2  0.357 x P  0  Pi x P  2  2  Aj U7  R26  K'p  0.357 x P  E°7r°  2  <—» A 7 n7r  R36  K~p  0.5 x P  An  3  #2  depended on the values of various cuts designed to reduce such contributions from inflights. The Kg and K® were allowed to decay through any of their known decay modes, including the photon rich 2ir° and 3TT° branches respectively.  G E A N T G e o m e t r y Package.  The experimental configuration was described in as much detail as was available and practical. In the target sector, the target, its flask, super-insulation and vacuum chamber were all included. The octagon veto counters, complete with scintillators folding around behind the target flask were also included. As the lightguides for the octagon counters followed a tortuous route downstream of the target, and as they were not in the direct flight path of particles traveling from the target to the Nal, they were omitted. The Crystal Box was also described in great detail. While the Nal itself was just a simple array of Nal blocks, particular attention had to be paid to the entrance window. This was necessary to ensure that the effects of photon conversions were 102  treated properly. The entrance window was composed of rubber pads, fiberglass shims and an aluminum honey-comb front face. The later was approximated by a single sheet of aluminum of average density.  Event Generation in G E A N T .  The tracking of particles for each event began with the K~p interaction in the hydrogen target. For at rest interactions, the x- and y-coordinates of the vertex were randomly selected from the beam distribution suggested by the DEGRADER program. This predicted a beam with horizontal and vertical spreads roughly gaussian in shape with cr/, « 6.4 cm and a « 3.2 cm. The z-coordinate was assumed to v  have a uniform distribution throughout the target. 
This was justified as the beam momentum entering the target had a very broad distribution. For inflight interactions, the z-coordinate was determined quite differently. Using DEGRADER and assuming a 13.0 cm degrader, the beam momentum entering the target was found to be reasonably well described by a gaussian of mean 185 MeV/c and sigma ≈ 57 MeV/c. The cross-sections for the K⁻p inflight interactions were digitized from the curves given by Martin [70]. These were then used to generate a lookup table describing the probability of an interaction at point z and momentum P. Events could then be generated by the GEANT code by randomly selecting from this lookup table, each entry weighted by its probability.

In the data analysis, the hodoscope information was used to help determine the vertex location. In the Monte Carlo, the exact position was known, so to mimic the data, the position information had to be smeared using a suitable broadening function. As a rough guide we had data from E811-I, where we had used wire chambers to trace the particle trajectories back to the vertex location and compare that against the hodoscope prediction. This gave us the general shape of the broadening function to use. The process π⁻p → nπ⁰ gives rise to a pair of photons which are emitted with a minimum opening angle of 157°. The measured opening angle distribution is broadened by the uncertainty of the vertex location, and fits to this distribution were used to determine the amount of smearing necessary. Figure 4.2 shows various fits to the π⁰ opening angle for different strengths of smearing. The value of σ was chosen to be 4.0.

Figure 4.2: Fits to π⁰ opening angle to determine the level of hodoscope smearing required.
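The probability-weighted selection from the (z, P) lookup table can be sketched with a cumulative-distribution table; the bin weights below are invented for illustration, and in the real code each entry came from the digitized cross-sections.

```python
import bisect
import random

def build_cdf(weights):
    """Turn relative interaction probabilities (one entry per (z, P) bin
    of the lookup table) into a cumulative distribution."""
    total = float(sum(weights))
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    return cdf

def sample_bin(cdf, rng):
    """Select a table entry at random, weighted by its probability."""
    idx = bisect.bisect_left(cdf, rng.random())
    return min(idx, len(cdf) - 1)     # guard against rounding at 1.0

rng = random.Random(1)
weights = [1.0, 4.0, 3.0, 2.0]        # illustrative bin weights only
cdf = build_cdf(weights)
picks = [sample_bin(cdf, rng) for _ in range(10000)]
counts = [picks.count(i) for i in range(4)]
print(counts)                         # roughly proportional to the weights
```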
It was important to set the gains of the veto counters in the Monte Carlo to match the data so that the efficiency for vetoing these events would 104  be the same in both cases. In particular, it was necessary to get the cut efficiency for vetoing the conversion electrons correct, as these cuts effect the proportions of each photon multiplicity that are allowed to pass. The gains were set by scaling the Monte Carlo A D C values to match the charged Event 5 data.  4.3  Energy Calibration.  In this section, the steps taken to optimize the Crystal Box energy resolution are discussed. In the data, the gains and pedestals need to be determined, and non-linear effects introduced by the amplifiers need to be unfolded. The energy calibration was based on fits to the Monte Carlo using the pion radiative capture process 7r~p — > nj This photon, monoenergetic at 129.4 MeV, is conveniently typical of photon energies in this experiment.  4.3.1  Pedestals.  Each F E R A had a small zero-offset, or pedestal, associated with it. These pedestals were determined by examining Event 4 data.  Since these are events which are  triggered randomly by an external thorium source they should not have any energy deposited in them. The pedestals had a tendency to drift slightly with time and temperature. Hence during the first pass skim of the data using a nominal set of pedestals, new pedestals were calculated for each run. As runs were typically about 25 min in duration, the pedestals were assumed to be stable over that period. In all subsequent analyses the new pedestals were used.  4.3.2 Non-Linearities. It was known that the main amplifiers had a slight non-linear energy response curve. The non-linearity of each channel was measured so that these effects could be corrected for in the data analysis. A signal from a random pulse generator was 105  shaped to resemble a typical Nal pulse. 
This signal was fed into the amplifiers via a variable, high precision attenuator and then passed to the ADC. Twelve different samples, selected to cover the full range of the ADC, were measured for each amplifier channel.

Several different approaches were tried to determine the best functional form of the non-linearity correction. The criterion of a good fit was made on the basis of energy resolution, and the requirement that the deviations of the measured non-linearity from a fit with that function should be evenly scattered. A simple quadratic of the form

E = a(ax + bx²)   (4.5)

where x was the pedestal subtracted ADC value in channels, was tried first. The constant a was fixed by assuming the gains to be correct at the pion calibration peak. Although an improvement to the resolution, deviations from the best fit line indicated that the functional form was insufficient, and that more correction was needed at lower energies. Hence the function

E = a[bx + c(1.0 − e^(−x/a))]   (4.6)

was tried. Originally the parameters of this form were fit by hand for a typical crystal for use in the on-line analysis. The estimates for the parameters were applied globally to all crystals, and the resolution was observed to improve significantly. Later, during the offline analysis, each crystal was fit independently. However, it turned out that the shape struck upon by the hand-made fit was still the most satisfactory. Presumably it was fortuitously correcting for other non-linear aberrations, perhaps from the photomultipliers themselves. Hence the parameters which resulted from the individual fits were scaled so that the basic shape would be that of the fit by hand, but individual fluctuations about this shape would still be expressed. The results, indicated by the resolution in % at the 129.4 MeV calibration peak, are shown in table 4.3.

Table 4.3: Resolution in % at 129.4 MeV using various forms to apply the non-linear corrections.
                          No          Quadratic  Fit By  Individual  Individual
                          Correction  Fit        Hand    Fits        Fits Scaled
  Face core crystals only 10.5        9.6        7.0     8.1         7.0
  All crystals            12.0        10.8       8.6     10.2        8.3

Figure 4.3 shows the deviation between various best fits and the data for a typical amplifier. This figure also supports the claim that the individual fits, scaled to the shape determined by the fit by hand, give the best results. Hence, this method was adopted.

Figure 4.3: Comparison of various non-linear correction techniques.

4.3.3 Balancing the Crystal Box Gains.

The energy determined for every crystal, irrespective of location, should have the same value when a fixed amount of energy has been deposited in it. The adjustment of the software gains until this is the case is known as balancing. The balancing procedure was as follows:

• Monte Carlo simulations were used to predict the line shape of the 129.4 MeV photon for each geometrically different crystal location. The clump energies determined from an analysis of the Monte Carlo data were collected into a set of reference histograms.

• A similar set of histograms was gathered for all elements of the Crystal Box after analysing the pion data.

• For each crystal the data and reference histograms were compared using a χ² minimization procedure. There were two free parameters: the overall normalization, and the gain. The Monte Carlo data were convolved with a gaussian of fixed width, σ = 2.0 MeV.

• A successful fit determined the gain for the crystal under consideration. However, as the clump sum is formed from many crystals, every time a gain is changed it affects the remainder. Hence the balancing procedure was an iterative one, repeated until the gains were seen to converge.

The balancing procedure was difficult and time consuming.
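The χ² gain fit at the heart of this loop can be sketched in a few lines. The toy gaussian line shapes, the grid search, and all names below are illustrative stand-ins, not the original fitting code:

```python
# Sketch of the per-crystal gain fit used in balancing. The "line shapes"
# here are toy gaussians; the real fit compared measured clump-energy
# histograms to Monte Carlo reference histograms.
import math

def line_shape(bins, mean, sigma):
    """Toy reference line shape sampled at bin centres."""
    return [math.exp(-0.5 * ((b - mean) / sigma) ** 2) for b in bins]

def fit_gain(data_hist, bins, ref_mean, ref_sigma):
    """Grid-search the gain that best maps the reference shape onto the data.
    The real fit also floated an overall normalization and minimized chi2."""
    best_gain, best_chi2 = 1.0, float("inf")
    for i in range(401):                       # try gains 0.800 .. 1.200
        g = 0.8 + 0.001 * i
        model = line_shape([b / g for b in bins], ref_mean, ref_sigma)
        chi2 = sum((d - m) ** 2 for d, m in zip(data_hist, model))
        if chi2 < best_chi2:
            best_gain, best_chi2 = g, chi2
    return best_gain

# Toy data: a 129.4 MeV calibration peak reconstructing 5% low.
bins = [100.0 + i for i in range(60)]
data = line_shape(bins, 0.95 * 129.4, 7.0)
gain = fit_gain(data, bins, 129.4, 7.0)        # lands near 0.95
```

Because each clump sum mixes many crystals, changing one gain shifts every histogram that crystal feeds, which is why the real procedure iterated fits like this over all 396 channels until the gain set converged.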
Therefore there was a significant time lag between the time the pion data were collected, and when the gains were available. Pion runs were done on average once per week. After a few gain sets had been collected and the balancing procedure established, it became apparent that the statistics were not sufficient to get good fits, particularly for corner or edge crystals. In subsequent pion runs the trigger was modified in such a way that, for part of the time, only the corners were triggering events.

For the engineering run, all of the gain sets were combined into one, as there were not sufficient statistics from any one set to be fit reliably. This turned out to be perfectly adequate for our needs, as the flasher system was not yet operational to track individual gain shifts anyway. In the production run, sufficient data were collected during each pion run, and the flasher system was fully operational.

4.3.4 The Monte Carlo Resolution.

Once the Crystal Box had been balanced, the pion data could be combined into histograms for each geometrically different crystal location. These now contained enough events to fit against the Monte Carlo, with the free parameter being the width of the gaussian used to convolve the simulation. During the balancing procedure, it had been found that the gains converged much more quickly if the resolution folded into the Monte Carlo was kept fixed. Now it was set free to determine its value for different crystal locations.

In the Monte Carlo, the geometry of the Crystal Box determines the basic line shape, but does not consider the broadening which arises from photon collection statistics in the photomultipliers. As the number of photons is proportional to energy, the resolution from photon statistics was expected to have a √E dependence.
Fits to the entire photon spectrum from the pion runs, which covered the range from about 55 to 130 MeV, determined that the resolution was best described by the function

R(E) = R_0 (E_0/E)^(1/2)   (4.7)

in close agreement with the predicted √E dependence. Here E_0 was the usual calibration energy of 129.4 MeV, and R_0 was the resolution at E_0. R_0 was also dependent on crystal location.

4.3.5 Calibration Adjustments Based on Crystal Location.

The balancing procedure only establishes that the same energy is predicted for equal amounts of energy deposited in each crystal. However, in the corners and outer edge crystals the average amount of energy deposited for a photon of given energy is much less than for face centre crystals. This is because a considerable fraction of the shower may escape out the sides of the Crystal Box. Hence for each different crystal location there is an additional shift factor which shifts the peak of the calibration gamma to 129.4 MeV. These shift factors are close to unity for the central crystals, but rise as high as 1.1 for the outer corner crystals.

As there were observed to be some (assumed) global gain shifts between pion sets, these shift factors were determined for each gain set. The kaon data were then divided into sets with one pion gain set per kaon set. These kaon sets were delimited by periods of long beam stoppage or other significant events. The pion gain set then determined the shift factors for each kaon set. A comparison of the Monte Carlo and data before applying these shift factors is shown in figure 4.4. The first fit is to crystals located in the central 6 x 5 array of each face only, and the second is a fit to the corner crystals.

4.4 Treatment of Neutrons.

As every neutral event arising from a K⁻ + p interaction is accompanied by a neutron, great care must be taken in their handling, both in the data, and the Monte Carlo.
Important amongst these considerations are the detection efficiency, the energy spectrum, and the efficiency of particle type identification.

4.4.1 Neutron Detection Efficiency.

In order to determine the neutron detection efficiency, the data collected from pions stopping in deuterium were examined. The following are the main processes allowed:

π⁻d → nnγ   26.1%
π⁻d → nn    73.9%   (4.8)

The γ-ray spectrum from the first reaction is well known and easily isolated. The neutrons in this case are very slow and usually carry away only a small fraction of the available energy. On the other hand, in the second reaction the neutrons are emitted at 180° to one another and have large kinetic energies of about 68 MeV. As the neutrons are back-to-back, the geometrical acceptance to detect both neutrons, or the single photon, is quite similar. Hence, by comparing the number of neutron pairs versus the number of photons, the acceptance plays a minor role, and the neutron efficiency can be determined.

Taking A_n and A_γ to be the acceptance for neutron pairs and photons respectively, the ratio of these two channels is given by

(π⁻d → nn)/(π⁻d → nnγ) = 0.739 A_n ε_n² / (0.261 A_γ ε_γ)   (4.9)

where ε_n is the neutron detection efficiency, and the photon detection efficiency ε_γ has been taken to be unity.

Figure 4.4: Fit of Monte Carlo to data for pion runs, a) face core crystals, b) inner corner crystals. The open circles are the data.

In the analysis, photons were defined to be events with at least one clump with energy E_γ > 100 MeV. The n-n events were identified as 2-clump events, with individual energies between 10 and 85 MeV, and an opening angle θ_nn > 135°. An analysis of the data determined

ε_n = 0.3 ± 0.1   (4.10)

where the error is due mainly to uncertainties which arise through low energy backgrounds and energy threshold effects.
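Under the assumptions of equation 4.9 (ε_γ = 1 and similar acceptances), the efficiency extraction amounts to taking a square root. The numbers below are an illustration chosen to reproduce the quoted value, not the experiment's actual count ratio:

```python
# Inverting equation 4.9 for the neutron detection efficiency. The branching
# fractions are those of equation 4.8; the "measured" ratio is illustrative.
import math

BR_NN, BR_NNG = 0.739, 0.261   # pi-d -> nn and pi-d -> nn-gamma fractions

def neutron_efficiency(ratio, acc_n=1.0, acc_g=1.0, eps_g=1.0):
    """Solve ratio = BR_NN * acc_n * eps_n**2 / (BR_NNG * acc_g * eps_g)."""
    return math.sqrt(ratio * BR_NNG * acc_g * eps_g / (BR_NN * acc_n))

# With eps_n = 0.3 and equal acceptances, the nn / nn-gamma event ratio is:
ratio = BR_NN * 0.3 ** 2 / BR_NNG   # about 0.25
eps_n = neutron_efficiency(ratio)   # recovers 0.3
```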
A similar analysis of events generated in the Monte Carlo was best accomplished by examining the π⁻d → nn channel alone. This is because there are no backgrounds in the Monte Carlo, and the difficulties encountered when estimating the π⁻d → nnγ three body final state, which does not have a pure phase space distribution, can be avoided. The probability of detecting both neutrons is ≈ A_n ε_n², whereas the probability of detecting only one neutron is 2A_n ε_n(1 − ε_n). Hence their ratio is independent of the acceptance. From the Monte Carlo a value

ε_n ≈ 0.4   (4.11)

was obtained. The conclusion one can draw from this is that the Monte Carlo and the data are in qualitative agreement, and that the detection efficiency is quite high. This was determined for fast, monoenergetic neutrons, whereas typical neutrons from the K⁻p decays have an energy range from a few MeV to about 65 MeV. However it is expected that the Monte Carlo and the data will remain in reasonable agreement as the neutron energy is varied over this range.

4.4.2 Particle Identification.

With large numbers of neutrons depositing detectable amounts of energy in the Crystal Box, it was important that the Monte Carlo and the data analysis agree in their ability to distinguish between photons and neutrons. To study this effect the deuterium data were used again. Using the same criteria to select "n-n" events, the program was asked to determine their particle type. The only test available to distinguish between neutrons and photons was the clump size in the Crystal Box. The neutrons were expected to deposit most of their energy in a single crystal. Hence neutrons were identified as being those events which had 95% or more of their energy deposited in a single crystal.
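The single-crystal concentration test just described can be written in a few lines. The clump representation (a list of per-crystal energies in MeV) and the function name are ours, not the original analysis code's:

```python
# Neutron/photon discrimination by energy concentration: a clump is called a
# neutron when at least 95% of its energy is carried by a single crystal.
def classify_clump(crystal_energies, threshold=0.95):
    total = sum(crystal_energies)
    if total <= 0.0:
        return "empty"
    return "neutron" if max(crystal_energies) / total >= threshold else "photon"

# A deposit concentrated in one crystal versus a spread-out photon shower.
kind_compact = classify_clump([48.0, 1.0, 0.5])
kind_spread = classify_clump([60.0, 25.0, 10.0, 5.0])
```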
Defining a to be the probability that a neutron is correctly identified, then the following probabilities exist:

P(γγ) = (1 − a)²
P(γn) = 2a(1 − a)
P(nn) = a²   (4.12)

Counting the number of each type found in the analysis of known n-n data resulted in a particle identification efficiency of

a = 0.22   (4.13)

or, that about 80% of the time the analysis was misidentifying neutrons. We worried that there may have been a strong energy dependent effect, and that we should look at neutrons with a more characteristic energy spectrum. The reactions

K⁻p → Σ⁺π⁻   18.9%
K⁻p → Σ⁻π⁺   44.8%   (4.14)

with subsequent decays

Σ± → π± + n   (4.15)

are a clean source of neutrons as well. The neutron energy range in these decays is very similar to the energy range of neutrons from the Λ decays. In addition these are the dominant decay channels with two detectable charged particles.

To locate these decays, the charged Event 5 data were used. Unambiguous candidates were selected by requiring that there be three clumps in three separate faces, to eliminate confusing cases of multiple hits in one face. As the π⁺ from the second reaction in equation 4.14 is monoenergetic (E = 82.5 MeV), we concentrated on events coming from that channel. The π⁺ was tagged by requiring it to deposit between 70 and 95 MeV in a face whose charged veto had triggered. The corresponding π⁻ may undergo nuclear capture and deposit up to its full energy in the detector. Hence the requirements on the π⁻ were that it be detected coincident with a charge trigger and have an energy of at least 70 MeV. The third clump, the neutron candidate, was expected to have an energy less than 70 MeV, and not to be associated with a charge. Note that a considerable number of events from the first reaction will also be included with these cuts, even though the monoenergetic π⁻ does not appear as a sharp peak in the data.
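The three-clump selection above can be sketched as a simple filter. Clumps are modelled as (energy in MeV, face, charged-veto flag) tuples; both the representation and the greedy assignment order are our own simplifications of the actual analysis:

```python
# Sketch of the three-clump selection for K-p -> Sigma-pi events: a tagged
# pi+ (70-95 MeV, charged), a pi- (>= 70 MeV, charged), and a neutral
# neutron candidate (< 70 MeV), each in a separate face.
def select_sigma_pi(clumps):
    """Return (pi_plus, pi_minus, neutron) if the event passes, else None."""
    if len(clumps) != 3:
        return None
    faces = {face for _, face, _ in clumps}
    if len(faces) != 3:                       # three clumps in three faces
        return None
    pi_plus = pi_minus = neutron = None
    for clump in clumps:
        energy, _, charged = clump
        if charged and 70.0 <= energy <= 95.0 and pi_plus is None:
            pi_plus = clump                   # monoenergetic pi+ tag (82.5 MeV)
        elif charged and energy >= 70.0 and pi_minus is None:
            pi_minus = clump                  # pi-, may deposit full energy
        elif not charged and energy < 70.0 and neutron is None:
            neutron = clump                   # neutron candidate
    if pi_plus and pi_minus and neutron:
        return pi_plus, pi_minus, neutron
    return None

event = [(82.0, "top", True), (110.0, "left", True), (35.0, "right", False)]
result = select_sigma_pi(event)               # passes all three tags
```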
Figure 4.5a shows the pion spectrum, with the π⁺ standing out clearly above the broad distribution of the π⁻. In figure 4.5b the neutron spectrum is shown. As expected it is peaked towards low neutron energy. Assuming these to be real neutrons results in the particle type identification efficiencies for two different energy ranges of

a(E_n < 30 MeV) = 0.13
a(E_n > 30 MeV) = 0.22   (4.16)

corroborating the deuterium results.

Figure 4.5: Energy spectra from the K⁻p → Σ±π∓ reactions; a) pion energy, and b) neutron energy.

In contrast, an analysis of the Monte Carlo data showed that neutrons were almost always correctly identified. While this is a big discrepancy, it can be handled relatively easily. First of all, in both the Monte Carlo and the data, the photons are only misidentified about 1% of the time for all but the lowest energies. Hence the proper handling of the Monte Carlo was to include, with appropriate weights, those events with neutrons detected. As the Monte Carlo usually got the neutron identification right, and we were not that interested in clumps of energy less than 30 MeV, the factor for neutron misidentification was taken to be 80%. So, for example, if the data were analysed with the condition that there be three photons and either zero or one neutron detected, then in the Monte Carlo the following data sets had to be included:

• Monte Carlo data with 3 photons and 0 neutrons. Weight 1.0

• Monte Carlo data with 2 photons and 1 neutron. Weight 0.8

• Monte Carlo data with 3 photons and 1 neutron. Weight 0.2

The reasons for the discrepancy between the Monte Carlo and data in this instance are not clear. A likely explanation is that GEANT does not do a good job in tracking the low energy neutrons.
This can also be seen in that GEANT does not do a particularly good job of predicting the neutron energy spectrum either, as will be seen below. This, however, does not explain why the neutron identification test does such a poor job for the data, because when the Crystal Box was still at Los Alamos, it was reported that such a test was quite adequate [61]. The possibility exists that neighboring crystals are no longer that well optically isolated from one another. This would lead to significant cross talk between adjacent crystals and an apparently dispersed clump. The crystals are separated only by a very thin layer of mylar, and 3500 km by truck from Los Alamos to Brookhaven via the Smoky Mountains may have introduced a large number of light leaks.

4.4.3 Monte Carlo Energy Spectrum for Neutrons.

GEANT does not predict the neutron energy spectrum very well, as is evidenced by the spectra shown in figure 4.6. Although the general shape is about right, the Monte Carlo appears to have more structure than it has any right to have. However, as the neutron energy was ill-determined at the best of times, the energy information was only ever used in the form of a kinematic limit. The inexactness of the Monte Carlo neutron spectrum was not believed to be significant.

Figure 4.6: Comparison between the Monte Carlo and the data of the neutron energy spectra.

4.5 Pile-up Studies.

Pile-up, the accumulation of energy in the detector from spurious events occurring close in time with the event which caused the trigger, was a potential problem in E811. Significant amounts of pile-up could lead to a high energy tail in the photon spectrum arising from the process Λ → nπ°. This tail would then encroach on the Λ weak radiative decay signal.
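The rate dependence of this problem can be illustrated with a simple Poisson estimate. Only the roughly 0.5 MHz beam rate is taken from the running conditions quoted later; the 250 ns gate width below is a purely hypothetical number for illustration:

```python
# Back-of-envelope Poisson estimate of pile-up probability: the chance of at
# least one extra beam-related hit inside the NaI integration gate.
# The 250 ns gate width is an assumed, illustrative value.
import math

def pileup_probability(rate_hz, gate_s):
    """P(>= 1 extra hit in the gate) for Poisson-distributed arrivals."""
    return 1.0 - math.exp(-rate_hz * gate_s)

p = pileup_probability(5.0e5, 250e-9)   # ~0.5 MHz beam, hypothetical gate
```

Even this crude estimate gives a pile-up probability at the ten percent level, which is why both the beam rate and the integration gate mattered.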
With a necessarily long integration time for the NaI, it was important to maintain the beam at a sufficiently low rate so as to keep the likelihood of pile-up at a minimum. Also, some components of the electronics were specifically designed to identify events with pile-up.

4.5.1 The Pile-up FERA system.

The main indicators of pile-up came from the second set of ADCs which were added to sample the leading edge of each NaI signal. As the gate was much narrower than the full energy gate, and was positioned on the steepest slope of the signal, it was quite sensitive to signals arriving out of time. Hence a ratio between the pile-up and main FERA signals was a good test for pile-up.

The pile-up FERAs were installed during the first week of the engineering run. The gains in the amplifiers were set during that period. Figure 4.7 shows the response curve of a typical amplifier channel for a variety of gains. These amplifiers become non-linear for output pulses exceeding about 250 mV. The gains were set to 5.5, as this value satisfied the requirement that the amplifiers be linear over the entire range of the main FERAs.

Figure 4.7: Response curves of the pile-up amplifiers.

However, during the setup stage it was determined that the main FERAs were overflowing. To reduce the overall gain, the photomultiplier voltage was dropped 100 volts to 1400 V. This represented a drop in overall gain of about a factor of two, and hence the pile-up gains should have been compensated with an increase by this factor. However, as the pile-up issue was of relatively low priority, and resetting the 400 gains would be a time consuming and disruptive task, they were left at their original values. The upshot of this was that the gains were set to about 2 MeV per channel, compared to about 0.15 for the main FERAs, and so were not that
sensitive to pile-up effects at very low energies. In the production run, a change in gains and modifications to the pile-up amplifiers increased this sensitivity by about a factor of three.

There were two types of pile-up of interest. The first of these was small amounts of energy deposited at random throughout the Crystal Box, but without sufficient energy to be identified as a clump. To examine the effects of this type of pile-up, data were collected at four different beam rates. These were:

Low beam rate: ≈ 200,000 beam particles/burst,
Medium beam rate: ≈ 600,000 beam particles/burst,
High beam rate: ≈ 1,000,000 beam particles/burst, and,
Very high beam rate: ≈ 1,600,000 beam particles/burst.

These data sets were analysed, and the number of crystals which were found to have an energy of 0.5 MeV or greater, but which were not located within an existing clump, was determined. Table 4.4 shows the average numbers resulting from such an analysis. In the final column, the average amount of energy contained in each such crystal is listed.

Table 4.4: Pile-up in random crystals at low energy.

  Beam Rate   # Crystals Above 0.5 MeV   Mean Energy of Crystals Above 0.5 MeV
  Low         4.60                       0.68
  Medium      4.82                       0.73
  High        5.09                       0.79
  V. High     5.31                       0.87

While table 4.4 clearly shows evidence of pile-up behavior, it also shows that the contribution from these sources is quite negligible. Typical events utilize between 75 and 100 crystals in the clump definitions. Even at the highest rate, there are only 5.3 of the remaining 300 crystals observed to be above pedestal. This would suggest that the number of such spurious crystals expected to contribute to a typical clump of 25 crystals would be about 0.44. Then the amount of surplus energy per clump at the very highest rate would only be 0.4 MeV. Comparing this with the lowest rate, which can be assumed to be virtually pile-up free, the total surplus energy per clump is about 0.26 MeV.
Hence contributions due to pile-up are expected to be a mere 0.14 MeV on average. Since we ran at typical rates of around 500,000 particles/burst, pile-up from random hits at low energy was not a significant factor.

We then considered the second case, where the pile-up was of sufficient energy to be identified as a clump. To study this type of pile-up, we examined the energy spectra of the candidate weak radiative decay gamma rays. We defined the signal region to be from 140 to 180 MeV, and the background region to range from 180 to 250 MeV. Assuming a flat background in both the signal and background regions of b counts per MeV bin, and s valid events per MeV bin in the signal region, the ratio of the two will be

r = (s + b)/b   (4.17)

Then the number of signal events was normalized by the rate independent factor N, which was determined from scalers proportional to the number of primary protons. This yielded the result

s_lo/s_hi = (r_lo − 1)N_hi / ((r_hi − 1)N_lo) = 0.82 ± 0.13   (4.18)

suggesting the presence of some pile-up. As the number of events in the background region was very small, in the above equation "lo" was a combination of both the low and the medium rates, and "hi" was a combination of the high and very high results.

The only useful cut found to eliminate pile-up came from taking the ratio of the clump energy determined by the pile-up FERAs over that determined by the main FERAs. Figure 4.8 shows a scatterplot of these two values. Shown as a solid line through this scatterplot is the best fit curve to the data, determined by a polynomial regression technique. Cuts were then made by excluding all data in regions located outside curves running parallel to the central curve. The effects of such cuts are shown in figure 4.9a, which shows the fraction of all events eliminated by cuts 30, 40, 50, or 60 MeV below the standard curve. Likewise figure 4.9b shows the effects of such cuts above the standard curve.

Figure 4.8: Scatterplot of clump energy. Pile-up versus main FERAs.

Figure 4.9: Fraction of events eliminated by pile-up cuts, a) Cuts below the standard curve by 30, 40, 50 and 60 MeV, and b) Similar cuts above the standard curve.
These curves show that the cuts are eliminating substantially more events at high beam rates, particularly for cuts below the standard curve.

It also appeared that most of the pile-up originated with particles scattered from the beam line into the Crystal Box. In figure 4.10 the variable ΔPU is the difference between the actual energy determined by the pile-up FERAs and that predicted by the main FERAs. The solid line is for the furthest upstream crystals only, whereas the dashed line is the same spectrum for the downstream crystals. The pronounced tail to the left of the main peak is only present in the upstream crystals. This is clearly indicative of pile-up, and suggests accepting events with about ΔPU > −40 MeV.

Figure 4.10: Pile-up in the Crystal Box. Plotted is the difference between expected and observed clump energies as determined by the pile-up FERAs.

4.6 Degrader Tests and Target Size.

The engineering run was intended to address the interrelated issues of inflight interactions, degrader induced backgrounds, and kaon stopping rates. To this end, we conducted a sequence of tests in which the degrader thickness and composition were varied, and two target versions were tried.

Degraders of pure copper, and combinations of copper and carbon, and copper and BeO were tested. The hope was that installing a block of material with low atomic number as the downstream component of the degrader would significantly reduce the amount of multiple scattering of the beam.
On the other hand, it was suspected that this might lead to more backgrounds from recoil nuclei boiling out of the degrader. Although the differences were not that large, the following statements were found to be qualitatively true:

• The pure copper sample had fewer events in the background region relative to the signal region in the energy spectrum for the candidate weak radiative decay photon.

• The number of valid data triggers per incident kaon was higher for pure copper.

• The number of valid data triggers relative to the estimated background rate of the Crystal Box was highest for pure copper.

On the basis of these observations, pure copper was selected to be the best choice of degrader material.

Also at issue was the ideal thickness of degrader to use. Although range curves indicated that the maximum stopping rate per incident kaon occurred at 12.7 cm, most of our data were taken at 13.0 cm. The extra thickness was added to lower the mean momentum of the beam emerging from the degrader, thereby lowering the contributions from inflight interactions in the hydrogen target.

While the use of a larger target substantially increased the kaon stopping rate, an analysis of the data showed that the large uncertainty in the vertex location made the reconstruction of events less reliable. Figure 4.11 shows the π° energy spectra in the case where two of them were found. Although the energy resolution should not be affected by the vertex location, the apparent broadening for the long target results from incorrect photon combinations passing angular cuts which they would normally fail with improved vertex information. Likewise figure 4.12, which plots the opening angle between the two photons from π° decay, shows the dramatic difference between the long and short target at small angles.
Figure 4.11: The different π° energy spectra when two π°s were found. Plotted for both the long target (diamonds) and short target (histogram).

Noting that the reconstruction capability was being compromised, that the amount of data was less than a third of that available for the short target, and that it would require a huge time investment in generating more Monte Carlo simulations for this target size, the long target data were not analysed.

Figure 4.12: The different spectra of the opening angle of one π° when two π°s were found. Plotted for both the long target (diamonds) and short target (histogram).

4.7 The First Pass Skimming Procedure.

In order to compress the 1050 tapes of data to a more manageable size, and to collect the various types of events into specific output files, a first pass through the data was made using conservative cuts. In addition, all of the original data were copied onto 8 mm VCR magnetic tapes to allow the old tapes to be cleaned and recycled for the production run. The following sections describe the cuts imposed on the data and Monte Carlo during this first pass skim.

4.7.1 Basic Cuts: Data Events.

Beam Line Elements and Veto Counters.

The veto counters were observed to have fairly broad pedestals well separated from the charge peak. The cuts were set to eliminate all events with hits recorded above these pedestals. Loose cuts were imposed on the two dE/dx counters. These cuts were set to accept all but the most extreme values, ensuring that no kaons were being vetoed. The most significant cut on beam line elements came from the requirement that the hodoscope record one and only one hit in each of the x and y directions.

Pedestal Suppression.
The data word list was significantly reduced by eliminating all FERA words which were found to be within one channel of the nominal pedestal value. This action alone was responsible for a reduction in volume by a factor of almost four.

Clump Restrictions.

Any events possessing more than 10 clumps were automatically eliminated from further analysis. Events with less than three γ clumps were also excluded. Because of severe energy losses incurred by clumps located in the upstream and downstream edge crystals, such events were deemed to be not reconstructible and hence were discarded. There was also a cut on the total energy in the Crystal Box, that it not exceed 600 MeV. Events with five or more γ clumps were skimmed to a special output file without further analysis. Only the 3-γ and 4-γ events were analysed in detail.

3-γ Events.

All 3-γ events, irrespective of the number of neutrons detected, were tried as candidate events from the process K⁻p → Λπ°. The third photon was expected to follow from the decay of the Λ, although no restrictions were imposed on it during this first pass skim. We were only interested in tagging the π°. This analysis was intended to select those events produced through channels R1 and R2. All possible configurations of γs were tried. The following cuts were applied:

E_π° - The energy of the π° was calculated to be E_π° = E₁ + E₂, where E₁ and E₂ were the measured energies of the candidate π° photons. E_π° was restricted to the range 240 < E_π° < 330. The expected value was 288 MeV.

P_π° - The momentum of the π° was similarly calculated as P_π° = |P₁ + P₂|. The cuts were selected to be 200 < P_π° < 280, bracketing the expected value of 254 MeV/c.

P_asy - This was just another way of calculating the momentum of the π°, using only the energy asymmetry (E₁ − E₂)/(E₁ + E₂) and the π° opening angle θ₁₂. The limits were set to be 190 < P_asy < 320.

ERRF - This cut was basically a π° goodness of fit parameter.
It was calculated  as  130  The cut limits were set to —0.08 < ERRF  <  0.12 which was reasonably  effective at selecting a valid 7r° only.  Mi - The 7T° invariant mass, calculated as nv  yj2E E {\ X  2  - cos0 ), was restricted to :2  the range 90 < Minv < 170. The expected mass was 135 MeV/c . 2  4—7 Events. All 4 - 7 events were first tried as 3 - 7 events on the supposition that one of the 7s was a misidentified neutron or other spurious clump. The cuts used were identical to those used above in the regular 3-7 analysis, and again all possible combinations of 3 photons were tried. Those 4 - 7 which could not be fit to a 3-7 process were tried as 4 - 7 events arising from the process K~p — > E°7r° with subsequent decay E° —> A 7 . Once again we were primarily interested in tagging the A production, so the majority of the cuts were imposed on the candidate TT° photons and the E° decay photon. The A was expected to have decayed to produce at least one gamma. The cuts were designed to select channels R4 and R5. The following cuts were used, in analogy to the 3-7 events. E^Q - Restricted to 205 < E^o < 260. The expected value was 225 MeV. P^o - The expected value was 180 MeV/c so P^o was confined to the range 155 < P^o < 215. P  - Restricted to the range 140 < P  asy  < 230.  asy  ERRF  M{  nv  -  Restricted to the range -0.10 < ERRF  - Restricted to the range 120 < M,- „ < 150. n  131  < 0.12.  EWRD  - This was the only restriction placed on the candidate weak radiative  decay photon. It required that the energy have a lower limit; EWRD > 100 MeV. The expected range was from 130 to 202 MeV. ENSK  - The prominent photon peak coincident with the A in E° decay was re-  quired to be in the range 45 < ENSK 64  to  87  < 95. Its kinematic limits were from  MeV.  PA - This cut required that the A momentum be restricted to the range; 90 < PA < 250. The allowed range extended from 95 to 245 MeV/c.  4.7.2  Other Event Types.  
The other event types were also analysed and skimmed to separate output files (recall section 3.3.1, where these event types are described in detail). A brief description of the analysis procedure for each is outlined below.

Event 4 - Copied without cuts, other than the suppression of data words within one channel of the pedestal. In addition, individual pedestal files were calculated for each run.

Event 5 - These were skimmed by requiring a valid kaon using the standard beam line conditions. Charged events were not rejected. Data reduction was accomplished primarily by suppressing the pedestals.

Event 7 - These were significantly compressed by a procedure of averaging. Averages were calculated in sets of 25 words for both beam-on and beam-off events. The averaged data were skimmed to the output device.

Event 9 - The pion events were skimmed into a special format for input into the gain determining routines. The basic beam line conditions were applied.

Event 11 - The scaler events were copied without modification.

4.7.3 Skimming the Monte Carlo Data.

When skimming the Monte Carlo data, the goal was to make the conditions as similar as possible to those applied to the data. Hence, wherever possible, the same code and cuts were used, the only exceptions being the beam line element conditions, which were not a part of GEANT. This also meant including two errors which were subsequently discovered to exist in the original data skimming codes. The first was relatively minor and was actually responsible for extra 4-γ events being skimmed. The second was more serious, causing the loss of some events with a wide π⁰ opening angle. However, incorporating both of these errors in the Monte Carlo skimming routine ensured that the same events were being kept in either case.

Another important point involved the clump shifting procedure. In the original skim this procedure had not yet been developed.
Hence when skimming the Monte Carlo data, the clumps were first shifted to their "proper" values, and then shifted back again, this time using the shift factors determined for the data. In this way, the Monte Carlo clump energies mimicked the original data.

The other major difference was in the handling of the neutrons. The Monte Carlo data were skimmed to different output files depending on the numbers of neutrons and photons. These could then be collected into histograms and added with the appropriate weights. This meant, for example, that 2-γ plus 1 neutron events were tried as 3-γ events in the Monte Carlo, since such an event in the data would usually have its neutron misidentified as a photon.

Chapter 5

Data Analysis.

5.1 Considerations in the Selection of Data.

Since much of the data obtained during the engineering run was useful for diagnostic purposes only, the first step of the final analysis was to decide which data sets should be included. The following criteria were used to make that selection:

• Only those runs subsequent to *178 were included. Prior to that time, electronic and hardware failures were occurring quite frequently. Also, much of that data was obtained using a higher than normal photomultiplier voltage, for which there was an insufficient amount of stopped pion data to determine the gains.

• To reduce the contaminating effects of pile-up, runs with an average beam rate above 0.5 MHz were excluded from the analysis.

• Only the data obtained while using the short target were kept for analysis.

• Only those data collected using the 12.7 or 13.0 cm copper degraders were selected. All other data, with different degrader materials and thicknesses, were either unsuitable or insufficient in quantity.

A summary of the short target data, from run *178 onwards, is presented in table 5.1.

Table 5.1: Data taken during the short target running period.

    Data Type                                 No. of Runs
    Neutron calibration using π⁻d data.            10
    Energy calibration using π⁻p data.             82
    Degrader tests.                                81
    High rate studies.                             57
    Target Full,  12.7 cm degrader.*              111
    Target Full,  13.0 cm degrader.*              357
    Target Full,  13.3 cm degrader.                14
    Target Empty, 12.7 cm degrader.*                3
    Target Empty, 13.0 cm degrader.*               11
    Target Empty, 13.3 cm degrader.                 5

    * Only these items were included in the analysis.

5.2 General Analysis Techniques & AIDA.

The first pass skim, combined with the exclusion of all but the most useful data, reduced the number of magnetic tapes required to contain the data from 1050 to just under 10. However, this was still rather unwieldy, especially during the phase of the analysis when it was necessary to investigate all combinations of conditions and variables. As the number of events in the signal region was very small, most tests required a complete pass through all the data. This required about 15 hours of computing time, and the continual loading and off-loading of tapes. Hence an alternate scheme was developed which used the general purpose analysis routine AIDA [71] (An Interactive Data Analysis program).

In the normal skimming procedure, for each valid event, the entire data bank was written to an output file. The AIDA technique was to skim mainly the final variables of interest, for example the π⁰ energy or the vertex location, so that subsequent passes through the data would not have to recalculate these terms. Although some changes in the analysis code necessitated another pass through all the data,
In addition, these histograms could be created while imposing an array of conditions on the remaining variables. This allowed the correlations between variables to be easily identified, and the suitability and effectiveness of the conditions determined.  The data were then processed by locating those combinations of cuts which were appropriate for, and effective at, reducing the background channels without eliminating the signal. It was also very useful to be able to replay any particular channel of the Monte Carlo to determine how the various conditions were constraining that process. Once satisfied that the cuts were having the desired effect, the histograms were written to disk files in a format suitable for reading by the fitting routines.  It is important to stress that when searching for a peak in the data, kinematic constraints were not applied without due consideration to their origins and effects. This was because artificial peaks may be produced easily through a procedure of relentlessly applying those cuts which have a tendency to enhance a particular region, whether by accident or design. Hence the goal was to reduce the backgrounds while using a minimal set of well understood cuts. In this way we retained realistic distributions and a healthy number of events. 136  5.3  The Fitting Routine.  When fitting the Monte Carlo spectra against the data, a x minimization tech2  nique was used to determine the most likely values of the free parameters. The fitting routine used the standard MINUIT [72] subroutines to perform the function minimization and parameter determinations. MINUIT was chosen as it can fit arbitrarily complex functions, has a proven track record, and the interpretation of its error estimates are well documented. The basic steps in the fitting procedure were as follows; 1. To account for those events resulting from interactions within the target walls or internal scintillators, a target empty spectrum was subtracted from the target full. 
The number of kaons, N_K, incident on the target in each configuration was used as the normalization factor between the two spectra. These normalization factors were extracted from the Event 11 scaler data. Hence, the final data histogram was calculated as;

    H_data = H_full - ( N_K^full / N_K^empty ) H_empty        (5.1)

Unfortunately, the amount of time spent on target empty runs during the engineering phase was hopelessly inadequate. This resulted in a target empty normalization factor of about 20. To diminish the effects of a large normalization factor being applied to a histogram with poor statistical accuracy, it was necessary to subject the target empty spectra to a smoothing process before subtraction.

2. The first step in bringing together the Monte Carlo data was to combine, with the appropriate weight factors, those histograms generated with different photon and neutron multiplicities. This resulted in one histogram per Monte Carlo channel. The next stage was to combine all channels of the Monte Carlo into a single reference histogram. The histograms from the individual channels were added together, each scaled according to the number of Monte Carlo events generated for that process. The final reference histogram was of the form;

    H_ref = P0 { P1 ( H0^wrd + Hp^wrd ) + H0 + Hp }        (5.2)

In this equation, P0 was an overall scale factor between the Monte Carlo and the data, and P1 was the Λ weak radiative decay branching ratio. Both of these were parameters of the fit. The individual terms are described in more detail below.

H0^wrd - was the contribution from all channels of the Monte Carlo which occurred at rest, P_K⁻ = 0, and which included the Λ weak radiative decay. Hence this was defined as

    H0^wrd = Σ_i ( B_i / N_i ) H_i ,    i = 1, 4, 11 and 14        (5.3)

Here B_i was the branching ratio to produce a Λ through process R_i, and N_i was the number of Monte Carlo events which were thrown to generate histogram H_i.
Hp^wrd - was similarly the contribution from all inflight channels of the Monte Carlo which included the Λ weak radiative decay;

    Hp^wrd = P2 { ( B22 / N22 ) H22 + ( B25 / N25 ) H25 }        (5.4)

The factors B_i were essentially the branching ratios for the inflight processes K⁻ + p → Λ + π⁰ and K⁻ + p → Σ⁰ + π⁰. These factors were estimated using the DEGRADER Monte Carlo routine. P2 was a parameter of the fit which allowed the amplitude of these types of inflight interactions to vary.

H0 - was the contribution from all at-rest channels other than those included in H0^wrd. Hence it was given as;

    H0 = Σ_{i=2}^{16} ( B_i / N_i ) H_i ,    i ≠ 1, 4, 11 or 14        (5.5)

where now the proper interpretation of B_i is as the total branching ratio for the process R_i, as given in table 4.1.

Hp - was the contribution from the remaining inflight channels;

    Hp = P2 { ( B23 / N23 ) H23 + ( B26 / N26 ) H26 } + P3 { ( B21 / N21 ) H21 + ( B36 / N36 ) H36 }        (5.6)

The additional parameter P3 was included to allow the amplitude for the K⁰ channel to vary independently from the other inflight processes. One would expect P2 and P3 to be equal, at least to the level that the inflight cross-sections were known. The reason that these were kept as separate parameters stems from complications which arose during fits to determine the level of the K⁰ inflight contribution. In general, the shapes of the spectra for the inflight processes associated with P2 are very similar to those of their at-rest counterparts. Hence the fit should be relatively insensitive to their presence. However, because fewer Monte Carlo events were generated for the inflight processes, the χ² could be minimized by incorrectly enhancing their contribution. If P2 and P3 were treated as one parameter, the value of P3 would be drawn above its correct value. Once P3 had been determined, P2 and P3 were kept fixed at that value.

3.
There was also a parameter which allowed the gains for the data to be shifted slightly. This was used to compensate for small differences between the Monte Carlo and the data at the level of a fraction of one per-cent. These differences were only apparent at the highest energies, and were not observable during the pion fits at lower energy. They may have resulted from very slight errors in the energy calibration, or they could have been the result of some residual non-linearity. The gain factor was determined to be 0.998 by fitting the π⁰ energy spectrum. It was held fixed at that value thereafter.

4. The calculation of χ² was done in the usual way;

    χ² = Σ_j { ( H_j^data - H_j^ref ) / W_j }²        (5.7)

where the index j ran over the histogram bins which were being included in the fit. The error associated with each bin was given by W_j. These errors were estimated assuming Poisson-type statistics, and included the errors from both the Monte Carlo and the data, added in quadrature.

5.4 New Basic Cuts Common to the 3-γ and 4-γ Analyses.

The following conditions, beyond those used in the first pass skim, were applied to all the data, independent of the analysis mode.

Threshold. While taking data, the energy threshold was set to approximately 300 MeV. However, since the hardware balance was only approximate, some events as high as 340 MeV were not being accepted. To ensure that there would be consistency between the Monte Carlo and the data, we set the software threshold to 340 MeV.

Corner Edge Rejection. All events which had a clump centred in one of the outside corner elements were rejected. This was done to avoid those events which had excessive energy leakage out of the Crystal Box. This cut was analogous to that imposed on the face edge crystals during the first pass skim.

K⁻ Timing. In principle, it was possible for an incident kaon to stop in the degrader or the dE/dx counters and have some other particle, close in time, trigger S4.
To ensure that the particle which triggered S4 was the same particle which had been observed in the upstream counters (S3, the kaon Cerenkov, etc.), a cut was placed on the relative timing between S3 and S4. The spectrum was observed to be very clean, see figure 5.1, and only a fraction of a per-cent of all events were eliminated by the cuts. The cuts used are indicated by arrows in the figure.

Figure 5.1: Relative timing between counters S3 and S4. Time zero has arbitrarily been set to be the mean time. Any data outside the arrows was rejected from the analysis.

Pile-up. The most effective way of eliminating events with pile-up was to apply a cut of the form discussed in section 4.5. We required that the measured clump energy determined by the pile-up FERAs be not less than 40 MeV below that predicted by the main FERAs, i.e. we required ΔPU > -40 MeV.

S4 Counter. The S4 counter was the last detector before the target assembly. Despite being relatively thin, the energy deposited in this scintillator was a valuable source of information. However, its ADC was malfunctioning during almost 30% of the useful runs. As the counter itself had been operating properly, and as this ADC had not been one of the "critical" items being monitored on a regular basis, this fault went undetected for quite some time. Fortunately the faults were usually correctable in the offline analysis. Normally such an ADC produces an 11 bit word (2000 channels) proportional to the input voltage. This ADC had several of its own unique output modes, and it oscillated between these on a time scale of a few runs. The observed output modes were;

1. All channels in their correct order. Therefore there was no need to make any adjustments.

2. The first 768 channels were in their correct order, the rest were all shifted up by 256 channels. These were 100% correctable.

3.
The first 256 channels were correct, while the next three sets of 256 channels were out of place by 256, 512, and 768 channels respectively. It was also possible to recover from this type of malfunction.

4. The first 256 channels were correct, the second set were shifted by 256, and the remainder were absent. These were therefore only correctable for words of less than 512 channels.

By (painstakingly) examining each run in sequence, or part run as was necessary, the individual runs were categorized as to their mode of failure. The information was then recovered by making the appropriate correction in software. There were a few confusing runs which were too difficult to unravel, but these represented less than 5% of the affected data.

dE/dx Cuts. Cuts on the energy deposited in the dE/dx counters could be used to eliminate some of those events arising from inflight interactions. It was expected that there would be a significant number of events in the signal region which originated from inflight interactions. Hence, to determine the most suitable values for the dE/dx cuts, the number of events in this region relative to the total was determined for different energy ranges in each scintillator. Plots of this ratio versus the ADC output for the dE/dx counter E2 are shown in figure 5.2. The sharp rise at low energy is a clear indication of the presence of inflight processes.

Figure 5.2: The placement of the dE/dx cuts. The fraction of all events located in the signal region is shown as a solid line. The data were selected to come from the region between the arrows. For reference, the E2 energy spectrum is also plotted. This is shown with diamonds and uses the right hand scale.

The cuts were selected to cover the entire range of each ADC in the region where the ratio was observed to be reasonably constant.
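The cut-placement scan just described can be sketched as follows. This is a minimal illustration with toy numbers standing in for the E2 spectrum; the contamination model, bin edges, and array names are all hypothetical, not the original analysis code:

```python
import numpy as np

def signal_fraction_by_bin(adc, in_signal, bin_edges):
    """For each ADC bin, return the fraction of its events that fall
    in the signal region of the Doppler corrected energy spectrum."""
    frac = np.full(len(bin_edges) - 1, np.nan)
    which_bin = np.digitize(adc, bin_edges) - 1
    for b in range(len(bin_edges) - 1):
        sel = which_bin == b
        if sel.any():
            frac[b] = in_signal[sel].mean()
    return frac

# Toy data: inflight contamination concentrated at low dE/dx,
# since fast kaons deposit less energy in the counter.
rng = np.random.default_rng(1)
adc = rng.normal(500.0, 60.0, 50000)                  # E2 ADC channels (toy)
in_signal = rng.random(50000) < np.where(adc < 420, 0.10, 0.02)
edges = np.arange(350, 660, 25)
frac = signal_fraction_by_bin(adc, in_signal, edges)
# The cut window would then span the bins where frac is roughly flat.
```

The window between the arrows in figure 5.2 corresponds to the region where this fraction levels off.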
The cuts did not need to be very tight, as the three counters to which they were applied (E1, E2, and S4) were strongly correlated in energy. The cut values used for E2 are indicated by the vertical arrows in the plot.

5.5 Analysis of the 3-γ Data.

5.5.1 Event Reconstruction.

The kinematic variables used in the analysis of the 3-γ events are shown schematically in figure 5.3. As we were interested in the decay process Λ → n + γ, it proved to be most useful to transform to the rest frame of the Λ, where the decay photon is monoenergetic.

Figure 5.3: Kinematic variables describing the 3-γ process.

In order to make this transformation, the variables θ_Λγ and P_wrd had to be determined. As the Λ was assumed to be produced from a K⁻p atom at rest, its direction was opposite to that of the π⁰. The π⁰ direction was calculated from P₁ and P₂, where the direction cosines were determined by assuming the vertex to be given in x and y by the hodoscope, with the z-position being the target centre. The Λ particles would travel an average distance of almost 2.0 cm before decaying in the target. Hence the Λ decay vertex was taken to be translated from the original vertex by that amount. The Λ vertex and direction, combined with the location of the weak radiative decay photon, allowed θ_Λγ to be determined. The transformation to the Λ rest frame yielded the Doppler corrected weak radiative decay energy;

    E_cor = E_wrd γ_Λ { 1 - β_Λ cos(θ_Λγ) }        (5.8)

In this equation, γ_Λ and β_Λ are 1.0256 and 0.2222 respectively for a Λ produced through the at-rest process K⁻ + p → Λ + π⁰.

5.5.2 The Principal Channels Contributing to the 3-γ Events.

The Λ WRD photon was expected to have an energy of 162 MeV in the Λ rest frame. To isolate this peak, a tight cut was imposed on the π⁰ energy of 270 < E_π⁰ < 295
With the 7r° thus identified, the Monte Carlo prediction for the signal channel R l , (recall tables 4.1 and 4.2), is shown in figure 5.4a. The structure at low energy originated mainly from neutrons being misidentified as photons, although there was also a small contribution from incorrect combinations of photons which, when reconstructed, looked like a 7 r ° . If the 7T° had always been identified correctly, then the only other channel which should have been contributing to this spectrum would have been R2. This could have happened if, as was often the case, one of the photons from the second 7T° went undetected by the Crystal Box. In the A rest frame, the maximum photon energy from the decay of this 7r° was « 1 3 7 MeV. Hence these photons usually appeared below the 162 MeV peak.  However, in the lab frame, these photons 145  ranged as high as 172 MeV, and since  was determined assuming process R l ,  it was not the appropriate factor to use to make the Doppler correction. Hence the most energetic photons produced a high energy tail which extended beyond 162 MeV. The spectrum predicted by the Monte Carlo for channel R2 is shown in figure 5.4b. It has been normalized to R l assuming a weak radiative decay branching ratio of 1.5 X 10 . The inset shows the expected R2 contribution superimposed on -3  the signal R l in the region from 140 to 180 MeV.  Doppler Corrected Energy (MeV)  Doppler Corrected Energy (MeV)  Doppler Corrected Energy (MeV)  Doppler Corrected Energy (MeV)  Figure 5.4: The main processes contributing to the 3-7 analysis. These spectra only have the basic set of cuts on them. The inset shows the signal region with the R l process overlaid.  If these were the only contributions to the signal region, then the analysis would have been quite straight forward. However, accidental pairings from any of 146  the 5 photons produced in process R5 led to a significant number of events in the signal region. 
The Monte Carlo spectrum for the R5 channel is shown in figure 5.4c. It has a very identifiable peak at about 75 MeV which is attributable to the photon emitted in the decay S° —>• A 7 . Unfortunately, to obtain the same order of statistical accuracy in the signal region for the R5 Monte Carlo events, as are shown for the R l events, would require almost a decade of computer processing time with the current TRIUMF resources. Hence every effort was made to reduce the fraction of R5 events surviving the analysis relative to the number of signal events. The last major contribution to the signal region came from inflight interactions. The most problematic of these was the kaon charge exchange channel R21, followed by the decay K® —* 7r° + T T . The other inflights were reasonably well un0  derstood in terms of the analogous processes occurring at rest. Also, to first order, the branching ratio was independent of their contribution. This was because the relative normalization between the inflight channels was the same as that between the corresponding processes which occurred at rest. However the charge exchange channel was a serious problem. Its spectrum is shown in figure 5.4d where it appears as a predominant feature in the signal region. Of lesser concern was the uncertainty in the momentum of the kaons interacting inflight.  In the Monte Carlo we relied on the predictions of the DEGRADER  program. This was a reasonable approximation given the large momentum spread of the beam entering the target.  5.5.3 Inflight  Suppression of Backgrounds. Contributions.  It was important to determine the relative amplitude of the kaon charge exchange process as it was a major, and somewhat uncertain, component of the background. 147  The following techniques were used to measure its amplitude. • The data were scanned to locate all 4-7 events which could be reconstructed to form a  7r°-7r°  pair. 
The only possible channels in which this could occur with any likelihood were R2, R5 and R21.

• To find those events which would contribute to the inflight spectra of the 3-γ analysis, we required that any three of the four photons pass all the requirements of the regular 3-γ analysis. The same cuts were used to ensure that the cut efficiencies would be the same in each case. The only exception to this rule was a cut on θ_Λγ which, as will be shown, was effective at reducing the K⁰ component.

• The Monte Carlo data were analysed in the same fashion, except that extra steps needed to be taken to ensure that the neutrons were handled correctly. As these data have been analysed with the requirement that there be exactly four photons and no neutrons, the appropriate Monte Carlo sets were;

4 photons, 0 neutrons. Weight 1.0
3 photons, 1 neutron. Weight 0.8

• The data and the Monte Carlo were compared using the fitting routine. The best variable to test was found to be the energy of the second π⁰, the first being the 288 MeV π⁰ satisfying the 3-γ analysis. This energy spectrum is shown in figure 5.5 for the main contributors R2, R5 and R21, and also for the data. Examining these spectra reveals that the only channel with a significant number of events between 210 and 250 MeV is R21. Hence this was a sensitive test of the inflight contribution. Figure 5.6 shows a typical fit to the data. The shaded area shows the contribution to this fit from the R21 channel alone.

Figure 5.5: The π⁰ energy spectrum when a π⁰-π⁰ pair has been found. Only the energy of the second π⁰ is shown. The contribution from R21 is seen to be unique.

Figure 5.6: A fit to the π⁰ energy spectrum when a π⁰-π⁰ pair has been found. The open circles are from the data.
The shaded area is the expected contribution from inflight processes of type R21.

By placing cuts on the dE/dx and S4 scintillators, as described in section 5.4, it was possible to eliminate some of the inflight contribution. As the threshold for the inflight process of kaon charge exchange is about 90 MeV/c, those kaons most likely to interact inflight tended to have a higher momentum as they emerged from the degrader. Hence they tended to deposit less energy in the scintillators, and so could be partially identified.

The energy of a kaon exiting the thin degrader was slightly higher, on average, than that of one exiting the thick degrader. As such, the thin degrader was expected to generate slightly more inflight interactions. However, this effect was diminished by the application of the dE/dx cuts, the result of these kaons having deposited a smaller average energy in the dE/dx counters, thereby causing the cuts to be more severe for these events. This is illustrated in table 5.2, which shows the amount of inflight from each degrader for a number of different cuts on ENS4 (the energy in counter S4). The slightly higher rate with the thin degrader was only observable when no cuts were being applied. The value given in the table is parameter P3, which should have been 1.0 if the rough calculations based on the inflight cross-sections were correct. These values are clearly in reasonable agreement.

Table 5.2: Inflight contribution as a function of ENS4 and degrader thickness.

    ENS4 Cut      12.7 cm degrader    13.0 cm degrader
    No cut        1.15 ± 0.09         0.89 ± 0.06
    > 250 ch.     0.69 ± 0.07         0.63 ± 0.05
    > 300 ch.     0.35 ± 0.06         0.41 ± 0.04

As can be seen from the table, the ENS4 cuts are quite effective at eliminating the inflight contribution relative to the other channels. The P3 value used in the data analysis was taken to be the weighted average of the values determined for each degrader type. The weights were determined according to the fraction of each type present in the data.
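The weighted average described here can be sketched as follows, using the ENS4 > 300 ch. row of table 5.2. The 25%/75% split between the 12.7 cm and 13.0 cm degrader data is illustrative only (the actual weights came from the run fractions), and the errors are propagated assuming the two fit results are independent:

```python
import math

def fraction_weighted_average(values, errors, fractions):
    """Combine per-degrader P3 estimates, weighting each by the
    fraction of the data taken with that degrader (fractions sum to 1)."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    mean = sum(f * v for f, v in zip(fractions, values))
    # Independent errors propagated through the linear combination.
    err = math.sqrt(sum((f * e) ** 2 for f, e in zip(fractions, errors)))
    return mean, err

# P3 values from table 5.2 (ENS4 > 300 ch.): 0.35 +/- 0.06 and 0.41 +/- 0.04.
p3, p3_err = fraction_weighted_average([0.35, 0.41], [0.06, 0.04], [0.25, 0.75])
```

Because the two rows agree within errors, the combined value is insensitive to the exact split.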
Figure 5.7: The θ_Λγ spectrum from the Monte Carlo. Only the main channels of interest are shown.

The most efficient method of reducing the contribution from the charge exchange channel was through the application of a cut on the angular distribution of θ_Λγ. This distribution is displayed in figure 5.7, again for the four main processes of concern. These plots show that a condition θ_Λγ > 1.2 would virtually eliminate the charge exchange contribution. Such a condition also excluded a large fraction of the data, but it was felt that this was a reasonable compromise, since the K⁰ channel was the least well understood component of the analysis.

Figure 5.8 shows the result of the θ_Λγ cut on the Doppler corrected energy spectrum for the main channels. As can be clearly seen, the K⁰ inflights have been reduced to playing a very minor role. However, the signal has diminished significantly with respect to channel R2, an unfortunate side effect of the θ_Λγ cut.

Other Channels.

Since the π⁰ produced in coincidence with the Λ was equivalent for both R1 and R2, no cuts on it could help to distinguish between these two channels. Hence the only possible way to improve on the R1/R2 ratio in the signal region was through cuts on properties of the candidate weak radiative decay photon. Such an improvement could have been reached by applying another cut on θ_Λγ, of order θ_Λγ < 1.5. However, this condition was not employed, as it had several major disadvantages;

1. For any reasonable improvement in R1/R2, far too few events remained after the cut to make sensible fits during the analysis.

2. The narrow range of acceptance of θ_Λγ magnifies any discrepancies between the cut efficiencies for the Monte Carlo and data.

3. The signal R1 is reduced relative to R5 and R21.

Conversely, contributions from R5 can only occur through an accidental pairing of photons conspiring to mimic a π⁰.
Hence severe restrictions on the π⁰ energy spectrum are very useful in reducing the contribution from this channel, ergo the tight cut 270 < E_π⁰ < 295 MeV.

Figure 5.8: The results of the θ_Λγ cut on the Doppler corrected energy spectrum. The insets show the signal region with the R1 process overlaid.

Another feature of the R5 events was the large number of photons in the final state. Many of these may have overlapped with one another to appear as a single clump. In many cases the clump finding routine had determined that there was a significant overlapping of clumps. Hence these events were rejected from the analysis as being most likely to have come from higher photon multiplicity events. In addition, the minimum opening angle θ_min, between any pair of photons or a photon-neutron combination, was limited to the range θ_min > 46°. This was done as the R5 events were observed to be more likely to have such small opening angles. With the addition of these cuts a modest improvement was observed in the R1/R5 ratio.

With all the aforementioned cuts in place, the final Monte Carlo predictions for R2 and R5, superimposed on R1 in the insets, are shown in figure 5.9.

Figure 5.9: R2 and R5 Monte Carlo spectra with all cuts applied. Only these background channels make a significant contribution to the signal region. The insets show the signal region with the R1 process overlaid.

In summary, at this stage we felt that the data had been constrained to the maximum advantage without reducing the number of events to an unmanageable quantity. In particular;

• The K⁰ inflight channel had been virtually eliminated.

• The other inflight processes were reduced by scintillator cuts, and their amplitudes relative to the other channels were known.

• The contribution from channel R5 had been significantly reduced relative to R1.

5.5.4 Fitting the Data with the Monte Carlo.

With the inflight amplitudes fixed, and all necessary cuts decided upon, the data and the Monte Carlo were fit against one another. The only free parameters found
The only free parameters found 155  Doppler C o r r e c t e d Energy (MeV)  Doppler C o r r e c t e d Energy (MeV)  Figure 5.9: R2 and R5 Monte Carlo spectra with all cuts applied. Only these background channels make a significant contribution to the signal region. The insets show the signal region with the R l process overlaid.  156  to be required were the overall scale factor Po, arid the weak radiative decay branching ratio Pi. The resultant fit is shown in figure 5.10. In this figure, the data are represented by the histogram while the open circles are the Monte Carlo. In figure 5.10a the entire spectrum for the Doppler corrected weak radiative decay photon is shown. The signal region of this plot is expanded in figure 5.10b. The shaded area represents the predicted contribution from the signal channel alone. The peak contains 287 events.  ^ 2000 c m •u 1500  CM  5]  1000oo c Ul  F 500  o  ^~^r.-TTn  40  60  80  100  120  140  T I •  ° o n  160  180  200  Doppler C o r r e c t e d Energy (MeV) 120in  >  s.  Monte Carlo.  10080-  •—  60-  \  c > UJ  o o Q  Data  40200140  150  160  170  180  Doppler C o r r e c t e d Energy (MeV)  190  200  Figure 5.10: The final fit to the data. In a), the entire fitted range is shown, whereas in b), only the signal region is displayed. In both cases the Monte Carlo is given by the open circles. In b), the data are also displayed with error bars. The shaded region is the signal's contribution. The area of the signal peak is 287 events. 157  The result of this fit yields the branching ratio; BR  A  ~*  U  ~t  7  = (1-78 ± 0.24) x 1 ( T v  A -> anything T  '  (5.9) y 1  3  where the error listed is purely statistical. The error was estimated by the MINUIT program. It represents a one standard deviation confidence limit on the branching ratio. The fitted range was from 40 to 200 MeV, and resulted in a x P 2  e r  degree  of freedom of 1.25. 
Values below 1.4 are usually considered to be acceptable, which suggests that this fit was quite reasonable.

Another way of displaying the data is shown in figure 5.11. In this case the Monte Carlo prediction for all fitted backgrounds was subtracted from the data. What remained was the expected contribution from the weak radiative decay channel. This is to be compared with the Monte Carlo prediction shown as a histogram in the figure. While the presence of a peak in the data is obvious, this figure also demonstrates quite clearly why the error in the fit is so large.

To ensure that the fit was making sense, we used the derived parameter values to compare the Monte Carlo with the data for a number of other kinematic variables. While these were not expected to be sensitive to the weak radiative decay branching ratio, they were very useful in checking that the angular dependencies and energy distributions in the data were adequately simulated by the Monte Carlo. Some examples of these are shown in figure 5.12. In the first frame, the variable ERW1 was equivalent to the π⁰ goodness of fit test ERRF, only in this case the test was made between the weak radiative decay photon and a photon from the π⁰ decay. The dip near zero can be understood by recalling that the acceptable range in ERRF for a π⁰ was from -0.04 to +0.04. The majority of the events in ERW1 were well away from this range, as they should have been, and the data and the Monte Carlo agreed well. The other frames compared some angular distributions, θ_Λγ and θ₁₂, and the energy of the π⁰, E_π⁰. The fits in

Figure 5.11: The data with all fitted backgrounds subtracted. The data are shown as the open circles.
Figure 5.12: Monte Carlo fits to other variables. a) A π⁰ goodness of fit test for incorrect pairings of photons. b) The π⁰ opening angle. c) The π⁰ energy spectrum. d) The angle used to make the Doppler correction, θ_Λγ.

all cases were good.

5.5.5 Analysis of Systematic Errors.

By making a relative measurement, this experiment was free of most common sources of systematic error. The branching ratio was virtually independent of electronic and hardware inefficiencies, the number of stopped kaons, and the geometric acceptances. On the other hand, systematic errors could be introduced through differing cut efficiencies between the Monte Carlo and the data. The following items were considered to be the most likely sources of systematic errors.

1. Although the Monte Carlo reference histogram was a sum over all the processes listed in tables 4.1 and 4.2, R2, and to a lesser extent R5, were the dominant normalization channels. The uncertainty in the branching ratio for the process Λ → n + π⁰, common to both these channels, introduced a systematic error of ±1.4%. There was also an uncertainty in the relative branching ratios of the production channels K⁻p → Λ + π⁰ and K⁻p → Σ⁰ + π⁰. This introduced a further systematic uncertainty of ±0.7%.

2. The gain factor was determined by fits to the π⁰ energy spectrum, and was found to have a value of 0.998 ± 0.001. By calculating the branching ratio with the gain factor set to 0.998 ± 2σ, the maximum systematic errors attributable to the gain uncertainty were estimated to be ±2.2%.

3. The inflight contributions were examined in a similar way. Their amplitudes were forced to vary two standard deviations from their independently measured value. The resulting changes in the branching ratio suggested possible systematic effects at the percent level.

4. The target empty subtraction was a likely source of systematic errors.
As a background measurement for particles interacting in the vacuum assembly, the octagon counters, S4, and upstream elements, the target empty run was quite adequate. However the absence of hydrogen in the target affected the background rate in the elements immediately downstream of the target vessel. In this region, particles would have had a higher mean energy, and would have existed in greater numbers than when the target was full. Since we did not know the origin of the main background sources, it was difficult to quantify the amount of variation possible in the target empty normalization. Rough estimates indicated that this normalization could not have been off by more than ±20%. This led to an estimate of the systematic error for the branching ratio of ±2.9%.

5. The pile-up cuts also introduced systematic errors, and again they were not easy to estimate. Figure 5.13 shows the branching ratio plotted against the pile-up cut used. In the region from -80 < APU < -40 MeV, the gradual slope shows that the pile-up cuts were becoming increasingly effective with the tighter cuts. The sharp downward turn near APU = -40 MeV represents the point at which valid data were being cut too harshly. The drop in branching ratio was the result of an energy dependence in the variable APU. At this point the most energetic photons begin to be cut at a higher rate, thus suppressing the signal region relative to the normalization. If the extrapolation of the linear portion of the curve can be trusted, then it would suggest that the branching ratio should be slightly lower than 1.78. Our estimates for reasonable limits of the branching ratio in the context of this figure suggest systematic errors of about ±7.3%.

Figure 5.13: The branching ratio as a function of the pile-up cuts. The solid line is a smooth curve through the data, whereas the dashed line is an extrapolation of the linear region.

6.
It was always possible that other backgrounds existed that we hadn't considered. The absence of events below 40 MeV suggested that random background events were very rare. To confirm this, we tried including a linear background term in the fitting routine. The magnitude of this new term was always consistent with zero, although by varying the fitted range, the branching ratio could be made to vary by ±2.0%. These were taken to be the systematic errors from this source.

When the systematic errors from all sources were added in quadrature, the branching ratio, with error, was given by:

    BR(Λ → n + γ)/BR(Λ → anything) = (1.78 ± 0.24 (stat.) +0.10/-0.16 (sys.)) × 10⁻³    (5.10)

In this final result, the first error was statistical, and the second systematic.

5.6 Analysis of the 4-γ Data.

5.6.1 Event Reconstruction.

To reconstruct those events in which 4 photons were found, it was assumed that a Λ had been created through the decay process Σ⁰ → Λ + γ. In turn, it was assumed that the Σ⁰ had been produced at rest via the reaction K⁻p → Σ⁰ + π⁰. The kinematic variables for this process, known as R4, are illustrated in figure 5.14.

Once again it was most useful to make a transformation to the rest frame of the Λ. In this frame the energy of the Λ decay photon should appear as a monoenergetic peak. However, this peak was significantly broadened as the effective energy and position resolutions were even poorer in the 4-γ analysis than was observed in the 3-γ analysis. This was due to the increased uncertainty in the Λ direction and energy.

Figure 5.14: Kinematic variables describing the 4-γ process.

The reconstruction of these events was similar to that of the 3-γ analysis, except that in this case, it was the Σ⁰ direction which was determined from the π⁰ decay photons of momenta P₁ and P₂. Then by measuring the energy and direction of the photon emitted coincident with the decay of the Σ⁰, we were able to estimate the Λ direction and energy.
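The Doppler correction used throughout is the standard Lorentz transformation of a measured photon energy into the Λ rest frame, E* = γ_Λ (1 - β_Λ cos θ_Λγ) E_lab, with β_Λ and γ_Λ obtained from the reconstructed Λ energy and momentum. For Λ → n + γ, two-body kinematics put the rest-frame photon energy at (m_Λ² - m_n²)/2m_Λ ≈ 162 MeV, which is why the signal region quoted in this chapter is centred near that value. A minimal sketch of the transformation (function and variable names are ours, not the analysis code's):

```python
import math

M_LAMBDA = 1115.68    # Lambda mass (MeV/c^2)
M_NEUTRON = 939.57    # neutron mass (MeV/c^2)

def doppler_corrected_energy(e_gamma_lab, e_lambda, p_lambda, cos_theta):
    """Boost a photon's lab energy into the Lambda rest frame.

    e_gamma_lab : photon energy in the lab (MeV)
    e_lambda    : total Lambda energy in the lab (MeV)
    p_lambda    : magnitude of the Lambda momentum (MeV/c)
    cos_theta   : cosine of the lab angle between Lambda and photon directions
    """
    beta = p_lambda / e_lambda                    # Lambda velocity in the lab
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)    # Lorentz factor
    return gamma * e_gamma_lab * (1.0 - beta * cos_theta)

# Rest-frame line energy for Lambda -> n + gamma (two-body decay):
e_star = (M_LAMBDA**2 - M_NEUTRON**2) / (2.0 * M_LAMBDA)   # roughly 162 MeV
```

For a Λ at rest the function returns the lab energy unchanged; for a moving Λ the correction narrows the broad lab spectrum back toward the monoenergetic line, up to the angular and energy resolution effects discussed in the text.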
5.6.2 The 4-γ Analysis and Data Reduction.

As a result of being produced indirectly, there was a greater uncertainty in the direction and energy of the Λ. This resulted in spectra which were not as sharply defined as in the 3-γ analysis. A direct consequence of this was that most cuts were less effective, and had to be set much wider. Hence it proved to be much more difficult to suppress the backgrounds relative to the signal without compromising the data.

In this analysis, the signal channel which we wanted to isolate was R4. As R5 had the same basic topology as R4, it was one of the principal components of the background. There was also a large contribution from channel R2. This channel had 4 photons whose total energy was virtually identical to the sum of the 4 photons from channel R4. Therefore there were many possible combinations of the R2 photons which could mimic the signal channel. These were very difficult to suppress relative to R4. The final component making a sizeable contribution to the signal region was the inflight charge exchange process, R21.

As the R5 events had 5 photons, plus a neutron, the average photon energy was much lower than was the case during the analysis of the 3-γ events. This introduced a new systematic error as we were suddenly very sensitive to the effects of neutrons. As luck would have it, the signal region was particularly vulnerable to systematic differences between the Monte Carlo and the data on this issue. We noticed that the branching ratio could be made to vary up or down by factors of two by making relatively minor alterations to those cuts which were sensitive to the presence of neutrons. On the other hand the overall normalization factor was only varying by about 2% (in the most extreme case) with the application of these cuts.
Hence the total number of events affected by the differing efficiencies of these cuts was relatively small; it just happened that it was the signal region that was being influenced the most. In order to avoid this source of systematic error, and indeed to even extract a branching ratio, we found it necessary to completely eliminate the neutrons causing these effects. This was done by placing a 62 MeV minimum energy threshold on all clumps in the Crystal Box. This was safely above the maximum neutron kinetic energy and it rejected those events with misidentified neutron clumps. This had the negative effect of increasing the statistical error since the total number of valid events was reduced.

With the neutron contamination effectively removed, R2 must have been composed of a π⁰-π⁰ pair. The contribution from R2 could be significantly suppressed by requiring that no combination of photons, other than γ₁ and γ₂, resembled a π⁰. To accomplish this we applied the π⁰ goodness of fit test to all other photon combinations and applied cuts to exclude events which appeared to have a misidentified π⁰. Figure 5.15 shows one such possible comparison. In this case we have tested the weak radiative decay photon against one of the supposed π⁰ photons. The clear peak near zero for channel R2 shows that these are in fact largely just misassociated photons. It also shows that the cut on the R2 spectrum (indicated by arrows on the figure) greatly reduced the number of signal events as well. The cuts shown were applied as they were reasonably effective. This was not true of all photon pair combinations, and in an attempt to salvage a few events, no other such cuts were employed.

With basic cuts on the Λ and π⁰ energies, plus the π⁰ goodness of fit tests mentioned above, the main channels, as predicted by the Monte Carlo, contribute the amounts shown in figure 5.16.
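The ERRF goodness-of-fit variable itself is defined elsewhere in the thesis, but the veto logic described above can be illustrated with a simpler stand-in: flag the event if any photon pair, other than the tagged π⁰ pair, has an invariant mass m = sqrt(2 E₁ E₂ (1 - cos θ₁₂)) near the π⁰ mass. This is only a sketch of the combinatoric loop with a hypothetical mass-window test substituted for ERRF; all names and the window width are our own.

```python
import itertools
import math

M_PI0 = 134.98   # pi0 mass (MeV/c^2)

def pair_mass(e1, e2, cos_theta12):
    """Invariant mass of two photons from energies and opening angle."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - cos_theta12))

def cos_angle(u, v):
    """Cosine of the angle between two direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def has_misassociated_pi0(photons, tagged, window=15.0):
    """True if any untagged photon pair looks like a pi0.

    photons : list of (energy_MeV, direction_vector) tuples
    tagged  : index pair of the photons already assigned to the real pi0
    window  : half-width of the (hypothetical) pi0 mass window, MeV
    """
    for i, j in itertools.combinations(range(len(photons)), 2):
        if {i, j} == set(tagged):
            continue   # skip the pair identified as the genuine pi0
        (e1, d1), (e2, d2) = photons[i], photons[j]
        if abs(pair_mass(e1, e2, cos_angle(d1, d2)) - M_PI0) < window:
            return True
    return False

# Toy event: photons 0 and 1 are a back-to-back pi0 (67.49 MeV each);
# photons 2 and 3 are unrelated clumps at right angles.
event = [(67.49, (0, 0, 1)), (67.49, (0, 0, -1)),
         (100.0, (1, 0, 0)), (60.0, (0, 1, 0))]
```

If the tagged pair is (2, 3), the genuine π⁰ pair (0, 1) is caught by the veto; if the tagged pair is (0, 1), no remaining combination falls in the window and the event survives.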
Again it is the energy of the candidate weak radiative decay photon which has been plotted, after being Doppler corrected to the Λ rest frame. In the R2 and R21 frames, the signal R4 has been superimposed as a curve on each histogram. This has also been done for R5 in the inset, which is an expanded view of the signal region. The R4 normalization has been made assuming a branching ratio of 1.5 × 10⁻³.

It is clear from this figure that there was a significant amount of background to be removed before we would be able to fit these data. In particular, the inflight contribution R21 had to be reduced. In figure 5.17 we show the θ_Λγ spectrum with the above cuts, and with the additional condition that the Doppler corrected photon had an energy in the signal region from 140 to 180 MeV. From this figure it is clear that a cut on θ_Λγ could be used to suppress either channel R21 (e.g. θ_Λγ > 1.0),

Figure 5.15: The Monte Carlo predicted π⁰ goodness of fit test for misassociated photon pairs. A valid π⁰ was expected to have a test value near zero. This test was made between the weak radiative decay photon and one of the original π⁰ photons. It shows that a large fraction of the R2 events (upper frame) are identifiable as misassociated π⁰s. In the lower frame the equivalent spectrum is shown for the signal channel R4. The arrows indicate the location of the cuts applied to this spectrum.

Figure 5.16: The Doppler corrected energy spectra after basic cuts. These are shown for the Monte Carlo generated data, and only for the most significant channels. The solid curve superimposed on the other spectra is the expected signal R4, appropriately normalized.

or, channel R2 (e.g. θ_Λγ < 1.0).
Both methods were tried, but we found that it was best to use this cut to eliminate the inflight interactions as there weren't any other suitable methods. Unfortunately this had the obvious effect of enhancing the R2 channel relative to the signal. This was considered to be the lesser of two evils, particularly as applying the θ_Λγ cut to eliminate channel R2 would have resulted in an enhanced R21/R4 ratio, which was even less desirable.

Figure 5.17: The θ_Λγ spectrum for photons in the signal region. This Monte Carlo data is shown for the four principal channels under consideration: R4, R2, R5 and R21.

Additional cuts required to suppress R2 and R5 in the signal region were not only many in number, but also tended to cut the signal spectra at inappropriate locations. The latter resulted in systematic effects becoming much more important. Hence we had the option of significantly reducing the backgrounds (and consequently obtaining results with very large systematic errors and generally poor fits), or having a modest background reduction (and hence an increased statistical error). We utilized the latter method as the fits were generally better, the systematic effects were smaller, and we felt that this route allowed us to interpret the results with more confidence. In our final spectrum the most reasonable set of cuts yielded an R2/R4 ratio of about 1.5. With these cuts the R5/R4 ratio was about 2.0.

5.6.3 Fitting the Data with the Monte Carlo in the 4-γ Analysis.

The best fit to the data from the 4-γ analysis is shown in figure 5.18. The only free parameters of the fit were P₀, the overall normalization, and P₁, the Λ weak radiative decay branching ratio. It is obvious in this figure that the background was a very significant portion of the spectrum in the signal region, and that the overall fit appears to be worse than was the case in the 3-γ analysis.
Because the fit was less certain, and the background subtraction was large, the overall statistical accuracy determined by the fit was quite poor. The branching ratio was determined to be:

    BR(Λ → n + γ)/BR(Λ → anything) = (1.68 +0.47/-0.48) × 10⁻³    (5.11)

where only the statistical errors have been considered at this point. The χ² per degree of freedom for this fit was found to be 1.17, indicating a reasonable fit, although systematic effects can be seen in those regions of the fit which contribute much more to the total χ². The fitted region was between 80 and 200 MeV, and the number of events in the signal region attributable to the Λ weak radiative decay was 252.

In figure 5.19 we show a representative selection of comparisons between the data and the Monte Carlo using the fitted parameters. These give us some confidence that the fits are making sense, as they are for angular and energy distributions at various stages of the reconstruction. In the first frame we show another π⁰ goodness of fit test (ERW1), between the weak radiative decay photon and the other π⁰ decay photon. In the other frames we show E_wrd, the energy of the weak radiative decay photon before being Doppler corrected; E_π⁰, the energy spectrum of the π⁰ used to tag the event; and θ_Λγ, the angle used to make the transformation to the rest frame of the Λ.

Figure 5.18: The final fit to the data for the 4-γ analysis. In a) the entire fitted spectrum is shown. The data are represented by the histogram, and the Monte Carlo by the open circles. In b) we show an expanded view of the signal region and we have added error bars to the data histogram bins. The shaded area represents the fitted contribution from the signal channel alone.

Figure 5.19: Monte Carlo predictions of other spectra in the 4-γ analysis using the fitted parameters P₀ and P₁.
a) ERW1, a π⁰ goodness of fit test for incorrect pairings of photons, taken between the weak radiative decay photon and a π⁰ photon; b) E_wrd, the weak radiative decay photon energy in the lab frame; c) E_π⁰, the π⁰ energy spectrum; and d) θ_Λγ, the angle used to make the Doppler correction.

5.6.4 Estimation of Systematic Errors.

Unlike the 3-γ analysis, the primary source of systematic errors came from having to impose many more cuts on the data. Unfortunately there was no way of reducing the number of cuts beyond this minimal set. It was very hard to quantify the systematic errors from each cut as they were highly correlated with one another. Hence the best estimates were determined by performing dozens of different fits while varying these cut parameters. We took the systematic error to be represented by the range of different branching ratios obtainable for all reasonable fits. We found most values in the range 1.36 × 10⁻³ to 2.14 × 10⁻³. Hence we took the systematic errors from these sources to be +0.46/-0.32.

Another important source of systematic error in this fit came from the uncertainty in the relative branching ratios for K⁻p → Λ + π⁰ and K⁻p → Σ⁰ + π⁰. This relative branching ratio is only known to about 9%. While in the 3-γ analysis the final fit was essentially between just R1 and R2, in this fit both R5 and R2 are present in comparable amounts. Hence we allowed this relative branching ratio to vary by ±9% and obtained a range of weak radiative decay branching ratios from (1.45 ... 1.90) × 10⁻³. We took this to be a measure of the systematic error, contributing a factor +0.22/-0.23.

As with the 3-γ analysis, we investigated other possible sources. We found these to have a much smaller effect relative to the above. In particular we found the following contributions, calculated in the same manner as was done for the 3-γ analysis.

• Target empty subtraction: ±0.01 × 10⁻³.

• Inflight uncertainty: ±0.04 × 10⁻³.
• Two standard deviation gain changes: ±0.02 × 10⁻³.

• Inclusion of linear background: ±0.01 × 10⁻³.

• Contribution from pile-up cuts: ±0.16 × 10⁻³.

The systematic errors were combined by adding in quadrature, and the final result determined from the 4-γ analysis was found to be:

    BR(Λ → n + γ)/BR(Λ → anything) = (1.68 +0.47/-0.48 (stat.) +0.54/-0.43 (sys.)) × 10⁻³    (5.12)

Chapter 6

Discussion and Conclusions.

We have measured the Λ weak radiative decay branching ratio by making separate analyses of two different subsets of the data. We obtained consistent results in each of these analyses. The branching ratios so obtained were:

    BR(Λ → n + γ)/BR(Λ → anything) = (1.78 ± 0.24 +0.10/-0.16) × 10⁻³    (3-γ analysis), and

    BR(Λ → n + γ)/BR(Λ → anything) = (1.68 +0.47/-0.48 +0.54/-0.43) × 10⁻³    (4-γ analysis).

The second value had much larger errors associated with it. This was the result of having a more complicated event topology, and hence, a poorer effective resolution and the necessity of having many constraints on the data.

To combine these results, we found the total error for each result by adding the statistical and systematic errors in quadrature. The final branching ratio was determined by making an error weighted average of the two results. The positive and negative errors were averaged to determine the weights, but were treated independently when calculating the error. This yielded the final result:

    BR(Λ → n + γ)/BR(Λ → anything) = (1.77 +0.22/-0.25) × 10⁻³

Although we had several hundred events in our sample, our errors were quite large. This was due primarily to the large background subtractions, and in the 4-γ analysis there were considerable systematic effects as well. Our value of 1.77 × 10⁻³ is higher than the only other measurement (see table 2.1) which exists for this reaction. The previous result was (1.02 ± 0.33) × 10⁻³ [4]. The two results differ by about 1.3 standard deviations.
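The combination procedure described above can be sketched in a few lines: each analysis gets a total error by adding its statistical and systematic errors in quadrature, the weight of each result comes from the average of its positive and negative total errors, and the two error signs are then propagated independently. The following is only our reading of that recipe, with made-up numbers in the usage line, not the thesis code.

```python
import math

def quadrature(*errors):
    """Combine independent error contributions in quadrature."""
    return math.sqrt(sum(e * e for e in errors))

def weighted_average(results):
    """Error-weighted average of (value, err_plus, err_minus) tuples.

    Weights use the average of the +/- total errors; the positive and
    negative errors of the mean are then computed independently.
    """
    weights = [1.0 / (((ep + em) / 2.0) ** 2) for _, ep, em in results]
    wsum = sum(weights)
    mean = sum(w * v for w, (v, _, _) in zip(weights, results)) / wsum
    err_plus = 1.0 / math.sqrt(sum(1.0 / ep**2 for _, ep, _ in results))
    err_minus = 1.0 / math.sqrt(sum(1.0 / em**2 for _, _, em in results))
    return mean, err_plus, err_minus

# Illustrative only: two hypothetical results with equal symmetric errors,
# where the mean must be the midpoint and the error shrinks by sqrt(2).
mean, ep, em = weighted_average([(1.0, 0.2, 0.2), (2.0, 0.2, 0.2)])
```

With equal errors the weighted mean reduces to the ordinary mean and each combined error to sigma/sqrt(2), a useful sanity check on the implementation.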
This is not considered to be a strong disagreement, but is nevertheless a little disconcerting.

In their measurement, Biagi et al. had the difficult task of estimating the amount of background in the small region of parameter space where they expected to find the signal. In this respect, the lack of a Monte Carlo simulation detailing the effects of shower developments in the LAD was a particular concern. This was important because many candidate events were rejected on the basis of their being associated with some spurious low energy clump located elsewhere in the detector. A Monte Carlo simulation of the LAD might have led to a different interpretation of these events. In addition, a careful examination of the data they presented showed that there were other regions of parameter space that did not agree that well with the Monte Carlo predictions. This leads to an additional uncertainty in the interpretation of the events in the signal region. In light of these uncertainties in the Biagi et al. analysis, and in consideration of their small sample size, we feel that our result is a more reliable estimate of the actual branching ratio.

Our result does not compare favourably with the most recent calculations by Zenczykowski [53], who has predicted a branching ratio of 3.21 × 10⁻³ (see section 2.5). To make a fair comparison really requires some knowledge of the estimated error in his theoretical predictions. In part, his uncertainties would arise from the uncertainties in the non-leptonic decay branching ratios, the estimated parameter a, and the F/D and f/d ratios. Since the non-leptonic decays of the Λ are known to a few percent, it would appear that the largest uncertainties come from the F/D and f/d ratios.

Large variations were observed by Brown and Paschos [52] in their calculations of hyperon weak radiative decay amplitudes when they allowed the F/D values to vary.
Also it is known that different F/D ratios are obtained when making fits to either the s- or p-wave amplitudes of the non-leptonic decays. It is generally assumed that the p-wave amplitudes give correct results, as they are in closer agreement with the quark model predictions. Although Zenczykowski only uses the F/D ratios when making a pole-model calculation of the p-wave amplitude, he stresses that predictions made using his technique will only be reliable when the issues of the F/D value and the relative contributions of the s- and p-wave amplitudes are properly understood.

The PCAC approach by Brown and Paschos also relied on the pole-model and a relationship between the non-leptonic and radiative decays. They included terms which allowed the photon to couple to the hyperon magnetic moments, and in particular they included a non-zero value for the Σ⁰-Λ transition moment. It is interesting to note that they found that this term was essential to suppress the parity conserving amplitudes to experimentally observed levels, particularly for the Λ decay. As Zenczykowski used the pole model to calculate these contributions, it may be that the omission of such terms was responsible for the rather high value he obtained for the Λ → n + γ process. This is especially true if one considers that the remainder of his predictions are in quite reasonable agreement. It is not clear from his manuscript whether such couplings were included.

Our results clearly rule out, as the dominant mechanism, the single quark transitions considered by Gilman and Wise [40]. This is consistent with every other experimental measurement. Our result is also significantly above the lower limit of 0.85 × 10⁻³ which was predicted by Farrar in her unitarity calculation [36]. Additionally our results are in agreement with the model of Kamal and Verma. They considered the one- and two-quark transitions.
They have reasonable agreement with much of the existing data, although we found in our recalculation of their results (using the newly obtained Ξ⁻ → Σ⁻ + γ branching ratio) that the predictions were quite sensitive to the input values.

More substantive tests of the various theoretical calculations, and of the importance of the various terms they include, will require more quality data for all hyperon channels. However, for more accurate data to be a useful barometer of the theories, theorists must endeavor to estimate the amount of uncertainty in their predictions.

6.1 A Critique of E811-II.

From the experience we gained during the engineering run we were able to make a substantial number of improvements to the quality of the data obtained during the production run. Many of these have already been discussed. Of particular importance was the need for more time to be spent on target empty background runs, independent gain sets, and the ability to monitor the photomultiplier gains between pion runs with a reliable flasher system. As a result of our preliminary analysis of these data, we lowered the beam momentum during the production run to reduce the possibility of inflight interactions, and we improved the system used for monitoring the pile-up. Also a number of other changes were made to improve the temperature stability and general operating conditions.

However there were many other aspects of the experiment which were found wanting, and for which we had no option but to proceed with the existing hardware and the available resources. We found that in order to get a sufficient kaon stopping rate we had to compromise on many key points. We found that it was often necessary to open the mass slits wider than was ideal to increase the kaon flux. However this increased the fraction of pion contaminants in the beam, leading to more pile-up and a higher susceptibility to backgrounds.
We were also operating at a higher than optimum momentum, again to boost the kaon flux. As a consequence of this we were forced to use a thick degrader and hence suffered from energy straggling and a large momentum range for kaons incident on the target. In addition, multiple scattering in the degrader resulted in a large beam divergence. To accommodate this broad beam we had to use a long, wide, hydrogen target. This led to large uncertainties in the vertex location, compromising the reconstruction ability. It also meant an increased number of π⁻ secondaries were stopping in the target, contributing to the background.

A very substantial improvement in this experiment could have been attained through the use of a different detector, particularly a device like the Crystal Barrel now in use at LEAR. This detector is similar to the Crystal Box, except it uses CsI rather than NaI, and has almost 4π solid angle coverage. It also sits in a large magnet and has drift chambers surrounding the target to track charged particles through the magnetic field. This would result in a significantly higher event rate. For instance, the probability of detecting 3 uncorrelated photons in a 2π detector like the Crystal Box is only about 0.125, while it is nearly 1.0 for the Crystal Barrel. An equally important advantage is that the contribution from the background channels would be greatly diminished because one would be detecting all the particles in the final state, and so there would be little ambiguity about which type of event it was. Other advantages of the Crystal Barrel were the improved coverage against charged particles, a higher granularity, and a depth of 16.1 radiation lengths compared to only 12 in the Crystal Box. In our Monte Carlo studies we found increasingly significant energy losses out the ends of the crystals for photon energies above 150 MeV.
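The acceptance figure quoted here follows from simple solid-angle counting: if each photon independently has probability f of landing in the instrumented fraction of 4π, then n uncorrelated photons are all detected with probability fⁿ. For a 2π device like the Crystal Box, f ≈ 0.5, so three photons give 0.5³ = 0.125, the number quoted above. A one-line check (this neglects edge effects and any angular correlations, as the text's estimate does):

```python
def n_photon_acceptance(coverage_fraction, n_photons):
    """Probability that n uncorrelated photons all land in the covered solid angle."""
    return coverage_fraction ** n_photons

# Crystal Box: roughly 2pi of 4pi covered, three uncorrelated photons.
p3 = n_photon_acceptance(0.5, 3)   # -> 0.125
```

The same counting shows why a near-4π detector like the Crystal Barrel approaches unit acceptance for any photon multiplicity.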
There are many such photons in these reactions, hence one might also expect a better energy resolution at high energies.

We feel that the predicted fluxes, and the improved π/K ratios being planned by the designers of the KAON factory, suggest that a major improvement in this measurement could be reached by conducting it at such a facility. In particular, if a detector similar to the Crystal Barrel was used, and an improved target design was adopted, a very substantial improvement could be made to the Λ weak radiative decay measurement. Also, without any change of apparatus, the Σ⁺ could be measured with a much more efficient method of tagging the pion. However this would require a clean beam, or the rates from pions in the halo would probably be too high for the chambers to handle.

6.2 Conclusions.

We have made the most precise measurement of the Λ → n + γ branching ratio to date. Besides leading to smaller statistical errors, our large sample size allowed us to study the systematic effects in detail. Our technique was very different from the Biagi et al. method, and we feel it resulted in a more accurate reflection of the true branching ratio. In particular, in the Biagi et al. experiment, the estimation of which events should constitute the background was an extremely delicate and tricky calculation. This was compounded by the lack of a full Monte Carlo simulation, and in view of the discrepancies that existed elsewhere in regions outside their canonical signal region, we feel that their branching ratio estimate was less reliable.

With the exception of ruling out the single quark transitions as the dominant processes, we were unable to make any clear selection between the various theoretical models. While our branching ratio does agree with some theoretical estimates, most notably those of Kamal and Verma, none of the models does a good job of explaining all the measured data. However, this measurement does suggest that the model by Zenczykowski, with no free parameters, is inadequate in its present form.

Finally, we wish to reiterate that, while this result contributes significantly towards furthering our understanding of hyperon weak radiative decays, more dedicated experiments like this one are needed. These improved experiments will not
However, this measurement does suggest that the model by Zenczykowski with no free parameters is inadequate in its present form. Finally, we wish to reiterate that, while this result contributes significantly towards furthering our understanding of hyperon weak radiative decays, more dedicated experiments like this one are needed. These improved experiments will not 181  only test the current theories, but hopefully, they will also stimulate improved calculations. With the current experimental status, there has been little need for more detailed calculations.  6.3  Closing Remarks.  We have made a successful measurement of the A weak radiative decay branching ratio. When both this analysis, and the independent analysis of the production run data are complete, we shall make a detailed comparison between the two results. We will study the different analysis techniques that each group has employed, our Monte Carlo routines, and the cuts that we used. When we are fully satisfied, the results will be combined and presented for publication. It also became apparent during the analysis that we may be able to measure the ratio (K~p —»• A + n )/(K~p 0  — > S ° + 7r°) better than it is presently known.  That possibility is now being investigated.  182  Bibliography [1] G.D. Rochester and C C . Butler. Nature, 160:855, 1947. [2] R.E. Taylor. Proc. Int. Symp. Elec. and Photon Interactions at High Energies, SLAC, 1967. [3] J.I. Friedman and H.W. Kendall. Ann. Rev. Nucl. Science, 22:203, 1972. [4] S.F. Biagi, M . Bourquin, R.M. Brown, H.J. Burckhart, Ch. Dore, P. Extermann, M. Gailloud, C.N.P. Gee, W.M. Gibson, R.J. Gray, P. Jacot-Guillarmod, P.W. Jeffreys, W.S. Louis, Th. Modis, P. Muhlemann, R.C. Owen, J. Perrier, K.J.S. Ragan, Ph. Rosselet, B.J. Saunders, P. Schirato, H.W. Siebert, V . J . Smith, K.-P. Streit, J.J. Thresher, R. Weill, A . T . Wood, and C. Yanagisawa. Z. Phy3. C - Particles and Fields., 30:201, 1986. [5] D. A. Whitehouse. 
Radiative Kaon Capture at Rest in Hydrogen. PhD thesis, Boston University, 1989. Unpublished.

[6] D.A. Whitehouse, E.C. Booth, W.J. Fickinger, K.P. Gall, M.D. Hasinoff, N.P. Hessey, D. Horvath, J. Lowe, E.K. McIntyre, D.F. Measday, J.P. Miller, A.J. Noble, B.L. Roberts, D.K. Robinson, M. Sakitt, and M. Salomon. Phys. Rev. Lett., 63:1352, 1989.

[7] K.P. Gall. Radiative Kaon Capture at Rest in Deuterium. PhD thesis, Boston University, 1989. Unpublished.

[8] K.P. Gall, E.C. Booth, W.J. Fickinger, M.D. Hasinoff, N.P. Hessey, D. Horvath, J. Lowe, E.K. McIntyre, D.F. Measday, J.P. Miller, A.J. Noble, B.L. Roberts, D.K. Robinson, M. Sakitt, M. Salomon, and D.A. Whitehouse. Phys. Rev. Rapid Comm., C42:R475, 1990.

[9] N.P. Hessey. A Study of Some Hyperon Radiative Decays. PhD thesis, University of Birmingham, 1988. Unpublished.

[10] N.P. Hessey, E.C. Booth, W.J. Fickinger, K.P. Gall, M.D. Hasinoff, D. Horvath, J. Lowe, E.K. McIntyre, D.F. Measday, J.P. Miller, A.J. Noble, B.L. Roberts, D.K. Robinson, M. Sakitt, M. Salomon, and D.A. Whitehouse. Z. Phys. C - Particles and Fields, 42:175, 1989.

[11] N. Isgur and G. Karl. Phys. Rev., D18:4187, 1978.

[12] A.J.G. Hey and R.L. Kelly. Phys. Rep., 96:72, 1983.

[13] A. de Rujula, H. Georgi, and S. Glashow. Phys. Rev., D12:147, 1975.

[14] G. Breit. Phys. Rev., 36:383, 1930.

[15] A.W. Thomas. Adv. Nucl. Phys., 13:1, 1984.

[16] P.N. Bogolioubov. Ann. Inst. Henri Poincare, 8:163, 1967.

[17] A. Chodos, R.L. Jaffe, C.B. Thorn, and V. Weisskopf. Phys. Rev., D9:3471, 1974.

[18] Particle Data Group. Phys. Lett., 204B:1, 1988.

[19] Y. Hara. Phys. Rev. Lett., 12:378, 1964.

[20] M. Ademollo and R. Gatto. Phys. Lett., 13:264, 1964.

[21] E.D. Commins and P.H. Bucksbaum. Weak Interactions of Leptons and Quarks. Cambridge University Press, Cambridge, 1983.

[22] A. Le Yaouanc, O. Pene, J.-C. Raynal, and L. Oliver. Nucl. Phys., B149:321, 1979.

[23] M. Gronau. Phys. Rev., D5:118, 1972.

[24] Particle Data Group. Phys. Lett., 111B:286, 1982.
[25] M.D. Scadron. Phys. Lett., 95B:123, 1980.

[26] M.A. Shifman, A.I. Vainshtein, and V.I. Zakharov. Nucl. Phys., B120:316, 1977.

[27] S.G. Kamath. Nucl. Phys., B198:61, 1981.

[28] J.O. Eeg. Z. Phys. C - Particles and Fields, 21:253, 1984.

[29] P. Langacker and B. Sathiapalan. Phys. Lett., 144B:395, 1984.

[30] J.C. Pati and C.H. Woo. Phys. Rev., D3:2920, 1971.

[31] G. Altarelli and L. Maiani. Phys. Lett., 52B:351, 1974.

[32] M.K. Gaillard and B.W. Lee. Phys. Rev. Lett., 33:108, 1974.

[33] J.F. Donoghue and E. Golowich. Phys. Lett., B69:437, 1977.

[34] F.E. Close and H.R. Rubinstein. Nucl. Phys., B173:477, 1980.

[35] F. Halzen and A.D. Martin. Quarks and Leptons. John Wiley & Sons, New York, 1984.

[36] G.R. Farrar. Phys. Rev., D4:212, 1971.

[37] J.D. Bjorken and S.D. Drell. Relativistic Quantum Fields. McGraw-Hill, New York, 1965.

[38] M.B. Gavela, A. Le Yaouanc, O. Pene, J.-C. Raynal, L. Oliver, and T.N. Pham. Phys. Lett., 101B:417, 1981.

[39] L. Copley, G. Karl, and E. Obryk. Nucl. Phys., B13:303, 1969.

[40] F.J. Gilman and M.B. Wise. Phys. Rev., D19:976, 1979.

[41] Y.I. Kogan and M.A. Shifman. Sov. J. Nucl. Phys., 38:628, 1983.

[42] L. Bergström and P. Singer. Phys. Lett., 169B:297, 1986.

[43] C. Goldman and C.O. Escobar. Phys. Rev., D40:106, 1989.

[44] L. Chong-Huah. Phys. Rev., D26:199, 1982.

[45] M.K. Gaillard, X.Q. Li, and S. Rudaz. Phys. Lett., 158B:158, 1985.

[46] I. Picek. Phys. Rev., D21:3169, 1980.

[47] A.N. Kamal and R.C. Verma. Phys. Rev., D26:190, 1982.

[48] R.C. Verma and A. Sharma. Phys. Rev., D38:1443, 1988.

[49] R.E. Marshak, Riazuddin, and C.P. Ryan. Theory of Weak Interactions in Particle Physics. John Wiley & Sons, New York, 1969.

[50] M.D. Scadron and M. Visinescu. Phys. Rev., D28:1117, 1983.

[51] M.D. Scadron and L.R. Thebaud. Phys. Rev., D8:2190, 1973.

[52] R.W. Brown and E.A. Paschos. Nucl. Phys., B319:623, 1989.

[53] P. Zenczykowski. Phys. Rev., D40:2290, 1989.

[54] J. Bernstein. Elementary Particles and Their Currents. W.H.
Freeman and Company, San Francisco, 1968.

[55] M. Kobayashi, J. Haba, T. Homma, H. Kawai, K. Miyake, T.S. Nakamura, N. Sasao, and Y. Sugimoto. Phys. Rev. Lett., 59:868, 1987.

[56] C. James, K. Heller, P. Border, J. Dworkin, O.E. Overseth, R. Rameika, G. Valenti, R. Handler, B. Lundberg, L. Pondrom, M. Sheaff, C. Wilkinson, A. Beretvas, P. Cushman, T. Devlin, K.B. Luk, G.B. Thomson, and R. Whitman. Phys. Rev. Lett., 64:843, 1990.

[57] S. Teige, A. Beretvas, A. Caracappa, T. Devlin, H.T. Diehl, K. Krueger, G.B. Thomson, P. Border, P.M. Ho, M.J. Longo, J. Duryea, N. Grossman, K. Heller, M. Shupe, and K. Thorne. Phys. Rev. Lett., 63:2717, 1989.

[58] S.F. Biagi, M. Bourquin, R.M. Brown, H.J. Burckhart, Ch. Dore, P. Extermann, M. Gailloud, C.N.P. Gee, W.M. Gibson, R.J. Gray, P. Jacot-Guillarmod, P.W. Jeffreys, W.S. Louis, Th. Modis, P. Muhlemann, R.C. Owen, J. Perrier, K.J.S. Ragan, Ph. Rosselet, B.J. Saunders, P. Schirato, H.W. Siebert, V.J. Smith, K.-P. Streit, J.J. Thresher, R. Weill, A.T. Wood, and C. Yanagisawa. Z. Phys. C - Particles and Fields, 28:495, 1985.

[59] S.F. Biagi, M. Bourquin, R.M. Brown, H.J. Burckhart, Ch. Dore, P. Extermann, M. Gailloud, C.N.P. Gee, W.M. Gibson, R.J. Gray, P. Jacot-Guillarmod, P.W. Jeffreys, W.S. Louis, P. Muhlemann, R.C. Owen, J. Perrier, K.J.S. Ragan, Ph. Rosselet, B.J. Saunders, P. Schirato, H.W. Siebert, V.J. Smith, K.-P. Streit, J.J. Thresher, R. Weill, A.T. Wood, and C. Yanagisawa. Z. Phys. C - Particles and Fields, 35:143, 1987.

[60] M. Bourquin, R.M. Brown, J.C. Chollet, A. Degre, D. Froidevaux, M. Gailloud, C.N.P. Gee, J.P. Gerber, W.M. Gibson, P. Igo-Kemenes, P.W. Jeffreys, M. Jung, B. Merkel, R. Morand, H. Plothow-Besch, J.-P. Repellin, J.-L. Riester, B.J. Saunders, G. Sauvage, B. Schiby, H.W. Siebert, V.J. Smith, K.-P. Streit, R. Strub, J.J. Thresher, and S.N. Tovey. Nucl. Phys., B241:1, 1984.

[61] S.L. Wilson, R. Hofstadter, E.B. Hughes, Y.C. Lin, R. Parks, M.W. Ritter, J. Rolfe, R.D. Bolton, J.D.
Bowman, M.D. Cooper, J.S. Frank, A.L. Hallin, P. Heusi, C.M. Hoffman, G.E. Hogan, F.G. Mariam, H.S. Matis, R.E. Mischke, D.E. Nagle, L.E. Piilonen, V.D. Sandberg, G.H. Sanders, U. Sennhauser, R. Werbeck, R.A. Williams, D.P. Grosnick, S.C. Wright, and J. McDonough. Nucl. Instrum. Methods, A264:263, 1988.

[62] Allen Avionics, Inc., Catalog 24D, 224 E. 2nd St., Mineola, New York, 1982.

[63] R.L. Ford and W.R. Nelson. The EGS code system: computer programs for the Monte Carlo simulation of electromagnetic cascade showers (version 3). SLAC-210, 1978.

[64] S.L. Wilson. A Search for Neutrinoless Muon Decay μ⁺ → e⁺γ. PhD thesis, Stanford University, 1985. LA-10471-T.

[65] R. Brun, F. Bruyant, M. Maire, A.C. McPherson, and P. Zanarini. Geant (version 3-11). 1986. Unpublished.

[66] D. Worledge and L. Watson. Developed at Rutherford Labs, 1973. Unpublished.

[67] D.J. Miller, R.J. Nowak, and T. Tymieniecka. Low and Intermediate Energy Kaon-Nucleon Physics, page 251. D. Reidel Publishing Co., 1980.

[68] J. Spuller and D.F. Measday. Phys. Rev., D12:3550, 1975.

[69] M. Goossens. Low and Intermediate Energy Kaon-Nucleon Physics, page 243. D. Reidel Publishing Co., 1980.

[70] A.D. Martin. Low and Intermediate Energy Kaon-Nucleon Physics, page 251. D. Reidel Publishing Co., 1980.

[71] S. Egli. AIDA, An Interactive Data Analysis Program, 1987. A general purpose analysis routine written at the University of Zurich.

[72] F. James and M. Roos. Comp. Phys. Comm., 10:343, 1975.

