UBC Theses and Dissertations


Data Acquisition for SuperCDMS SNOLAB, by William Alexander Page (2015)


Full Text

Data Acquisition for SuperCDMS SNOLAB

by William Alexander Page
B.A., Bowdoin College, 2013

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate and Postdoctoral Studies (Physics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2015

© William Alexander Page 2015

Abstract

The SuperCDMS SNOLAB experiment will use solid-state germanium and silicon detectors to search for Weakly Interacting Massive Particles (WIMPs), a leading candidate to explain dark matter. WIMPs are thought to exist in halos around galaxies and therefore to be constantly streaming through the earth. The CDMS detectors have been developed to measure the energy deposited by a WIMP-nucleon collision in terrestrial calorimeters.

This thesis focuses on the Data Acquisition (DAQ) system, which uses Detector Control and Readout Cards (DCRCs) and is designed to be dead-time-less. The DCRCs read in the data stream from each detector's 12 phonon and 4 ionization energy channels. The DCRCs also control detector settings, and we develop interactive code that allows users to easily change detector settings through the DCRC.

The DAQ is designed to decide which events to write to disk so as to keep data throughput under a limit, yet never miss an event that will be useful in the subsequent analysis. To this end we develop different readout methods of the detector array for the different calibration runs and WIMP-search runs. We also develop fast algorithms for rejecting events that fail certain criteria for usability. We also present a novel data compression method that reduces the total data volume by a factor of ~16 yet retains all important information. This method involves a large covariance matrix inversion, and we show that this inversion can be computed consistently provided that a sufficient amount of data has been used to build the covariance matrix.

We also develop a GUI that is a critical element of the detector testing program for SuperCDMS SNOLAB. 
The GUI accesses the data stream as it is being written to disk, efficiently reads in the waveforms, and displays them in a user-friendly, oscilloscope-like format. By making use of the Fast Fourier Transform, the GUI is also capable of displaying incoming data in the frequency domain. This tool will enable a new degree of real-time analysis of detector performance, specifically noise characteristics, at the test facilities in the next stage of detector testing.

Preface

The introductory chapters of this thesis (i.e. Chapters 1-3) are heavily cited to recognize the many accomplishments of others upon which a thesis such as this is based. None of the text is taken from previously published articles or internal collaboration documents.

Chapter 4 discusses software which I developed alongside my advisor, Scott Oser, and University of California Berkeley research scientist Bruno Serfass. Chapter 5 discusses the 'hybrid optimal filter' method, which was developed primarily by Scott Oser and myself; sources that we found helpful are cited throughout the text.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgements

1 Evidence for Dark Matter
  1.1 Early Evidence
  1.2 Galaxy Clusters
  1.3 The Cosmic Microwave Background
  1.4 Composition Hypotheses
    1.4.1 The WIMP Hypothesis

2 WIMP Detection
  2.1 Direct Detection

3 The Cryogenic Dark Matter Search
  3.1 Semiconductor Detector Physics
    3.1.1 Electron Recoils
    3.1.2 Nuclear Recoils
  3.2 Amplifiers
    3.2.1 Measuring the e−/h+ Energy
    3.2.2 Measuring the Phonon Energy
  3.3 iZIP Interleaved Design

4 Data Acquisition
  4.1 The DCRCs
  4.2 MIDAS and the Software Front Ends
  4.3 The DCRC Driver
  4.4 Detector Testing Tools
    4.4.1 The Pulse Display
  4.5 Readout Modes

5 The Hybrid Optimal Filter
  5.1 Fourier Transform of a Hybrid Trace
  5.2 Power Aliasing
  5.3 Fourier Transform without the FFT
    5.3.1 Fourier Frequencies of a Hybrid Trace
    5.3.2 Time Points of a Hybrid Trace
    5.3.3 The Transformation Matrix: A
  5.4 Determining and Inverting the Covariance Matrix
    5.4.1 The Ratio M/d
    5.4.2 Robust and Efficient Computation of the Covariance Matrix
    5.4.3 Inversion by Choleski Decomposition
  5.5 Time Domain Covariance Matrix and χ²
  5.6 Amplitude and Time Offset Estimation Results
    5.6.1 Phonon Pulse Simulation
    5.6.2 Pulse Amplitude Resolution
  5.7 Time Offset Resolution
  5.8 Data Volume Consequences
  5.9 Conclusion

Bibliography

Appendices

A The Imaginary Component of Zero and Nyquist Frequency
B Derivation of Aliasing Frequencies

List of Tables

3.1 Detector specs for different generations of the germanium CDMS detectors. Adapted from [24] and SuperCDMS SNOLAB detector performance projections, by permission.

5.1 Important time and frequency values for hybrid phonon waveforms.

5.2 Important time and frequency values for hybrid charge waveforms.

5.3 The baseline resolution of the HOF improves as M/d increases up to ~7. We simulated 1/f + white noise and 1000 phonon pulses (discussed further in sec. 5.6.1). The HOF approaches the limit of its best possible resolution, which is the resolution of the 1D uniform OF applied to the full 625 kHz, 32768-point trace. We also show the resolution obtained by determining the covariance matrix as described in sec. 5.4.2, which shows good performance.

5.4 Baseline and expected resolution of different-length 1D OFs and the HOF. The HOF performs as well as the 32768-point 1D OF. The uncertainty on the resolutions is σ_σ² = σ²·√(2/(N−1)), so σ_σ² = σ² × 0.014 for N = 10000, plus a small additional factor from the covariance matrix estimation. 
"1/f+wt.+δ" has large noise peaks injected at 100, 1300, and 2200 Hz. "1/f+wt.+∼" has the normal white and 1/f features, on top of which we add a sinusoid in J, the PSD, up to 40 kHz. While the HOF seems to have superior resolution in most cases, this improvement is mostly within the 1σ uncertainty of the standard deviation, and thus we take the HOF and the 32768-point OF to perform equally well.

5.5 Time offset resolution results and the time offset's effect on the pulse amplitude estimation. The time offset estimation with 1024 points is nearly as good as with 32768 points.

5.6 Projected data throughput values for 42 iZIPs. A 5 Hz trigger rate per detector is assumed for the Ba calibration. The live fraction is the assumed fraction of operation time spent in the specified mode. The 200 TB/year and 897 TB/year values assume recomputation of the covariance matrix every 3 hours (potentially naïve).

List of Figures

1.1 Rotational curves, similar to those published by Rubin, overlaid on an image of the Andromeda galaxy (M31) taken by the Palomar Sky Survey. The roughly constant velocity at high radius indicates that the high-luminosity galactic center makes up a small fraction of the total mass. (© Vera C. Rubin and Janice S. Dunlap, courtesy of Carnegie Institution DTM, by permission [3]).

1.2 (Left) Abell 1689 (© 2008 X-ray: NASA/CXC/MIT/E.-H. Peng et al.; Optical: NASA/STScI, by permission) [8]. (Right) 1E 0657-56 (© 2004 X-ray: NASA/CXC/CfA/M. Markevitch et al.; Optical: NASA/STScI; Magellan/U. Arizona/D. Clowe et al.; Lensing Map: NASA/STScI; ESO WFI; Magellan/U. Arizona/D. Clowe et al., by permission) [9].

1.3 CMB power spectrum predicted by a Λ-CDM cosmology (i.e. a dark energy (69%) and cold dark matter (26%) dominated universe). Data points in red are measurements by the Planck collaboration [11]. 
Power spectrum of temperature fluctuations in the Cosmic Microwave Background (© 2013 ESA/Planck, and the Planck collaboration, from Planck 2013 results. I. Overview of products and results, by permission).

1.4 Number density of WIMPs in the Universe as a function of time, where the relic density depends on the WIMP annihilation cross section, σ_χχ̄ (σ in the figure). On the x-axis, m is the WIMP mass while T is the temperature of the universe, converted to a mass/energy by Boltzmann's constant. Evolution of a typical WIMP number density in the early universe (© NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration, by permission).
173.4 (Left) A schematic of the CDMS II detector. The four phononchannels are colored on the top surface and the charge elec-trodes (Qi and Q0) are pictured on the lower surface. Figurefrom [37]. (Right) The iZIP design with interleaved chargeelectrodes (±2V ) and phonon rails (0V). Internal CDMS fig-ure, used with permission, from [34]. . . . . . . . . . . . . . 203.5 (Left) The electric field and potential lines produced from thephonon rails (yellow) and charge electrodes (green). Noticethat the unique surface E-field extends ∼1mm into the crystaland therefore surface events within this margin should exhibitasymmetric charge collection. (Right) Data from T3Z1 show-ing surface event discrimination (discussion in main text). In-ternal CDMS figure, used with permission, from [34]. . . . . 214.1 Cartoon showing the proposed phonon waveform readout andthe ability to monitor low frequency noise. Hybrid waveformreadout is discussed in chapter 5. . . . . . . . . . . . . . . . 23ixList of Figures4.2 A Rev C DCRC. The 50-pin connector interfaces with thebias and sensor lines to the detector. The board receivespower over the ethernet and also communicates with the DAQthrough the ethernet port. . . . . . . . . . . . . . . . . . . . 234.3 A cartoon roughly depicting the communication between thetrigger front end, tower front end, the DCRC’s trigger buffer,and the DCRC’s circular buffer. The hypothetical data streamshows triggers labeled with yellow arrows. The time stampsof the triggers are continuously recorded in the trigger bufferand periodically read out by the trigger front end. The trig-gers with yellow arrows too close together would be rejectedas piled-up while the other triggers would be read by thetower front end from the circular buffer. . . . . . . . . . . . 254.4 (Left) Two iZIP phonon pulses that are piled-up. (Right)The rate of usable (non piled-up) events vs. the raw eventrate. 
For this plot a 52ms long trace has been assumed (sincethis is the proposed data acquisition trace length), where anoptimal raw rate is ∼ 20Hz giving a ∼7Hz usable rate. . . . 274.5 A screen shot of the Pulse Display displaying the PSD of twocharge channels in real time. Behind the scenes, the timedomain traces are grabbed from the SYSTEM buffer whichare then Fourier transformed, and displayed on the screen. . 284.6 A screen shot of one of the existing detector control GUIs,written primarily by A. Roberts. The detector settings (thevalues in red) are controlled by the user. Internal CDMSfigure, used with permission. . . . . . . . . . . . . . . . . . . 315.1 Sampling scheme for phonon and charge traces. Roughly toscale. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345.2 Fourier coefficient spacing for phonon waveforms. . . . . . . 365.3 A PSD, using the the pasting method, of a hybrid phonontrace with certain injected frequencies. There are aliasedpeaks at f=14.062kHz and f = 17.186kHz (discussed in maintext). The red line marks the folding point, fNyq,slow at19531Hz. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375.4 There are 16 frequency components in the orthogonal basisthat contribute to the cosine coefficient at ∼ 9538Hz in thenon-orthogonal basis. These frequencies match the predictedalias frequencies derived in Appendix B. . . . . . . . . . . . . 46xList of Figures5.5 A phonon template, with Af = 0.9, Tf = 2771µs,Rf =25µs, Ff = 175µs,As = 0.4, Ts = 2771µs,Rs = 100µs, Fs =816µs. The inset shows the 1024 points of fast sampled data. 48xiAcknowledgementsI am extremely thankful to my advisor Scott Oser for his supervision overthese past two years. I will always be amazed by the amount of time thathe is willing to invest in the education of his students. 
His direction and patience over my months of chipping away at the 'hybrid optimal filter' problem resulted in a successful conclusion (and solution), and I thank him for giving me the opportunity to take on this challenge. I am grateful for all that I have learned under his guidance.

I also owe a great thank you to Bruno Serfass, with whom I started working on Data Acquisition software, but who became an approachable source of knowledge on anything CDMS related. I am additionally indebted to Bruno for introducing me more broadly to the CDMS community, beyond the University of British Columbia group. I sincerely hope he has not wasted too much of his time in replies to my emails.

Matt Pyle has also been exceptionally generous with his time over this last summer of my M.Sc. work. It has been a privilege to learn about the CDMS detector technology from one of the foremost experts. I would also like to thank Kedar Page for meeting up with me in cafés around Vancouver to answer all the questions I had about CDMS, even though he had already completed his degree.

I also thank my family for their encouragement over these last two years.

Chapter 1

Evidence for Dark Matter

1.1 Early Evidence

In the 1920s the Dutch astronomer Jan Oort, observing the motion of stars in the Milky Way, measured the rotational speed of these stars around the galactic center. By this time, it had been determined that the bulk of the galaxy's luminous mass was at its center, and this mass had been experimentally deduced. The rotational speeds of the stars found by Oort were significantly greater than those given by Newton's law of gravitation [1].

In 1933 Fritz Zwicky found the same discrepancy between expected and observed galactic rotational speeds in his observations of the Coma galaxy cluster. He computed the mass of the Coma cluster by way of the virial theorem and found it to be 400 times larger than the mass due to the luminous part alone. 
Zwicky called this the "missing mass" problem [2].

Since the 1930s more accurate measurements have been made of the rotation speeds of stars in our own and other galaxies. Vera Rubin pioneered the work on these measurements in her observations of the Andromeda galaxy in the 1970s. She represented her data as plots of stellar rotational speed vs. distance from the galactic center, known as rotation curves, shown in fig. 1.1. Rubin is credited with publishing the first robust evidence for dark matter and solidifying Zwicky's, Oort's, and others' earlier predictions of the existence of dark matter [3].

Newton's law of gravitation gives the rotational speed of the stars, assuming that all matter in the galaxy is luminous and roughly estimated to be concentrated at the center, as:

    v(r) = √(GM) · √(1/r).

From Rubin's and other more recent measurements (e.g. [4]) we know that the rotation speed of stars does not fall off as r^(−1/2). Rather, speeds are roughly constant at large radius from the galactic center, suggesting that luminous matter makes up only a small portion of the total mass and that non-luminous "dark" matter is distributed around the galaxy.

Figure 1.1: Rotational curves, similar to those published by Rubin, overlaid on an image of the Andromeda galaxy (M31) taken by the Palomar Sky Survey. The roughly constant velocity at high radius indicates that the high-luminosity galactic center makes up a small fraction of the total mass. (© Vera C. Rubin and Janice S. Dunlap, courtesy of Carnegie Institution DTM, by permission [3]).

If we instead model the mass in the galaxy as a spherically symmetric distribution that varies with distance, M(r), the constant stellar rotational velocity at large distances, v = C = √(GM(r)) · √(1/r), can be explained by an enclosed mass that varies linearly with the distance from the center:

    M(r) ∝ r.

Therefore the enclosed mass has a density proportional to the inverse square of the distance from the galactic center, ρ(r) ∝ 1/r². 
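The contrast between these two mass models can be sketched numerically. This is an illustrative script of my own (not from the thesis); the point mass M and the halo normalization k are round placeholder values chosen only to give realistic ~200 km/s speeds.

```python
import math

# Gravitational constant in galactic units: kpc * (km/s)^2 / M_sun
G = 4.30091e-6

def v_point_mass(r_kpc, M=1e11):
    """Keplerian speed v = sqrt(G M / r) for a central point mass, in km/s."""
    return math.sqrt(G * M / r_kpc)

def v_halo(r_kpc, k=1e10):
    """Speed for an enclosed mass M(r) = k*r (i.e. rho ~ 1/r^2), in km/s."""
    return math.sqrt(G * k * r_kpc / r_kpc)  # reduces to sqrt(G*k): flat in r

for r in (5.0, 10.0, 20.0, 40.0):
    print(f"r = {r:5.1f} kpc   point mass: {v_point_mass(r):6.1f} km/s"
          f"   M(r) ∝ r halo: {v_halo(r):6.1f} km/s")
```

The point-mass curve falls off as r^(−1/2) while the M(r) ∝ r halo gives a speed independent of radius, which is the flat behavior seen in the rotation-curve data.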
Most of this matter must be dark, because the luminous mass density falls off much more rapidly. To account for this missing matter, the leading model is a spherically symmetric "halo" of dark matter distributed throughout our galaxy and other galaxies, interacting gravitationally and making up the majority of mass in the galaxy and the universe.

1.2 Galaxy Clusters

Measurements of galaxy clusters since Zwicky have provided additional evidence for dark matter and important information regarding the composition of dark matter. Observations of galaxy clusters, the largest gravitationally collapsed astrophysical structures, are particularly powerful because different techniques can be used to make independent measurements of their mass.

The Chandra Observatory measures the x-ray emission from the intergalactic gas of galaxy clusters. With the clusters' gravitational potential largely due to dark matter, their intergalactic gas gains kinetic energy and heats to ~10⁸ K, making clusters among the brightest x-ray sources [6]. By measuring the x-rays, the temperature and pressure of the gas can be computed. If the cluster is in equilibrium (which excludes merging or colliding clusters), the gravitational potential can be computed under the good assumption that gravitational forces cancel with pressure forces. Recent publications matching models of the interstellar gas and dark matter halo to Chandra Observatory data indicate that ordinary matter makes up 12-15% of the total mass of the cluster [6][7]. Figure 1.2 (left) shows Abell 1689, the largest known galaxy cluster, which contains ~1000 galaxies. The intergalactic gas, emitting in the x-ray, is shown in purple. The optical image is overlaid and shows some distortion and arcing of luminous background objects characteristic of strong gravitational lensing.

In an independent measurement of the cluster mass, the Hubble Space Telescope measures light from background objects that bends around the massive structures. 
The total gravitational mass of the cluster is computed from the strength of the gravitational lensing, which confirms that the majority of the gravitational mass of the clusters is not due to the luminous matter [5].

Figure 1.2: (Left) Abell 1689 (© 2008 X-ray: NASA/CXC/MIT/E.-H. Peng et al.; Optical: NASA/STScI, by permission) [8]. (Right) 1E 0657-56 (© 2004 X-ray: NASA/CXC/CfA/M. Markevitch et al.; Optical: NASA/STScI; Magellan/U. Arizona/D. Clowe et al.; Lensing Map: NASA/STScI; ESO WFI; Magellan/U. Arizona/D. Clowe et al., by permission) [9].

The "Bullet" Cluster is one of the most famous examples of dark matter making itself apparent in our universe. The Hubble Space Telescope and Chandra Observatory have observed the collision of two galaxy clusters and measured the distributions of both the gravitational matter, from strong gravitational lensing, and the luminous matter, from x-ray emission. As shown in figure 1.2 (right), these two distributions are observed to have separated. The luminous matter (pink) is superimposed on the distribution of gravitational matter (i.e. predominantly dark matter) as measured by lensing (purple). The two dark matter distributions have passed through each other, while the luminous distributions lag behind due to the impedance of their collisions. The nearly non-interacting dark matter streams through unimpeded.

1.3 The Cosmic Microwave Background

If we follow this chapter's progression to physically larger scales, from the individual galaxies of section 1.1 to the galaxy clusters of section 1.2, then the Cosmic Microwave Background (CMB) is certainly the next topic to discuss. Approximately 400,000 years after the Big Bang, the expanding universe cooled to a point where it became energetically favorable for the plasma of protons and electrons to fall out of equilibrium with photons and form neutral hydrogen. 
At this point of 'recombination', the universe became transparent to the photons which make up the CMB radiation, which matches a blackbody spectrum with (currently) a temperature of 2.7 K. Slight differences, or anisotropies, in this temperature across the sky, on the order of 10 µK, have been a rich source of cosmological information, including the most accurate measure of the non-baryonic (dark matter) matter density in the universe.

Transforming the spatial anisotropies into spherical harmonics (and taking the magnitude) gives the CMB power spectrum. The acoustic peaks of the power spectrum show the angular scales at which the photons were slightly hotter and denser than average at recombination (and today). The hotter overdensities are well understood as regions where the photons, coupled to the baryonic matter right up to the point of recombination, had clumped together because of gravitational wells into which the baryonic matter was attracted.

Most importantly for the case of non-baryonic dark matter, the size of the peaks would be much smaller if these gravitational wells had been formed by baryonic matter alone. This is because the pressure of the photons coupled to the baryonic matter opposes the formation of gravitational wells. In order to accurately model the location and size of the peaks, a decoupled non-baryonic matter component must exist which continues to collapse regardless of the photon restoring force.

Figure 1.3: CMB power spectrum predicted by a Λ-CDM cosmology (i.e. a dark energy (69%) and cold dark matter (26%) dominated universe). Data points in red are measurements by the Planck collaboration [11]. Power spectrum of temperature fluctuations in the Cosmic Microwave Background (© 2013 ESA/Planck, and the Planck collaboration, from Planck 2013 results. I. Overview of products and results, by permission).
The photon pressure restoring force does set up an oscillation of the baryon-photon plasma, which is highly sensitive to the non-baryonic matter density, and which is imprinted on the CMB at last scattering in the form of the peaks in the power spectrum [13].

While the baryonic and non-baryonic matter densities are degenerate with other cosmological parameters in determining the location and size of the peaks, the degeneracies can be broken by including other cosmological data (e.g. 21 cm and supernova measurements). The best current measurement of the cold dark matter density fraction of the universe is 26% (with a 69% dark energy component) [11].

1.4 Composition Hypotheses

Despite overwhelming observational evidence that dark matter does exist, very little is known about its composition. A number of theories have been put forth.

In one effort to account for the dark matter, collaborations searched for massive compact halo objects (MACHOs), such as black holes or massive non-luminous planets. They looked for MACHOs in the Milky Way by waiting for slight unexpected gravitational lensing of distant luminous galaxies as a MACHO passed between us and the galaxy. These searches ruled out the possibility of MACHOs constituting any more than 25% of the Milky Way's dark matter halo, and therefore disqualified them as the primary dark matter candidate [10].

Most theories predict that non-baryonic particles make up dark matter halos around galaxies, but still there exist many possibilities for the type of particle. If we assume that dark matter is non-baryonic, it is highly likely that such dark matter is also non-relativistic, or "cold", dark matter. Relativistic, or "hot", dark matter conflicts with the accepted model of galaxy formation [14]. Returning to the discussion of section 1.3, had the dark matter been relativistic, its kinetic energy would have largely prevented its gravitational collapse. 
However, the gravitational landscape at recombination is well understood, and not only is it imprinted on the CMB but it also explains the formation of the dense small-scale structures (galaxies) seen in the universe today. That the majority of dark matter is cold rules out a hot relativistic (neutrino-like) species from contributing substantially to the 26% dark matter component.

With these requirements, axions and WIMPs are the two leading dark matter candidates. The axion particle was originally proposed as a solution to the strong CP problem [15] and has since become a leading ultra-light mass (100keV to 1MeV) dark matter candidate. The ADMX experiment searches for axions by detecting axion-to-photon conversion in a strong magnetic field resonance cavity [16].

The SuperCDMS experiment, along with many competitor experiments, searches for WIMPs. We will devote the remainder of this chapter and chapter 2 to discussion of this hypothetical particle.

1.4.1 The WIMP Hypothesis

The WIMP hypothesis is intriguing because it fits into a parameter space supported by supersymmetric (SUSY) theory as well as cosmology. By satisfying the above cosmological constraints, it is likely that WIMPs are a thermal relic of the Big Bang phase of the universe that "froze out" of the primordial plasma at early times. The dark matter density at freeze-out (and today) is highly sensitive to the WIMP cross section σ_χχ̄. For WIMPs to make up the 26% energy density of the universe, the WIMP cross section must be on the scale of the weak force, where SUSY postulates that new particles could exist.

Figure 1.4: Number density of WIMPs in the Universe as a function of time, where the relic density depends on the WIMP annihilation cross section, σ_χχ̄ (σ in the figure). On the x-axis, m is the WIMP mass while T is the temperature of the universe, converted to a mass/energy by Boltzmann's constant. Evolution of a typical WIMP number density in the early universe (© NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration, by permission).

A number of assumptions regarding matter in the early universe allow cosmologists to estimate the WIMP cross section and mass. First, WIMPs would have been in constant creation and annihilation until some critical point of the universe's cooling where the low temperature would prevent any further WIMP creation. Following this, expansion of the universe would have made it exponentially unlikely that a WIMP would collide with its antiparticle and annihilate [18]. This second critical moment, when annihilation ceases, determines the particle abundance and depends on the WIMP annihilation cross section, as shown in Fig. 1.4. In order to account for the dark matter in the universe, the WIMP annihilation cross section is estimated to be roughly at the scale of the weak force, where yet-undiscovered particles are expected to exist [19].

This coincidence is what some refer to as the "WIMP miracle." SUSY was proposed in order to solve the hierarchy problem (the large discrepancy between the magnitude of the weak force and the gravitational force), and SUSY could provide elegant means of unifying gravity with the other three fundamental forces [19]. SUSY adds particles to the Standard Model, and the lightest of these particles, the neutralino, could be exactly what experimentalists are looking for in the WIMP. The Minimal Supersymmetric Standard Model places the neutralino mass below the TeV scale [19]. 
This conver-gence of SUSY and cosmology is the primary motivation behind the WIMPhypothesis and has launched the dozens of experiments attempting to detectWIMPs [19].8Chapter 2WIMP DetectionThe WIMP dark matter detection effort is sufficiently large that this thesiswill not attempt to review the field of experiments pushing to make the firstcredible WIMP discovery. Here we quickly mention the leading searchesthat are constraining dark matter parameters and excluding certain models.Experimentalists at the LHC attempt to create dark matter throughparticle collisions, and to observe an absence of energy in their detector asthe dark matter particle passes through [20]. Telescopes on earth and aboardsatellites seek to observe excess gamma ray signals in nearby dwarf galaxiesas a signature of dark matter annihilation in the dense galactic center [21].Experiments like Super-Kamiokande and IceCube search for dark matterannihilation through the neutrinos they produce [22].Direct detection experiments look for WIMP-nucleon recoils in terres-trial detectors and employ different targets (e.g. liquid argon, liquid xenon,germanium, silicon, calcium tungstate), background rejection techniques,amplifiers, and/or energy thresholds. Direct detection experiments are opti-mized (or equivalently, limited) to certain WIMP masses and cross sectionsbased on their detector technology.2.1 Direct DetectionIn order to measure the energy deposited by a WIMP-nucleon collision,detectors must be capable of measuring energies on the order of 1keV. The(purely classical) expected energy transfer to a target in a WIMP collisionis given by:Erecoil = (mχmTmχ +mT)2 v2mT(1− cos(θχ)) (2.1)where mχ is the WIMP mass, mT is the target mass, v is the WIMP velocity,and θχ is the WIMP scattering angle.The WIMP velocity is given by v and deserves brief discussion. In theStandard Halo Model (SHM) the dark matter halo is overall stationary with92.1. 
respect to the galaxy, and therefore the local WIMP velocity is given approximately by the Sun's rotational velocity (v ≈ 220 km/s ≈ 7 × 10⁻⁴ c). It is standard to model statistical fluctuations in the WIMP velocity, in the galactic rest frame (where the bulk WIMP velocity is zero), by a Maxwellian distribution [25]:

f(v) = A e^(−v²/v₀²)  for v < v_esc;  f(v) = 0  for v > v_esc    (2.2)

where v₀ = 7 × 10⁻⁴ c and v_esc is the local escape velocity (v_esc ≈ 4.5 × 10⁻³ c).

One key element of direct detection is made clear from equation 2.1: the WIMP cannot efficiently transfer energy to target components that are much less massive than a nucleon. Consider the maximum energy transfer of a WIMP-electron collision (θ_χ → 180°, m_T ≈ m_e), giving E_recoil ≈ 2 m_e v² ≈ 0.5 eV. Signals of this magnitude will be entirely buried under the noise floor of detectors for even the lowest-threshold next-generation dark matter experiments (SuperCDMS SNOLAB HV detector thresholds will be on the order of 100 eV).

Instead consider the maximum energy transfer of a WIMP-nucleon collision where the dark matter particle is well matched kinematically to a Ge nucleus target (θ_χ → 180°, m_χ ≈ m_T ≈ 72 m_p). In this case E_recoil ≈ (1/2) m_T v² ≈ 16.5 keV, which is certainly an energy capable of detection.

Next, since the local dark matter density is approximately 0.3 GeV/cm³ (i.e. many WIMPs streaming through the detectors every day), direct detection experiments hope to measure this steady rate of WIMP events. The expected differential scattering rate, which of course depends on E_recoil, is given by:

dR/dE_recoil = (ρ / (m_T m_χ)) ∫_{v_min}^{∞} v f(v) (dσ_χT/dE_recoil)(v, E_recoil) dv    [keV kg day]⁻¹    (2.3)

where dσ_χT/dE_recoil is the differential cross section and v_min is the minimum WIMP velocity required to produce recoil energy E_recoil.¹

Except for the differential cross section dσ_χT/dE_recoil and the WIMP mass, all the parameters of the differential scattering rate are estimated in the SHM.
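The two limiting cases of equation 2.1 above can be checked numerically. The following Python snippet is purely illustrative (it is not part of the DAQ code); it evaluates equation 2.1 with the masses and the v = 7 × 10⁻⁴ c quoted in the text, and the function name is ours.

```python
def recoil_energy_ev(m_chi_ev, m_t_ev, v_over_c, cos_theta):
    """Eq. 2.1: classical WIMP-target energy transfer.
    Masses in eV/c^2, speed in units of c; returns eV."""
    mu = m_chi_ev * m_t_ev / (m_chi_ev + m_t_ev)   # reduced mass
    return mu ** 2 * v_over_c ** 2 / m_t_ev * (1.0 - cos_theta)

M_P, M_E = 938.272e6, 0.511e6   # proton and electron masses [eV/c^2]
V = 7e-4                        # local WIMP speed, ~220 km/s, in units of c

# Head-on (theta -> 180 deg) scatter of a 100 GeV WIMP off an electron:
# ~2 m_e v^2 ~ 0.5 eV, hopelessly below any realistic threshold.
e_electron = recoil_energy_ev(100e9, M_E, V, -1.0)

# Head-on scatter off a kinematically matched Ge nucleus (m_chi ~ m_T ~ 72 m_p):
# ~(1/2) m_T v^2 ~ 16.5 keV, comfortably detectable.
e_ge = recoil_energy_ev(72 * M_P, 72 * M_P, V, -1.0)

print(e_electron, e_ge / 1e3)
```

The two cases differ by more than four orders of magnitude, which is the kinematic argument for nuclear (rather than electron) recoils as the WIMP signal.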
¹ The rotation of the earth around the sun seasonally adds to and subtracts from the WIMP velocity relative to the earth, and detecting a seasonal variation in a possible WIMP signal would be another sign that the signal is indeed the dark matter halo. A different direct detection experiment, the DAMA/LIBRA collaboration, claims to be seeing this annual modulation in its data [27].

The differential cross section, while largely unknown, clearly has large implications on the detectability of WIMP particles. The total
cross section could be the sum of a spin independent and a spin dependent term, although the spin independent term is expected to be amplified. As Goodman and Witten noted in their seminal paper [23], the spin independent term in the cross section scales as the number of nucleons (A) squared, which greatly increases the likelihood of a direct WIMP detection. The A² scaling arises because low-momentum WIMP-nucleon collisions do not probe the nuclear structure, so the scattering amplitudes add coherently. Even so, lower bounds on the WIMP cross section of most models extend below the coherent neutrino scattering limit shown in figure 2.2. The solar and atmospheric neutrinos to which detectors become sensitive below these WIMP cross sections present a serious obstacle for beyond-next-generation direct detection experiments.

Figure 2.1: The expected WIMP event rate for the given m_χ and σ_SI = σ_χT. σ_SI = 10⁻⁴¹ cm² corresponds to roughly the cross section reported by DAMA/LIBRA, CRESST, CDMS Si, and CoGeNT. σ_SI = 10⁻⁴⁵ cm² corresponds to a cross section just below published sensitivities from 2013. Integrated rate above threshold is shown vs. recoil energy threshold for Ge, Si, Na, and Xe targets. Internal CDMS figure, used with permission, from [26].

More optimistically, D. Moore computed the integral in equation 2.3 for different targets and different cross sections (figure 2.1).
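The threshold dependence in Moore's figure can be reproduced schematically. The sketch below numerically integrates equation 2.3 with the truncated Maxwellian of equation 2.2 and a constant differential cross section, so only the relative rate vs. threshold is meaningful; the absolute normalization, the 100 GeV WIMP mass, the Ge target, and all function names are illustrative assumptions.

```python
import math

V0, VESC = 7e-4, 4.5e-3          # SHM speed parameters from the text [units of c]
M_P = 938.272e6                  # proton mass [eV/c^2]

def f_maxwell(v):
    """Truncated Maxwellian speed distribution of eq. 2.2 (unnormalized)."""
    return math.exp(-(v / V0) ** 2) if v < VESC else 0.0

def vmin(e_recoil_ev, m_t_ev, m_chi_ev):
    """Minimum WIMP speed capable of producing recoil energy e_recoil."""
    mu = m_t_ev * m_chi_ev / (m_t_ev + m_chi_ev)
    return math.sqrt(e_recoil_ev * m_t_ev / 2.0) / mu

def relative_rate(e_thresh_ev, m_t_ev, m_chi_ev, n_e=300, n_v=200):
    """Midpoint-rule integral of eq. 2.3 over E > threshold, with a constant
    differential cross section: only the shape vs. threshold is meaningful."""
    e_max = 2e5                  # recoils above ~200 keV contribute little
    de = (e_max - e_thresh_ev) / n_e
    total = 0.0
    for i in range(n_e):
        e = e_thresh_ev + (i + 0.5) * de
        lo = vmin(e, m_t_ev, m_chi_ev)
        dv = (VESC - lo) / n_v
        if dv <= 0.0:
            continue             # even the fastest halo WIMPs cannot reach this E
        v_integral = sum((lo + (j + 0.5) * dv) * f_maxwell(lo + (j + 0.5) * dv)
                         for j in range(n_v)) * dv
        total += v_integral * de
    return total

m_ge, m_chi = 72 * M_P, 100e9    # Ge target, 100 GeV WIMP
# Raising the threshold from 1 keV to 10 keV cuts the integrated rate sharply.
print(relative_rate(10e3, m_ge, m_chi) / relative_rate(1e3, m_ge, m_chi))
```

The steep falloff with threshold is why so much of the detector R&D described later aims at lowering thresholds rather than simply adding mass.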
In the (overly optimistic) left plot he used a cross section from disputed WIMP detection claims, and in the right plot he used cross sections just below the current published sensitivities. As a rough reference, reading off from figure 2.1 (left), a recoil threshold of 6 keV gives a rate of 1/10 [events kg⁻¹ day⁻¹]. CDMS II had roughly 5 kg of detector bulk (Ge and Si), translating to a rate of 0.5 events per day.

At these relatively low nuclear-recoil energies and low event rates, one fundamental challenge to direct detection experiments is background discrimination. One advantage is that the majority of backgrounds will scatter off electrons in the detector bulk. All competitive direct detection technologies (with the exception of CDMS HV detectors) have means to distinguish electron recoils from nuclear recoils and thus reject background events. CDMS's background discrimination technique will be discussed in more detail in chapter 3.

The above is in principle how CDMS and other direct detection experiments intend to discover WIMPs and measure σ_χT and m_χ. However, ever since 2002 when the first generation of CDMS results was published, no such rate (that has been widely accepted within the community) has been observed. CDMS has gone through three generations of experiments: CDMS, CDMS II, and SuperCDMS Soudan, and is now preparing for SuperCDMS SNOLAB. Each generation of the experiment has increased the total detector mass and implemented improved detector technology. In two of the iterations the detectors were moved to a cosmogenically cleaner (deeper) and radiogenically cleaner site. Meanwhile competitor experiments made similar improvements and new detection technologies were developed, all in order to combat the two primary difficulties facing WIMP direct detection: (1) a low rate of WIMP-nucleon collisions, and (2) background rejection.

Figure 2.2: Current and projected limits on the WIMP mass-cross section parameter space.
Figure from SuperCDMS collaboration standard public plots [29].

Chapter 3

The Cryogenic Dark Matter Search

3.1 Semiconductor Detector Physics

The CDMS detector bulk is ultra-pure crystalline germanium and silicon. The response of cryogenically cooled crystalline semiconductors to a nuclear recoil versus an electron recoil provides CDMS with the ability to discriminate between a potential WIMP signal and background events. Additionally, the availability of radioactively stable Ge and Si, free of radioactive contaminants, ensures that any energy deposited in the crystal is the result of a foreign particle collision. Si and Ge also have good charge transport properties and small band gaps [28], the importance of which will be elaborated on in section 3.1.1.

As shown in Table 3.1, the mass of each individual detector is on the order of one kg and they are about the size of a hockey puck (fig. 3.1). The detectors are stacked in towers, each of which (for SuperCDMS SNOLAB) holds 6 detectors.

Figure 3.1: A SuperCDMS SNOLAB iZIP 6 detector. Figure from SuperCDMS collaboration standard public plots [29].

                            CDMS II   SCDMS Soudan   SCDMS SNOLAB
Detector                    ZIP       iZIP           iZIP (HV)
Mass per Detector [kg]      0.25      0.62           1.38 (1.38)
Number of Detectors         19        15             ∼42 (∼6)
Total Ge Mass [kg]          4.75      8.89           ∼58 (∼8)
Phonon Channels per Det.    4         8              12 (∼16)
Phonon Energy Res. [eV]     180       200            ∼75 (∼50)
Trigger Threshold [eV]      ∼2000     ∼3000          ∼550 (∼350)
Charge Energy Res. [eV]     300       450            ∼200 (–)

Table 3.1: Detector specs for different generations of the germanium CDMS detectors. Adapted from [24] and SuperCDMS SNOLAB detector performance projections, by permission.

Ever since 2003, CDMS has been operating out of the Soudan mine (∼2300 ft below surface) in Minnesota, and the SuperCDMS SNOLAB experiment is scheduled to start taking data in the SNOLAB facility (∼6800 ft below surface) in 2018.
The earth above these facilities serves as a shield against cosmic ray muons.

Additional shielding is added locally around the detectors. The towers sit in a muon veto cage in the event that a cosmic ray muon does penetrate into the facility. Ancient, non-radioactive lead surrounds the detectors and serves to absorb gamma rays, while a polyethylene cover is in place to absorb neutrons. A copper enclosure shields from alpha and beta radiation.

Because any particle (WIMP or background) that interacts within the detector does so in (1) an electron recoil or (2) a nuclear recoil, we now discuss the basic physics of these interactions and why they form the basis of the detectors' sensitivity to WIMPs. The ideas behind this discussion of nuclear vs. electron recoils in semiconductors come from Jan Lindhard's work, published in the 1960s, which is known as Lindhard theory [31].

3.1.1 Electron Recoils

Most background particles (α, β, γ) are far more likely to recoil off electrons in the detector bulk. For example, when a medium energy γ-ray (10 keV to 1 MeV) passes through the detector bulk, it is likely to Compton scatter and create electron-hole (e−/h+) pairs along its track [37]. In germanium, where the energy to create an e−/h+ pair (E_create) is 2.96 eV, a 10 keV γ produces ∼3000 e−/h+ pairs under the good assumption that it is fully absorbed. Similarly, α and β backgrounds recoil off electrons and ionize e−/h+ pairs in proportion to their energy.

A recoiling electron will lose energy by transferring it to other surrounding electrons. Fig. 3.2 gives the stopping power for recoiling electrons in Si (which is similar to that in Ge). Note that on this plot the x and y energy scales are comparable, and the y axis gives the energy loss per µm. Equivalently, all but the highest energy (> 10⁵ eV) recoiling electrons travel at most a matter of µm before losing most of their energy.
We can be confident in the assumption that all of a recoiling electron's energy will be deposited within the crystal.

Figure 3.2: Electron stopping power in silicon. Internal CDMS figure, used with permission, from [30].

How does the recoiling electron lose its energy to other electrons? An electron with sufficient energy to create another e−/h+ pair (E_create) will do so until its energy falls below E_create. The newly ionized electrons will similarly ionize other e−/h+ pairs, creating an electron cascade. Energy is distributed to other local electrons until every electron's energy is below E_create. At this point, the electrons are still capable of losing energy, but no longer by ionization. Instead they transfer their energy to vibrations in the crystal lattice (phonons), which then travel like particles through the crystal [24]. Electrons radiate in this manner until they reach the gap energy (E_gap) of the material, which is the minimum energy that an excited electron can have. Therefore, an electron recoil results in two types of energy deposited in the crystal: e−/h+ pair energy and phonon energy.

We can roughly calculate the expected energies channeled into e−/h+ pair and phonon production. Given some initial electron recoil energy, the fractional charge energy is given by E_gap/E_create and the fractional phonon energy by 1 − E_gap/E_create. In germanium, where E_gap = 0.785 eV and E_create = 2.96 eV, roughly 25% of electron recoil energy goes into e−/h+ pairs [24].

3.1.2 Nuclear Recoils

WIMPs and neutrons (a particularly toxic background) recoil off Ge and Si nuclei in the CDMS detector bulk. Nuclear recoils are similar to electron recoils in many ways; however, they differ most importantly in that a lower fraction of energy goes into e−/h+ pairs than in electron recoils.

When a nucleus recoils, it is capable of transferring energy to other nuclei and other surrounding electrons.
Nuclei are capable of this from a purely kinematic standpoint, whereas electrons are not because of their small mass [31]. Let us consider a high energy nuclear recoil and a low energy nuclear recoil separately in order to discuss how the energy is partitioned. In a high energy nuclear recoil (∼1 MeV in Ge), most of the energy goes into the electron cascade (figure 3.3) [31]. Therefore, a high energy nuclear recoil looks a lot like an electron recoil in the detectors.

Figure 3.3: The ionization yield of nuclear and electron recoil events at different energies. Ionization yield is defined as the fraction of recoil energy that goes into the e−/h+ pairs, with the electron recoil ionization yield normalized to one (as seen by the horizontal lines). Internal CDMS figure, used with permission, from [24].

A lower energy nuclear recoil, however, transfers energy more evenly between excitations of electrons and excitations of other nuclei [31]. The nuclei are freed from the crystal lattice and excite other nuclei in a cascade separate from the electron cascade. Once the nuclei have insufficient energy to excite another nucleus, they lose their energy to phonon production. This time, however, there is no analogue of E_gap for phonons as there was in the electron case. Therefore, nuclear cascades are more efficient than electron cascades in the final phonon production [31]. Figure 3.3 displays this aspect of nuclear recoils over a range of recoil energies.

From the discussion of section 2.1, clearly it is the lower energy (< 500 keV) nuclear recoils that are expected from WIMPs. The event discrimination in this energy range provides the CDMS detectors with their sensitivity to WIMP masses above 5 GeV.

3.2 Amplifiers

The detector surfaces are instrumented with sensors designed to measure the e−/h+ and phonon energy with optimal resolution.
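The electron-recoil bookkeeping of section 3.1.1 can be verified in a few lines. This is an illustrative Python sketch using the germanium values quoted above (E_create = 2.96 eV, E_gap = 0.785 eV); the function names are ours, not CDMS code.

```python
E_GAP, E_CREATE = 0.785, 2.96    # germanium values quoted in the text [eV]

def n_pairs(e_recoil_ev):
    """Number of e-/h+ pairs from a fully absorbed electron recoil."""
    return int(e_recoil_ev / E_CREATE)

def charge_fraction():
    """Fraction of electron-recoil energy left in e-/h+ pairs, Egap/Ecreate."""
    return E_GAP / E_CREATE

print(n_pairs(10e3))     # 3378 pairs for a 10 keV gamma (the text rounds to ~3000)
print(charge_fraction()) # ~0.265, the "roughly 25%" quoted above
```

The complementary fraction, 1 − E_gap/E_create ≈ 0.73, is what ends up in phonons; nuclear recoils shift this partition further toward phonons, which is exactly the handle used for discrimination.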
As resolutions improve, event discrimination improves and (perhaps more importantly) detector thresholds can be lowered.

3.2.1 Measuring the e−/h+ Energy

Once an electron cascade has occurred, the detectors prevent the excited e−/h+ pairs from de-exciting back into valence states by applying a voltage across the detector. The electric field drifts the charges to the flat detector endcaps. In standard CDMS iZIP detectors, the voltage has been tuned to the smallest value for which charge trapping in crystal impurities is still minimized. The optimal field is ∼1 V/cm. In SuperCDMS iZIP detectors, FET (Field Effect Transistor) amplifiers read out the image charge on the electrodes and amplify this signal as a voltage that is read out and further amplified by downstream amplifiers. Because the fall time of the SuperCDMS iZIP amplifier is larger than the charge collection time (∼1 µs), a voltage pulse proportional to the charge collected is the only information that is obtained from the charge amplifier [37]. SuperCDMS SNOLAB will use HEMTs (High Electron Mobility Transistors) in place of FETs, primarily to reduce the heat load on the fridge, but they rely on the same basic amplification principles described above.

3.2.2 Measuring the Phonon Energy

Six phonon channels are implemented on each detector surface, and each channel consists of thousands of Transition Edge Sensors (TESs). The TESs are made from superconducting tungsten whose transition temperature (T_c) is tuned in fabrication to be anywhere between 30 mK and 200 mK. The voltage-biased TESs are held within the range of their transition such that when heat from phonons reaches the sensors, their resistance changes rapidly and the current through them drops. The current through the TES is inductively coupled to a SQUID which further amplifies the reduction in current.
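To illustrate why a TES held in its transition is such a sensitive thermometer, here is a toy model. The logistic R(T) shape, the T_c = 60 mK, the 1 mK transition width, and the bias values are illustrative assumptions, not CDMS parameters.

```python
import math

def tes_resistance(t_mk, tc_mk=60.0, width_mk=1.0, r_normal_ohm=0.3):
    """Toy logistic model of a TES superconducting-to-normal transition."""
    return r_normal_ohm / (1.0 + math.exp(-(t_mk - tc_mk) / width_mk))

def current_drop(v_bias, t_mk, dt_mk):
    """Relative drop in current through a voltage-biased TES when a phonon
    burst heats it from t_mk to t_mk + dt_mk (the signal the SQUID sees)."""
    i0 = v_bias / tes_resistance(t_mk)
    i1 = v_bias / tes_resistance(t_mk + dt_mk)
    return (i0 - i1) / i0

# A 0.5 mK heat pulse near mid-transition changes the current by ~20%,
# while the same pulse far below the transition would do essentially nothing.
print(current_drop(1e-7, 60.0, 0.5))
```

The steepness of R(T) inside the transition is what converts a sub-millikelvin temperature excursion into a large fractional current change.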
TESs, which are at the heart of the CDMS detectors' sensitivity to dark matter, have also propelled the fields of x-ray astronomy and CMB cosmology to their current states, and the sensor technology is reviewed in K. Irwin's and G. Hilton's seminal chapter [32].

Unlike the charge measurement, more than just the phonon energy is encoded in the phonon measurement. We obtain an estimate of the event location by comparing the energy deposited in the different phonon channels. We have a good idea of how phonons move through the crystalline bulk because the anisotropic and incoherent propagation of phonons in crystalline semiconductors is well modeled by CDMS's detector Monte Carlo [33]. Therefore the partitioning of energy in the different channels allows a weighted average of sorts to determine the x and y coordinates of the initial event [37]. The pulse shape of the phonon energy signal is also occasionally used to obtain more information about the event.

3.3 iZIP Interleaved Design

While the CDMS II ZIP detectors were beautiful devices with world-leading WIMP sensitivity in their time, they suffered primarily from one design flaw. The ionization from electron recoils close to the detector surface was more prone to trapping, which reduced the ionization yield of the event [34]. This reduced ionization yield caused surface electron recoils to mimic the nuclear recoil signature and thus leak into the WIMP signal region. CDMS II sensitivities were limited by this background [34].

A new detector (the iZIP) was designed to provide a solution to discriminate the surface event background. The ionization electrodes are interleaved between the phonon sensors, which allows readout of both ionization and phonon energy on each detector face. Just as importantly, the phonon TES rails are maintained at 0 V while the ionization electrodes are kept at opposite potentials (±2 V in standard operation) on either face.

Figure 3.4: (Left) A schematic of the CDMS II detector. The four phonon channels are colored on the top surface and the charge electrodes (Qi and Q0) are pictured on the lower surface. Figure from [37]. (Right) The iZIP design with interleaved charge electrodes (±2 V) and phonon rails (0 V). Internal CDMS figure, used with permission, from [34].

This configuration produces a unique electric field within the crystal (figure 3.5) wherein the ionization from surface events will largely be collected on one side of the detector face. The CDMS and EDELWEISS collaborations have shown that the interleaved design allows for robust rejection of surface events. An analysis cut on asymmetric charge collection on side 1 vs. side 2 of the iZIP rejects the surface events.

Figure 3.5 (right) shows the ionization yield vs. recoil energy for events from 900 hours of exposure of Soudan iZIP T3Z1 to a 210Pb source. The 210Pb source was found, as expected, to produce ∼130 surface electron recoils per hour via beta decay. These events exhibit reduced ionization yield and fail the symmetric charge cut. They populate the region above the 2σ nuclear recoil band and below ionization yields of ∼1. The events below the germanium nuclear recoil band but also failing the symmetric charge cut are surface events from recoiling 206Pb nuclei (the end product of the 210Pb decay). The colored blue dots are events that pass the symmetric charge cut and accordingly show large ionization yield corresponding to bulk electron recoils. Out of the 90,000 events in this plot, two outliers exist which (barely) pass the charge symmetry cut but show low ionization yield; these are blue and circled in black [34]. Overall, this study demonstrates the robust surface rejection capability of the iZIP.

Figure 3.5: (Left) The electric field and potential lines produced from the phonon rails (yellow) and charge electrodes (green).
Notice that the unique surface E-field extends ∼1 mm into the crystal, and therefore surface events within this margin should exhibit asymmetric charge collection. (Right) Data from T3Z1 showing surface event discrimination (discussion in main text). Internal CDMS figure, used with permission, from [34].

Chapter 4

Data Acquisition

This chapter rapidly specializes, relative to the discussion of previous chapters, on the topic of Data Acquisition (DAQ) for SuperCDMS SNOLAB. We limit this chapter further by omitting in-depth discussion of the readout electronics and focus on the overall framework and various software elements of the proposed DAQ architecture.

Each iZIP will have a Detector Control and Readout Card (DCRC) connected to the upstream amplifiers and filters of the 12 phonon and 4 charge channels, and it will be continuously digitizing these signals. One substantial upgrade from previous generations of the CDMS electronics is the existence of deadtime-less triggering and readout. Instead of halting digitization upon readout, the DCRC can simultaneously read out waveforms from triggers and digitize the incoming data stream (further discussion below).

This parallelization of processes is powerful in other ways as well. For example, our DAQ system plans to take advantage of the DCRC's ability to read out very long traces (∼50 ms) without a livetime penalty. At the SuperCDMS Soudan experiment, excess low frequency noise dominated other sources of TES noise and severely degraded the iZIP energy resolution. As shown in figure 4.1, the long-trace readout probes low frequencies, which will allow us to filter this noise if it exists. Optimally filtering these waveforms is discussed in much further detail in chapter 5.

The DAQ system also plays an important role in the high statistics calibration runs.
These runs, accomplished with different radioactive sources, are critical to understanding the energy signature of background particles and to ensuring that a background event will never be mistaken for a WIMP. We discuss this and other fundamental features of the DAQ system below.

4.1 The DCRCs

The DCRC is the fundamental piece of DAQ hardware, and these single compact boards have replaced the Front End Boards (FEBs), Readout Trigger Filter (RTF) boards, and bulky digitizers from previous generations of CDMS.

Figure 4.1: Cartoon showing the proposed phonon waveform readout and the ability to monitor low frequency noise. Hybrid waveform readout is discussed in chapter 5.

The DCRCs continuously digitize the analog signal from the 12 phonon and 4 charge channels at 2.5 MHz (charge) and 625 kHz (phonon) and write each channel to a circular memory buffer. This memory buffer is large enough to store several seconds' worth of data bins.² The digitizer on the DCRC does a 14-bit analog to digital conversion.

Figure 4.2: A Rev C DCRC. The 50-pin connector interfaces with the bias and sensor lines to the detector. The board receives power over the ethernet and also communicates with the DAQ through the ethernet port.

The DCRC is also responsible for determining where triggers occur within the circular buffer and recording the time stamps. Trigger decisions are then made on the data in the circular buffer, and triggers are read out before the data are overwritten (discussed further in section 4.2).

A Field Programmable Gate Array (FPGA) on the DCRC allows for a significantly fancier SuperCDMS SNOLAB trigger algorithm than was possible with the previous electronics.

² The Rev C version DCRC has a 3.3 second buffer, but the buffer capacity will be larger on later versions.
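The continuous digitization and trigger time-stamping described above can be sketched in miniature. This Python toy stands in for the DCRC firmware (which actually runs on the FPGA and is not Python); the buffer depth, threshold value, and simple rising-edge condition are illustrative choices, and all names are ours.

```python
class CircularDigitizer:
    """Toy model of the DCRC scheme: samples overwrite a fixed-size ring
    buffer while threshold-crossing time stamps are logged on the side."""
    def __init__(self, depth, threshold):
        self.buf = [0.0] * depth
        self.threshold = threshold
        self.n = 0                 # total samples digitized so far
        self.trigger_stamps = []   # analogous to the DCRC trigger buffer

    def digitize(self, sample):
        prev = self.buf[(self.n - 1) % len(self.buf)] if self.n else 0.0
        self.buf[self.n % len(self.buf)] = sample
        # trigger on an upward excursion of the ADC value past threshold
        if prev < self.threshold <= sample:
            self.trigger_stamps.append(self.n)
        self.n += 1

    def read_out(self, stamp, pre, post):
        """Return the samples around a trigger, if not yet overwritten."""
        if self.n - (stamp - pre) > len(self.buf):
            return None            # too late: the data were overwritten
        return [self.buf[i % len(self.buf)]
                for i in range(stamp - pre, stamp + post)]

daq = CircularDigitizer(depth=1000, threshold=5.0)
stream = [0.0] * 500 + [8.0, 6.0, 3.0] + [0.0] * 200   # one pulse at sample 500
for s in stream:
    daq.digitize(s)
print(daq.trigger_stamps)                  # [500]
print(daq.read_out(500, pre=2, post=3))    # [0.0, 0.0, 8.0, 6.0, 3.0]
```

The key property mirrored here is deadtime-less operation: digitization never pauses, so a trigger is lost only if it is not read out before the ring buffer wraps around.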
The standard trigger algorithm is to look for the rising edge of pulses by setting a threshold and triggering on excursions of the ADC value above the baseline that exceed the threshold. In experiments seeking to lower thresholds and increase sensitivity to low energy events, there are clear disadvantages to this simple algorithm. The DAQ system will improve the trigger by integrating into the algorithm information about the noise characteristics and the expected pulse shape. The computational speed permitted by the FPGA allows a finite impulse response (FIR) filter to be continuously applied to some finite length of bins in the circular buffer, from which a pulse amplitude can be estimated. The FIR coefficients will be computed based on the noise power spectral density (PSD) and the pulse shape.

Optimal coefficients for the FIR are given by the optimal filter time domain weights. While optimal filters are generally thought of in the frequency domain, they can also be computed more efficiently (O(N) vs. O(N log N)) in the time domain after first inverting a matrix. As will be discussed in much more depth, equation 5.14 is an example of such a time domain calculation.

4.2 MIDAS and the Software Front Ends

Immediately downstream of the DCRCs are the software front ends, which are written using the MIDAS (Maximum Integrated Data Acquisition System) package. The front end code is in C++, but the open source MIDAS package comes with a vast array of desirable DAQ features already built in [35].

The MIDAS front ends interact with the circular buffer and trigger stamp buffer on the DCRCs and decide which of these data get written to disk. The cartoon in figure 4.3 displays this interaction of the front ends with one DCRC, but what it leaves out is that the front ends will be making decisions based on, and interacting with, data from an entire tower (i.e. 6 DCRCs).

In the first step, delineated by (1) in the figure, the trigger front end program reads in the time stamps from the trigger buffer.
It decides which triggers to read out based on the time stamps and on which iZIP the trigger occurs. The most basic of these decisions will reject triggers that occur within some short (and adjustable) time of each other. These piled-up events spoil each other because the event energy, position, and timing algorithms in the downstream analysis depend heavily on pulse shape. Figure 4.4 (left) shows two piled-up events.

Figure 4.3: A cartoon roughly depicting the communication between the trigger front end, tower front end, the DCRC's trigger buffer, and the DCRC's circular buffer. The hypothetical data stream shows triggers labeled with yellow arrows. The time stamps of the triggers are continuously recorded in the trigger buffer and periodically read out by the trigger front end. The triggers with yellow arrows too close together would be rejected as piled-up, while the other triggers would be read by the tower front end from the circular buffer.

The trigger front end program then transfers the list of time stamps of triggers that have passed the decision algorithms to a second front end, called the tower front end. In the second step, delineated by (2), the tower front end accesses the DCRC circular buffer and reads out a predetermined (and adjustable) number of bins around all of the triggers. These waveforms are then written to disk.

The trigger front end and tower front end communicate with each other in a loop, and communicate with the DCRC when new triggers have been recorded, such that good triggers are never lost. In one iteration of this loop, the process of reading from the circular buffer takes significantly longer than all the other steps combined.

4.3 The DCRC Driver

The most interesting and powerful aspects of the DCRC and MIDAS relate to the detector readout discussed above, but the DCRC and MIDAS are also responsible for detector settings control. The settings fit into 6 general categories: Phonon, Charge, LED, Test Signal, Trigger, and General.
Each category holds individual settings; for example, in the Charge category, the user can set the voltage bias to be applied across the detector, and set the DC voltage offset of the signal arriving from the charge amplifier. The "DCRC driver" is the fundamental program that handles the settings control. A section of the DCRC's hard memory is reserved for these settings, which are organized into registers. Some settings that require a large dynamic range (i.e. many bits) take up multiple registers. Conversely, because some settings are booleans or a small number of bits, some registers hold many more than one setting. The MIDAS "Online Database" (ODB) offers a convenient layout of settings for automatic or manual setting adjustment. The ODB is also the primary means of communication between the user and the DCRC, which is done through the DCRC driver.

The DCRC driver program is written as a MIDAS process. It should be running whenever the trigger front end and tower front end (described in section 4.2) are running. The DCRC driver constantly monitors the variables in the 6 categories in the ODB, checking for any change to occur in any of the settings.

The consistent monitoring of variables is done via the MIDAS "hotlink" functionality, which links any change in a specific ODB variable to a specific function call [35].
I have written the linked functions, which to first order:

• check that the setting has been changed to an allowed value
  – some settings have an allowable range (the detector bias range is −2V → +2V), while others have discrete allowed values (the driver gain must be 1, 2, 4, or 8)
• identify and recompute the register corresponding to the setting that has been changed
• through a TCP/IP connection to the DCRC, write the new register value to the DCRC
• if the register is successfully written, update a special section of the ODB (which we label the "Readback" section), which contains all the current DCRC settings
  – note that this step is necessary because, due to our design choices, the normal section of the ODB in which the user changes settings does not necessarily reflect the current settings on the DCRC
  – i.e. if a setting is changed to a disallowed value and therefore rejected, or the write to the DCRC fails, this will not affect the normal section of the ODB

Through these steps the DCRC driver successfully propagates any changes in the ODB to the DCRC registers. The DCRC itself then subsequently changes the settings on the detector or on components on the DCRC board.

4.4 Detector Testing Tools

The MIDAS "analyzer" class puts in place a convenient means of real-time viewing of waveforms as MIDAS takes them from the tower front end local memory and writes them to disk. Internal to MIDAS there is a separate data buffer, the SYSTEM buffer, through which the writing to hard disk occurs [35]. The SYSTEM buffer is stored in Random Access Memory (RAM), and therefore the analyzer class takes advantage of fast access and reading of the data as it moves through this buffer.

Figure 4.4: (Left) Two iZIP phonon pulses that are piled-up. (Right) The rate of usable (non-piled-up) events vs. the raw event rate.
For this plot a 52 ms long trace has been assumed (since this is the proposed data acquisition trace length), where an optimal raw rate is ∼20 Hz, giving a ∼7 Hz usable rate.

GUIs are being written to display the waveforms that are read out, and one of these is shown in figure 4.5. As the GUI displays replace physical oscilloscopes for displaying data, the freedom to cater the GUI design to the needs of test facilities presents significant advantages. This is especially true because SuperCDMS SNOLAB will scale up the number of iZIP/HV detectors substantially (15 to ∼50) and the number of phonon channels by an even larger fraction (120 to ∼600). Each phonon channel, whether for testing or for WIMP search, must be tuned before it becomes operable. Because manual detector tuning is relatively simple but time consuming, the MIDAS GUIs will be used to automate this laborious aspect of test facility running.

Currently GUIs are being written to accomplish these automation tasks, namely the SQUID tuning, "IBIS," and "Complex Impedance" tools. Detailed discussion of these tuning procedures is omitted; however, figure 4.6 shows one of the GUIs already written for user-friendly tuning of the phonon amplifier components.

Figure 4.5: A screen shot of the Pulse Display displaying the PSD of two charge channels in real time. Behind the scenes, the time domain traces are grabbed from the SYSTEM buffer, Fourier transformed, and displayed on the screen.

4.4.1 The Pulse Display

I wrote the Pulse Display GUI (a screen shot of which is shown in figure 4.5), which displays waveforms in the time and frequency domain with oscilloscope-like user control. This is a critical, fundamental detector testing tool for test facilities.

The GUI is built using ROOT GUI classes. All of the widgets shown in figure 4.5 are objects of the numerous ROOT GUI classes (e.g. TGTextButton, TGNumberEntry, TGSlider, TGTextEntry, etc.) [36]. The
object-oriented C++ framework on which ROOT is based is central to the GUI’s functionality. The layout of the GUI is organized in the GuiMainFrame class, where the frames, buttons, sliders, check boxes, etc. are given colors, sizes, positions, etc.

An instance of the GuiMainFrame class is created in a separate class, PulseDisplay, which handles the functionality of the buttons. Similar to the MIDAS “hotlink” functionality described above, function calls are linked upon the pressing of a button. A brief description of the buttons is now given, roughly from top to bottom on the GUI.

With the ‘Detector’ button, the user selects which DCRC to see data from. The ‘Display Fraction’ slider selects the fraction of events to display on the screen, and the selection is indicated in the number entry to the left of the slider. The necessity of the ‘Display Fraction’ option is discussed below in regard to the throughput capabilities of the GUI.

From the ‘Running Mode’ frame, the user has the option to put a restriction on the number of waveforms to be gathered and displayed. In ‘Free Running’ mode (the default) there is no restriction, but with the ‘Process N Waveforms’ option selected the number entry becomes enabled, allowing the user to choose some number of waveforms to display (default 1000) before stopping the display.

The ‘Start Run’ frame holds some of the most important features, because the main button starts a MIDAS run and starts displaying data. Note that the frontends (triggerfe2, towerfe3) must be running for data to be displayed, because the data must be moving through the SYSTEM buffer. If the frontends are not running, no traces will be displayed, but the GUI will not hang. Once the button is pressed the text turns to ‘Stop Run,’ and clicking it again stops displaying data and stops the MIDAS run. The ‘Listen’ button starts displaying data by “listening in” to an already ongoing run.
It therefore requires that the frontends are running and that a run has been started. Once the button is pressed the text turns to “Stop Listen.” Clicking it again stops displaying data but leaves the run going.

The ‘Read File Options’ frame offers some less-used features. It initiates a popup window in which the user can select the directory and file name of the MIDAS file to read. These MIDAS files contain data that has been taken in the past. The reader expects .mid.gz files (the typical files generated by the MIDAS logger). Clicking the ‘Read’ button starts reading the file. Clicking the right arrow key on the main GUI window iterates waveform-by-waveform through the file.

With the ‘Channel Selection’ frame the user selects the channel to display. Currently, there is no way to display charge and phonon channels simultaneously because of the different sample rates of the charge and phonon waveforms.

Pressing the ‘Capture’ button captures the currently displayed trace or PSD and displays it on the screen. This button also enables the two arrows, allowing the user to backtrack over traces/PSDs (in case the user didn’t press ‘Capture’ sufficiently quickly to see an interesting trace). There is a 10 trace/PSD buffer implemented, and the number entry “N/9” indicates where the user is in the buffer. When in ‘capture mode,’ clicking the ‘Save’ button saves the displayed trace to a .root file in the home directory of the Pulse Display.

In the ‘Display Options’ frame, with the ‘Trace’ option selected (the default), raw traces are displayed. With the ‘PSD’ option, the power spectral density of the trace is computed by performing an FFT on the trace and taking the magnitude. Within either of these modes, the ‘Do Running Average’ button can be checked, in which case a running average of either the traces or the PSDs is computed as the data comes in, and is displayed.
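A running average of this kind can be sketched with a fixed-length buffer of incoming traces; this is a minimal illustration (the class and its methods are mine, not the GUI's actual code):

```python
from collections import deque
import numpy as np

class RunningAverage:
    """Streaming average over the most recent `size` traces (or PSDs)."""
    def __init__(self, size=10):
        self.buf = deque(maxlen=size)       # old traces fall off automatically

    def update(self, trace):
        """Add a newly arrived trace and return the current running average."""
        self.buf.append(np.asarray(trace, dtype=float))
        return sum(self.buf) / len(self.buf)

avg = RunningAverage(size=4)
for i in range(6):                          # traces arrive as the data streams in
    current = avg.update(np.full(8, float(i)))
print(current[0])                           # average of the last 4 traces (2,3,4,5): 3.5
```

Because the buffer holds the raw traces rather than only the accumulated sum, the buffer length can be changed on the fly without discarding the history.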
The buffer size of the running average can be changed at any time, without having to start from scratch in computing the average.

Clicking ‘Baseline Subtract’ activates a popup window. With both ‘Subtract Baseline’ and ‘Auto’ checked (both buttons are part of the popup window, and therefore not shown in the figure), a running average of the prepulse baseline mean is computed and this amount is subtracted from the entire pulse. The ‘Auto’ check box can be un-checked, which stops the computing of the running average, but whatever value exists in the running average is still subtracted from the pulse. This functionality is important in case someone wants to adjust the DC offset in the ODB and see the change on the Pulse Display. The slider also allows shifting of the trace/PSD.

In the ‘Y Axis’ frame, the different buttons control the ADC Range, which is also controlled by the slider parallel to the Y axis. The ‘Center ADC’ entry displays and controls the ADC value of the center of the Y axis, which is likewise controlled by the slider parallel to the Y axis.

Last, in the ‘Auto Scale’ frame, selecting a channel to autoscale and pressing the ‘AutoScale’ button scales the Y-axis to the selected channel.

Behind the scenes of the Pulse Display, it is critical to interface with the MIDAS “analyzer” class efficiently in order for the Pulse Display to operate like an oscilloscope. If significant lag exists between when traces are digitized and when traces are displayed on the screen, the user will be looking at old data. Of course, this lag will typically grow in time, and it is therefore important to identify bottlenecks in the data stream. First, when grabbing traces from the SYSTEM buffer, the code uses the “GET RECENT” option so that the most recent data from the buffer is obtained [35].
Second, even the smallest memory leaks must be eliminated because, with continuous allocation and deallocation of memory for the traces, any memory leak will continue to grow until the machine grinds to a halt. Third, a bottleneck exists between the ROOT GUI classes and the display to the monitor. Because display of traces at any rate greater than 60 Hz is not generally differentiable from 60 Hz to the human eye, the ‘Display Fraction’ option allows the user to adjust the fraction of displayed events, widening the bottleneck and ensuring that no lag occurs.

Figure 4.6: A screen shot of one of the existing detector control GUIs, written primarily by A. Roberts. The detector settings (the values in red) are controlled by the user. Internal CDMS figure, used with permission.

4.5 Readout Modes

Calibration runs involve higher event rates than WIMP search runs, and we therefore design a DAQ system to handle this throughput. If we read out calibration data in the same manner as WIMP search data, the throughput would be unmanageable. Here we describe our intelligent DAQ system, which collects the calibration data quickly, without compromising dark matter search time and while avoiding unmanageable data throughput.

In this effort we use different readout methods for the different calibration runs and WIMP search runs. In WIMP search mode (low background), because the event rate is so low, we plan to read out every detector when any trigger is issued from any detector. In addition, the pileup rejection within the trigger front end will be turned off.

When exposing the detectors to a gamma (Barium) or neutron (Californium) source, the DAQ is put into a “selective” readout mode where, for a given trigger, the only detector that is read out is the one that has issued the trigger.
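The benefit of limiting readout can be quantified with the pile-up trade-off plotted in fig. 4.4 (right); a short sketch, assuming Poisson-distributed event arrivals and the proposed 52.4288 ms trace length:

```python
import numpy as np

T = 0.0524288                  # proposed trace length (s)

def usable_rate(R):
    """Rate of pile-up-free events: an event is usable only if no other
    (Poisson-distributed) event lands within one trace length of it,
    which happens with probability e^(-R*T)."""
    return R * np.exp(-R * T)

R = np.linspace(0.1, 100.0, 100001)          # candidate raw trigger rates (Hz)
best = R[np.argmax(usable_rate(R))]
print(round(float(best), 2), round(float(usable_rate(best)), 2))
```

The maximum of R·e^(−RT) sits at R = 1/T ≈ 19.07 Hz, giving a usable rate of ≈ 7.02 Hz: the ∼20 Hz raw and ∼7 Hz usable rates quoted in the caption of fig. 4.4.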
Figure 4.4 (right) demonstrates part of the importance of “selective readout.” The plotted function is f(R) = R·e^(−R·52 ms), which is the rate of non-piled-up events in the detector given a raw rate of R events. This is derived by noting that the probability that a random event (a Poisson process) will occur at least ∆T = 52 ms after a previous event is given by e^(−R·52 ms). Therefore, if we are to calibrate our detectors at an optimal rate, we should create a raw rate of 20 Hz, giving a usable rate of 7 Hz. With this relatively high rate (reading out ∼1/3 of the total data stream coming from each detector) it is clearly important that “selective readout” be implemented, and only the detector issuing the trigger be read out.

Chapter 5

The Hybrid Optimal Filter

At SNOLAB it is proposed that we read out phonon and charge traces using two different sampling rates. We refer to this as ‘hybrid sampling,’ and fig. 5.1 shows the proposed sampling scheme. The slower sampled regions are generated by averaging every 16 samples of the raw digitized waveform in those regions (the data is hexadecimated). We do this because we require high resolution pulse information, long traces (for measuring low frequency noise), and reasonable data throughput; hybrid sampled traces allow us to meet these otherwise conflicting requirements. The charge/phonon pulse is captured largely within the fast sampled region, which probes the smaller time scales of the pulse, while the long time length of the traces probes frequencies down to ∼19 Hz. Thus, the total data volume is suppressed by a factor of ∼16 without losing important information.

In this chapter we show how to compute a hybrid optimal filter (HOF) efficiently, and demonstrate its good performance relative to 1D OFs that require more data.
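The hexadecimation just described can be sketched as follows; the block layout (a 32768-sample raw phonon trace with a central 1024-sample fast window) is taken from the phonon scheme detailed later in this chapter, and the function name is mine:

```python
import numpy as np

PRE, FAST, BLOCK = 16384, 1024, 16   # samples before the fast window; fast window; averaging factor

def hexadecimate(raw):
    """Average every 16 samples outside the central fast-sampled window."""
    pre  = raw[:PRE].reshape(-1, BLOCK).mean(axis=1)           # 1024 averaged (slow) samples
    fast = raw[PRE:PRE + FAST]                                 # 1024 fast samples, kept as-is
    post = raw[PRE + FAST:].reshape(-1, BLOCK).mean(axis=1)    # 960 averaged (slow) samples
    return np.concatenate([pre, fast, post])

hybrid = hexadecimate(np.random.randn(32768))
print(hybrid.size)                    # 3008 hybrid samples from 32768 raw samples
```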
Additionally, we will use Jeff Filippini’s notation from Appendix A of his thesis, where A is the template, S is the signal, and J is the noise PSD [37].

The central complication of optimal filtering hybrid waveforms is that the frequency domain no longer exhibits the convenient properties assumed for a 1D optimal filter. Namely, Gaussian random noise that is non-uniformly sampled in the time domain does not have independent frequency components. Power aliasing is the primary effect at work in ruining the orthogonality between the Fourier modes: the process of averaging the fast-sampled time domain data aliases higher frequency components into the slow-sampled data, as described in sec. 5.2. This introduces correlations between different frequencies, which are represented by off-diagonal elements in the covariance matrix. We include these frequency correlations in the HOF pulse amplitude fit, where the variance of the noise components, the PSD J in eq. 5.1,

χ²(a) = Σ_n |S̃_n − a·Ã_n|² / J_n    (5.1)

is replaced by the full covariance matrix Ṽ:

χ²(a) = Σ_{i,j} (S̃_i − a·Ã_i) (Ṽ⁻¹)_ij (S̃_j − a·Ã_j).    (5.2)

In this notation, the n, i, and j subscripts represent the Fourier indices, and Ã and S̃ are the Fourier transforms of the template and signal, respectively. The amplitude of the signal, the value being fitted for, is given by a.

Figure 5.1: Sampling scheme for phonon and charge traces. Roughly to scale.

Section 5.1 presents an interpretation of the frequency domain of the hybrid waveforms. We note, however, that the non-diagonality of noise in the frequency domain makes it no more preferable than the time domain when evaluating eq. 5.2. Removing the tildes and using the time domain V is equivalent and is slightly more efficient (discussed in sec. 5.5). We still include sec.
5.1 to build some intuition for the HOF in terms of 1D optimal filters, to mention some unsuccessful attempts to use non-uniformly sampled Fourier techniques to solve this problem in the framework of 1D optimal filters, and to clarify points such as the appropriate summation bounds of eq. 5.2.

We devote sec. 5.4 to discussion of the covariance matrix, V. In particular we address issues that arise from its large dimension, which is equal to the number of samples in the hybrid trace (phonon: 3008; charge: 9152). We describe the most numerically stable and efficient method of determining and inverting matrices of this size and type (symmetric, positive definite).

Phonon   R (Hz)     N      T (ms)    ∆f (Hz)    ∆t (ms)   fNyq (Hz)
fast     625000     1024   1.6384    610.3515   0.0016    312500
slow     39062.5    2048   52.4288   19.0735    0.0256    19531.25

Table 5.1: Important time and frequency values for hybrid phonon waveforms.

Most notably, we found that instead of building V from many hybrid sampled random trigger noise traces, V should be determined from noise traces that are un-averaged and read out uniformly at the fast sample rate. Doing so will save live-time and will also reduce the overall data size required to reliably determine and invert V.

In my results in sec. 5.6, I show that going through the process of determining and inverting the covariance matrix pays off in the pulse amplitude resolution. We also show that the time offset estimation does not suffer from this readout scheme. We compare the pulse amplitude estimation performance of this hybrid optimal filter with that of other 1D optimal filters under different Monte Carlo generated noise conditions (e.g. white, 1/f + white, δ + white).
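The derived columns of tables 5.1 and 5.2 follow directly from the sample rate R and sample count N; a quick consistency check (the helper function is mine):

```python
def trace_params(R, N):
    """Span, frequency resolution, time step, and Nyquist frequency for
    N uniformly spaced samples taken at rate R (Hz)."""
    T = N / R
    return {"T (ms)": 1e3 * T, "df (Hz)": 1.0 / T,
            "dt (ms)": 1e3 / R, "fNyq (Hz)": R / 2.0}

print(trace_params(625000.0, 1024))   # phonon fast row of table 5.1
print(trace_params(39062.5, 2048))    # phonon slow row: T = 52.4288 ms, df = 19.0735 Hz
```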
We show that the HOF performs just as well as a regular OF that uses the 52.4288 ms non-averaged waveforms, and the HOF therefore serves its purpose of filtering low-frequency noise while reducing our data size by a factor of ∼16.

5.1 Fourier Transform of a Hybrid Trace

Throughout this section we will primarily discuss phonon traces, noting that the hybrid charge traces can be treated analogously.

First, it is useful to identify the important time and frequency values that are characteristic of the hybrid trace. To determine these characteristic values, it is helpful to consider the hybrid trace as two separate uniformly sampled traces. The fast sampled trace consists of just the central high frequency data; the slow sampled trace consists of all the bins of the hybrid trace with the high frequency points averaged by a factor of 16.

Tables 5.1 and 5.2 note the sample rate (R), number of samples (N), time span (T), frequency resolution (∆f), time resolution (∆t), and Nyquist frequency (fNyq).

Charge   R (Hz)     N      T (ms)    ∆f (Hz)   ∆t (ms)   fNyq (Hz)
fast     2500000    1024   0.4096    2441.41   0.0004    1250000
slow     156250     8192   52.4288   19.0735   0.0064    78125

Table 5.2: Important time and frequency values for hybrid charge waveforms.

Fig. 5.2 roughly displays the frequency domain spacing and range probed by the fast and slow sampled trace for the phonon waveform case, although there is a frequency range (610 Hz < f < 19531.25 Hz) of overlap in which both data sets are sensitive.

Figure 5.2: Fourier coefficient spacing for phonon waveforms.

5.2 Power Aliasing

We initially tried to determine the Fourier coefficients by ‘pasting’ the FFT of the fast sampled data onto the FFT of the slow sampled data. As can be seen in table 5.1, the slow sampled data provides most of the low frequency information down to ∼19 Hz and up to 19531.25 Hz, in steps of ∼19 Hz. The fast sampled data provides all of the high frequency information up to 312500 Hz.
For the lower frequencies in the region of overlap, f < 19531.25 Hz, we used only the slow sampled FFT values because of the superior frequency resolution of this data.

This method failed because of the problem of spectral power aliasing in the Fourier coefficients of the slower sampled data. Briefly, by sampling a signal at a uniform frequency f, you lose information about any frequency components of the signal exceeding f/2 = fNyq, the Nyquist frequency. The spectral power from frequencies exceeding fNyq is instead aliased into the frequency range (0, fNyq). This of course spoils the spectrum information below fNyq, and it is therefore critical to sample any signal at twice the frequency of the highest noise frequency component. Our fast sampling rate (phonon: 625 kHz; charge: 2.5 MHz) meets this requirement after the data goes through an upstream low-pass filter (an analog ‘anti-aliasing’ filter preceding the analog-to-digital conversion). The source of the aliasing is the low Nyquist frequency of the slow sampled data (phonon: fNyq,slow = 19531.25 Hz; charge: fNyq,slow = 78125 Hz), where the averaging process aliases power between (fNyq,slow, fNyq,fast) into the range (0, fNyq,slow).

Figure 5.3: A PSD, using the pasting method, of a hybrid phonon trace with certain injected frequencies. There are aliased peaks at f = 14.062 kHz and f = 17.186 kHz (discussed in the main text). The red line marks the folding point, fNyq,slow, at 19531 Hz.

The frequency spectrum in fig. 5.3 demonstrates the aliasing, and was generated using the pasting method on a hybrid phonon trace that consisted of five sinusoidal signals. The signals had frequencies 50 Hz, 200 Hz, 5 kHz, 25 kHz, 100 kHz, all of amplitude 1. The peak at ∼14 kHz is clearly due to aliasing and can be attributed to the true signal at 25 kHz mirrored around fNyq,slow: 19.531 − (25 − 19.531) = 14.062 kHz.
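This mirroring is the standard aliasing “folding” rule; a small sketch (the folding function is mine; Appendix B gives the full derivation) reproduces the aliased peaks of fig. 5.3:

```python
FS_SLOW = 39062.5              # slow phonon sampling rate (Hz); fNyq,slow = FS_SLOW / 2

def alias(f, fs):
    """Apparent frequency of a tone at f after uniform sampling at rate fs."""
    f = f % fs                 # fold into one sampling period
    return fs - f if f > fs / 2 else f

print(alias(25000.0, FS_SLOW))    # 14062.5 Hz: the aliased peak near 14 kHz
print(alias(100000.0, FS_SLOW))   # 17187.5 Hz: the aliased feature near 17 kHz
```

The exact folded values, 14.0625 kHz and 17.1875 kHz, agree with the 14.062 kHz and 17.186 kHz quoted for fig. 5.3 to the rounding used in the text.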
The smaller feature at ∼17 kHz is also due to aliasing and can be traced back to the true 100 kHz signal: 19.531 − (100 − 5 × 19.531) = 17.186 kHz. The smaller amplitude of this feature should not be surprising, because we expect at least some of the power at the higher frequencies to be ‘washed out’ by the averaging process. The derivation of where aliased frequencies occur is in Appendix B.

5.3 Fourier Transform without the FFT

There are many algorithms in existence for calculating non-uniform discrete Fourier transforms (NDFTs) for the many applications where you have data spaced randomly, or somewhat randomly, in the time domain. One can even compute a robust NUFFT (a non-uniform fast Fourier transform) in O(N log N), by diffusing your data onto an evenly spaced time domain grid, applying the FFT, and then dividing out the FFT of the diffusion. References [39] (the original paper) and [40] (Accelerating the Nonuniform Fast Fourier Transform) have solved this problem in two and three dimensions. The Lomb-Scargle periodogram [42] is another technique with which we experimented. However, neither of these NDFTs is designed to exploit the high degree of symmetry in our time domain data, and we therefore do not use them.

I do exploit these symmetries (i.e. only two different time domain spacings) with my own O(N) non-uniform discrete Fourier transform. However, most importantly, regardless of the performance of my or other NDFT techniques, because the frequency domain is not one in which the noise is diagonal, any frequency domain optimal filter fit that neglects correlations between frequency components will be suboptimal. Therefore, the content of this and the rest of sec. 5.3 is not explicitly used in my solution to the general problem, which is developed in sec. 5.5.

The discrete Fourier transform (DFT) of a time series g_k is given by:

g̃_n = (1/N) Σ_{k=−N/2}^{N/2−1} g_k e^(−2πi·f_n·t_k)    (5.3)

where g̃_n is evaluated for n = −N/2 → N/2 − 1 in integer steps.
The inverse DFT (IDFT) is

g_k = Σ_{n=−N/2}^{N/2−1} g̃_n e^(+2πi·f_n·t_k).    (5.4)

For a real-valued g_k there is a symmetry in the Fourier coefficients about the zero frequency:

g̃_n = g̃*_{−n},    (5.5)

which ensures that the numbers of independent points in the time and frequency domains are equal.

For uniformly sampled data at a rate R spanning a time T, f_n = n/T and t_k = k/R. With R × T = N data points, the argument of the exponent becomes −2πi·nk/N.

The values of f_n and t_k are not as simple for the hybrid trace, as will be seen in the following two sections. Due to this added complexity, it is impossible to explicitly write the hybrid DFT/IDFT as cleanly as eqs. 5.3 and 5.4. We instead use a matrix A to accomplish the IDFT, g = A·g̃, and its inverse to accomplish the DFT, g̃ = A⁻¹·g.

Determining f_n and t_k for the hybrid traces allows us to determine the elements of A.

5.3.1 Fourier Frequencies of a Hybrid Trace

In this section we elaborate on the Fourier frequency spacing that is roughly displayed in fig. 5.2.

It again helps to consider the Fourier frequencies of the two separate data regions: the fast sampled data and the derived slow sampled data. For the slow sampled data these are simply f_slow,n = n × 19.07349... Hz:

n_slow          0    1      2      3      ...   1021      1022      ...   1024
f_slow,n (Hz)   0    19.1   38.1   57.2   ...   19474.0   19493.1   ...   19531.25

The Fourier frequencies of the fast sampled data are simply f_fast,n = n × 610.3515625 Hz:

n_fast          0    1       2        ...   31        32         ...   512
f_fast,n (Hz)   0    610.4   1220.7   ...   18920.9   19531.25   ...   312500.0

The hybrid trace Fourier frequencies f_n should take the n_slow frequency values below the slow sampled Nyquist frequency. The slow sampled data loses sensitivity above n_slow = 1024, and therefore for f_1025 we should use n_fast = 33 at f = 20141.6 Hz, and we continue at the fast sampling Fourier frequencies up to n_fast = 512, f = 312500 Hz. The hybrid trace f_n are written out below:

n           0    1      ...   1023       1024       1025      ...   1504
f_n (Hz)    0    19.1   ...   19512.17   19531.25   20141.6   ...   312500.0
∆f_n (Hz)   –    19.1   ...   19.1       19.1       610.4     ...   610.4

Negative frequency coefficients are overdetermined from the symmetry of eq. A.1, and are therefore not computed. Counting real and imaginary components at each Fourier frequency, it appears that we transform N_time = 3008 time domain points to N_Fourier = 1505 × 2 = 3010 frequency domain points. This issue is resolved by noticing that the imaginary Fourier coefficients of the zero frequency, f_0, and of the Nyquist frequency, f_1504, must be zero. This proof is given in appendix A.

5.3.2 Time Points of a Hybrid Trace

It is also important to identify the t_k at which to evaluate the hybrid trace Fourier transform. One option is to use the t_k at the 3008 time points at which the hybrid waveform is given:

t_k = k(0.0256) ms                                               0 < k ≤ 1024
      (k − 1024)(0.0016) + (1024)(0.0256) ms                     1024 < k ≤ 2048
      (k − 2048)(0.0256) + (1024)(0.0016) + (1024)(0.0256) ms    2048 < k ≤ 3008
                                                                 (5.6)

However, this choice is suboptimal because the slow sampled portion of the waveform is obtained by averaging 16 time points, not by keeping 1 time point.

For 0 < k ≤ 1024 and 2048 < k ≤ 3008, the average of the sine or cosine function over 16 time points should be used. For example, the inverse Fourier transform in eq. 5.4 becomes

g_k = Σ_{n=−N/2}^{N/2−1} g̃_n (1/16) Σ_{j=0}^{15} exp[2πi·f_n·(t_k + jδ)]    0 < k ≤ 1024
      Σ_{n=−N/2}^{N/2−1} g̃_n exp[2πi·f_n·t_k]                               1024 < k ≤ 2048
      Σ_{n=−N/2}^{N/2−1} g̃_n (1/16) Σ_{j=0}^{15} exp[2πi·f_n·(t_k + jδ)]    2048 < k ≤ 3008
                                                                             (5.7)

where the t_k values are given by eq. 5.6, δ = 0.0016 ms is one of the fast time steps, and the f_n are at the spacing discussed in sec. 5.3.1.

5.3.3 The Transformation Matrix: A

In the interest of using an entirely real transformation matrix, we transform real and imaginary Fourier coefficients independently. Additionally, recall that we need not compute the g̃_n for negative frequencies (since they are overdetermined by eq. A.1). Last, recall that the imaginary components of the zero and Nyquist frequencies must be zero (proof in Ap.
A).

Therefore, A⁻¹ transforms the 3008 time domain points into 3008 cosine and sine coefficients. We use S̃_i to describe these coefficients in the 3008 dimension non-orthogonal frequency domain. We write out below how we have organized our S̃ vector in terms of the g̃_n that you would associate with the standard Fourier transform, but using the unconventional frequency spacing and time averaging.

S̃     S̃_0        S̃_1           S̃_4        S̃_5        ...   S̃_3006        S̃_3007
g̃_n   Re[g̃_0]   Re[g̃_1504]   Re[g̃_2]   Im[g̃_2]   ...   Re[g̃_1503]   Im[g̃_1503]

Notice that for convenience we place Re[g̃_1504], the real part of the Nyquist coefficient, as the second element of S̃.

We show the third and fourth columns of A below. The entries of the third column, running down the 3008 rows (indexed by k), are

(1/16) Σ_{n=0}^{15} cos(ω_1(t_k + nδ))    0 ≤ k ≤ 1023
cos(ω_1·t_k)                              1024 ≤ k ≤ 2047
(1/16) Σ_{n=0}^{15} cos(ω_1(t_k + nδ))    2048 ≤ k ≤ 3007

and the fourth column is identical with the cosines replaced by sines.

Clearly the lower and upper bounds of i and j must be 0 and 3007, respectively, in the general pulse amplitude χ² of eq. 5.2.

5.4 Determining and Inverting the Covariance Matrix

The covariance matrix in the HOF plays the role of the PSD J in the 1D OF, and just as J is computed from many noise traces (J_i = ⟨g̃_i²⟩), the frequency domain covariance matrix Ṽ is given by the average of the outer product of noise traces:

Ṽ_ij = E[(g̃_i − ⟨g̃_i⟩)(g̃_j − ⟨g̃_j⟩)] = (1/M) Σ_{m=1}^{M} g̃_{m,i}·g̃_{m,j}    (5.8)

for M noise traces, where the g̃_i coefficients have been determined according to sec. 5.3. Here it is also assumed that ⟨g̃_i⟩ = 0. The time domain covariance matrix is equivalently computed by using the time domain g_i data.

          Hybrid OFs               sec. 5.4.2 V   1D OF (32768 points)
M/d       3       7       10       –              –
σ_â       6.4     5.8     5.6      5.7            5.5

Table 5.3: The baseline resolution of the HOF improves as M/d increases up to ∼7. We simulated 1/f + white noise and 1000 phonon pulses (discussed further in sec. 5.6.1). The HOF approaches the limit of its best possible resolution, which is the resolution of the 1D uniform OF applied to the full 625 kHz, 32768 point trace. We also show the resolution obtained by determining the covariance matrix as described in sec. 5.4.2, which shows good performance.

Eq. 5.8 is only an estimate of Ṽ, which we denote Ṽ′, and increasing M brings the estimate closer to the true covariance matrix. Inconveniently, we found that the number of noise traces required to reliably determine Ṽ is:

M/d ≳ 7,    (5.9)

where dim(Ṽ) = d. M is surprisingly large in comparison to uniformly sampled data, where there were d independent, diagonal noise components. Ṽ is symmetric but has d²/2 + d/2 independent elements, and therefore measuring the noise requires significantly more data and effort.

The particular ratio of eq. 5.9 was determined and verified in multiple ways. First, because all covariance matrices are symmetric and positive definite (i.e. have positive eigenvalues), negative eigenvalues of Ṽ′ are a clear indicator that Ṽ has been poorly estimated, generally due to statistical fluctuations in the noise. Increasing M makes all eigenvalues positive. Second, because the true Ṽ will give the best possible pulse amplitude estimation resolution, increasing M up to M ≈ d × 7 and observing only small amplitude estimation resolution improvement for larger M justified the condition of eq. 5.9. Table 5.3 shows this baseline resolution improvement for simulated phonon pulses (the method of obtaining these resolutions is discussed in sec. 5.6.1).

5.4.1 The Ratio M/d

A matrix’s eigenvalues are indicators of its invertibility.
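The rank-versus-M relationship discussed in this subsection is easy to reproduce numerically; a toy sketch (the dimension and trace counts are illustrative, not the thesis values):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64                                     # toy dimension (the real problem has d = 3008)
for M in (16, 32, 64, 128, 448):           # number of noise traces used to build V'
    traces = rng.standard_normal((M, d))   # white-noise stand-ins for noise traces
    V = traces.T @ traces / M              # sample covariance estimate V'
    print(M, np.linalg.matrix_rank(V))     # rank ~= min(M, d): R ~= M while M < d
```

The estimate only becomes full rank (and hence invertible) once M exceeds d, and its eigenvalues continue to stabilize as M/d grows.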
The existence of a zero eigenvalue of course indicates a singular (non-invertible) matrix. Given a nearly singular matrix as input, eigensolvers will compute a number of very small (e.g. 10⁻¹⁵) and a number of very large eigenvalues. The eigenvectors with very small eigenvalues make up the nullspace of the matrix, which is the space into which the matrix will never transform any vector; a matrix that has a nullspace is non-invertible. The rank of a matrix, R, is the dimension of the matrix, d, minus the dimension of the nullspace.

In the process of trying to invert V′ (the estimate of V), looking at its eigenvalues showed a clear linear relationship between R and the number of noise traces used to build the matrix, M. The rank approximately equaled the number of noise traces, R ≈ M, and this correlation was strongest for values of M much less than d.

To explain this relationship, it is important to note that every data trace can be represented as a linear combination of the true eigenvectors of the true underlying covariance matrix. There is nothing particularly special about V here: every data trace lies in a d dimensional space, and therefore any invertible d dimensional matrix will have d orthogonal eigenvectors with non-zero eigenvalues that span the d dimensional vector space. Additionally, every data trace that has been added to V′ can be represented as a linear combination of its eigenvectors. From this, it would be impossible to invert V′ using fewer than d+1 traces.

As traces are added to build V′, they probe more dimensions of the full space, which is reflected in the matrix’s eigenvalues. The observation that R ≈ M indicates that each new trace probes one additional dimension. It might therefore be surprising that we found that M/d ≳ 7 is required for reliably determining V, instead of M/d ≈ 1. First, we observed the R ≈ M relationship weaken as M approached d, and V′ remained singular until 1 ≲
M/d < 2. Second, once M was increased so that V was invertible, the baseline resolution of the HOF improved for larger M/d. Table 5.3 displays these results, and we report the approximate condition M/d ≳ 7 because of the only small improvement in resolution for M/d ≈ 10, which approaches the limit of the best possible resolution.

These findings indicate that it is more difficult to reliably probe the final dimensions of the space. Consider an analogy with 3 randomly chosen points in 3D space. Most likely these 3 points can represent, by linear combination, any point in the 3D space; this will be true provided that one of the points is not a linear combination of the other two (i.e. does not lie on the line determined by the other two points).³ Therefore, while in principle only d+1 data traces are required to probe d dimensions and reliably estimate V, as more noise traces are acquired it becomes more likely that the newest trace lies close to a ‘plane’ established by (i.e. can be expressed as a linear combination of) the previous noise traces. This gives some sense for why M/d ≳ 7 is required to reliably estimate the eigenvectors/eigenvalues of V, and to reliably invert V′.

Due to the impractical acquisition of ∼21000 phonon and ∼63000 charge noise traces over the course of a series, we instead determine Ṽ from ∼1000 un-averaged noise traces, as described next. Un-averaged noise traces will also reduce the total data throughput, as shown in the following section.

5.4.2 Robust and Efficient Computation of the Covariance Matrix

For simplicity we discuss only the phonon waveforms in this section, noting that charge waveforms can be treated analogously; the dimension of the charge problem is just larger. Note that an un-averaged phonon waveform is 32768 samples long, and therefore has 32768 independent Fourier coefficients.
We will use β to describe these Fourier coefficients, which are the coefficients of the (orthogonal) sines and cosines at the standard evenly spaced frequencies for uniform Fourier transforms, and we use Ṽ_β to denote the covariance matrix of this data. All the convenience of this method comes from the fact that we can assume that Ṽ_β is diagonal. While this is an approximation (non-stationary noise introduces off-diagonal elements), Ṽ_β’s diagonality is an assumption implicitly made when applying a 1D optimal filter to uniformly sampled data.⁴ Here we ignore the off-diagonal elements, and we use FFT technology to compute the diagonal components:

Ṽ_β,ii = ⟨β_i²⟩.    (5.10)

Recall that we use S̃ to describe the Fourier coefficients in the 3008 dimension non-orthogonal frequency domain. We know that there is a linear transformation between these two domains, accomplished by a 3008 × 32768 matrix, which we denote G. We then propagate the uncertainty from this covariance matrix, Ṽ_β, to the non-diagonal 3008 dimensional covariance matrix, denoted Ṽ, to be used in the optimal filter. The propagation is simple once we have determined the linear transformation between the two domains:

S̃_i = Σ_{j=1}^{32768} G_ij·β_j.    (5.11)

³ This analogy can be carried through to understand why d+1, not d, traces are required to probe d dimensions. Three linearly independent points only establish a plane; a fourth linearly independent point probes the third dimension.

⁴ Off-diagonal elements due to non-stationary noise (i.e. eq. 39 in Rito’s OF theory note [44]) could certainly be included in this framework and included in V_β. However, computing the NSOF covariance matrix could present challenges due to the non-hexadecimated readout of real events instead of noise traces. Fortunately, no sparsification of the NSOF matrix would be needed in order to invert it, because the full propagation would still be as easy as eq. 5.12.
We determine G by computing its columns explicitly, one-by-one, by the following procedure:

1. produce an un-averaged time domain trace equal to one of the orthogonal modes of the β vector (i.e. sin(2πf_n) or cos(2πf_n) where f_n = n × 19.07... Hz)
   • for example, sin(2πf_2) gives β = [0, 0, 0, 0, 1, 0, 0, ...]
   • cos(2πf_2) gives β = [0, 0, 0, 0, 0, 1, 0, ...]
2. downsample this waveform to obtain the 3008 sample trace
3. determine the S̃ coefficients of this trace by multiplying the time domain trace by A⁻¹, as described in sec. 5.3; this S̃ vector is one column of G
4. repeat for each sine and cosine orthogonal mode of the β basis, up to the Nyquist frequency at n = 16384

It is interesting to look at the rows of G, because they indicate which of the 32768 orthogonal sine and cosine coefficients contribute to a single non-orthogonal 3008 sine or cosine coefficient. Fig. 5.4 makes it clear that power aliasing is scrambling the hybrid Fourier coefficients.

With the 32768 columns of G determined, the variances of the β coefficients along the diagonal of Ṽ_β are propagated to the variances/covariances of the S̃ coefficients, Ṽ, according to:

Ṽ_ij = Σ_{k=1}^{32768} Σ_{l=1}^{32768} G_ik·Ṽ_β,kl·G_jl ;  Ṽ = G·Ṽ_β·Gᵀ.    (5.12)

G need only be computed once, because the linear transformation between the two domains is of course constant. Ṽ_β and Ṽ depend on the noise, and they must be redetermined once per series.

Figure 5.4: There are 16 frequency components in the orthogonal basis that contribute to the cosine coefficient at ∼9538 Hz in the non-orthogonal basis. These frequencies match the predicted alias frequencies derived in Appendix B.

5.4.3 Inversion by Cholesky Decomposition

Cholesky decomposition is an efficient method of inverting positive definite matrices, and the decomposition gains a factor of 2 in operation count over the more standard LU decomposition. The inversion is still an O(N³) process.
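The factor-and-solve pattern can be sketched with scipy's Cholesky routines in place of ROOT's TDecompChol (toy shapes; in practice one factors V once and back-substitutes rather than forming V⁻¹ explicitly):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
d = 200                                 # toy stand-in for the 3008-dimension problem
X = rng.standard_normal((7 * d, d))     # M = 7d white-noise "traces"
V = X.T @ X / (7 * d)                   # symmetric positive definite sample covariance

factor = cho_factor(V)                  # one O(d^3) Cholesky factorization of V
A = rng.standard_normal(d)              # template vector
VinvA = cho_solve(factor, A)            # solves V x = A by two triangular back-substitutions
print(np.allclose(V @ VinvA, A))        # True
```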
Like the LU decomposition, it is a triangular decomposition, but due to positive definiteness the upper triangular matrix can be chosen to be the transpose of the lower triangular matrix. Choleski decompositions are covered thoroughly in Section 2.9 of Numerical Recipes [38], and they are also implemented in ROOT with the TDecompChol class.

5.5 Time Domain Covariance Matrix and χ²

Because the frequency domain of the hybrid waveforms is not one in which the noise is diagonal, there is no advantage to working in that domain. The frequency domain is in fact less efficient, because transforming any hybrid waveform into the frequency domain requires an O(N²) multiplication by A, as discussed in sec. 5.3.

To be clear, we continue to compute Ṽβ, which uses the full uniformly sampled noise traces, in the frequency domain. We still utilize the convenient approximation that this matrix is diagonal; but instead of propagating those variances to Ṽ, we propagate them to the time domain covariance matrix. This follows the same procedure outlined in sec. 5.4.2, except that G is now a different matrix, G′, which is the linear transformation from the 32768 dimensional orthogonal frequency domain into the 3008 point hybrid time domain. We compute the columns of G′ exactly as outlined in sec. 5.4.2, but we skip step 3 of the procedure. Eq. 5.12 still gives the correct propagation of variances/covariances between domains. Because we will use the hybrid time domain covariance matrix for the remainder of this note, we denote it simply by V.

The tildes are then removed from the χ² pulse amplitude fit of eq. 5.2, which now takes the form:

    χ²(a) = Σ_{i=0}^{3007} Σ_{j=0}^{3007} (S_i − aA_i)(V⁻¹)_ij(S_j − aA_j) = (S − aA)ᵀ V⁻¹ (S − aA)    (5.13)

and by the normal equations the best fit pulse amplitude is:

    â = [Aᵀ V⁻¹ A]⁻¹ Aᵀ V⁻¹ S.    (5.14)

Eq. 5.14 might seem too computationally intensive to be carried out for each trace that is read out, but it is an O(N) operation once the vector on the left side of the dot product is computed. The quantity in brackets reduces to a single number, so no matrix (other than the covariance matrix) is inverted.

5.6 Amplitude and Time Offset Estimation Results

In order to check the performance of the HOF, we simulate the 52.4 ms pulse and noise waveforms. To build a uniformly sampled 52 ms noise trace we start with an underlying noise PSD (J). We then generate the Fourier transform of the trace by randomly drawing the cosine and sine coefficients from appropriate gaussian distributions; at frequency f_k the gaussian distribution has µ = 0 and σ = √J(f_k). Applying an IFFT gives the desired time domain noise trace. If a realistic pulse is desired, the pulse template, scaled to a certain amplitude, is added to the noise.

5.6.1 Phonon Pulse Simulation

We compare the pulse amplitude resolution, σ²_â, of the HOF to different 1D optimal filters. With the ability to generate realistic noise and pulse waveforms that are uniformly sampled at 625 kHz and 52.4288 ms long, we test (1) the HOF with the 3008 point downsampled waveform, (2) a 1D OF using the 32768 point full waveform, (3) a 1D OF using 1024 points (1.64 ms), and (4) a 1D OF using 2048 points (3.28 ms).

The covariance matrix for the HOF is computed using 1000 full waveform noise traces. PSDs for the 1D OFs are computed from the appropriate sections of the same 1000 noise waveforms.

For the template, A, we use the double decaying exponential functional form:

    A(t) = A_f [1 − exp(−(t − T_f)/R_f)] exp(−(t − T_f)/F_f) + A_s [1 − exp(−(t − T_s)/R_s)] exp(−(t − T_s)/F_s)    (5.15)

to generate the discrete time domain points, A_k. Fig. 5.5 shows an example of the uniformly sampled 52.4 ms template; the inset shows the template for the 1024 point OF, which is also the fast sampled portion of the hybrid waveform.

Figure 5.5: A phonon template, with A_f = 0.9, T_f = 2771 µs, R_f = 25 µs, F_f = 175 µs, A_s = 0.4, T_s = 2771 µs, R_s = 100 µs, F_s = 816 µs. The inset shows the 1024 points of fast sampled data.

5.6.2 Pulse Amplitude Resolution

For the table below, we perform the 4 different OFs on 10000 simulated signals with amplitude 100 and measure the amplitude resolution, σ²_{a=100}. We also measure a baseline resolution for 10000 signals with amplitude 0, with the time offset search turned off, denoted σ²_BL. We report only σ²_BL because it was found to be statistically equivalent to σ²_{a=100}. For the 1D optimal filters we also report the expected resolution, σ²_EX, given by:

    σ²_EX = [ Σ_{n=−N/2}^{N/2−1} |Ã_n|²/J_n ]⁻¹    (5.16)

For each noise model, we normalized the PSD such that the expected amplitude estimate resolution of the 32768 point OF would equal 9.0, as shown in the table for σ²_EX.

Looking down the columns of table 5.4, the resolutions decrease as expected for the longer 1D OFs, and they agree with the expected resolution. Also, the 3008 point HOF performs as well as the full 32768 point OF.

                           white    1/√f     1/f      1/f       1/f
                           (wt.)    + wt.    + wt.    +wt.+δ    +wt.+∼
    1024 pt OF    σ²_BL    10.2     10.96    11.22    16.6      18.4
                  σ²_EX     9.0      9.8     11.4     14.9      17.7
    2048 pt OF    σ²_BL     9.12     9.0      9.42    11.56     17.19
                  σ²_EX     9.0      9.0      9.24    11.4      17.2
    32768 pt OF   σ²_BL     9.12     9.18     9.18     9.18      8.76
                  σ²_EX     9.0      9.0      9.0      9.0       9.0
    3008 pt HOF   σ²_BL     9.01     9.18     8.91     8.9       9.0

Table 5.4: Baseline and expected resolution of different length 1D OFs and the HOF. The HOF performs as well as the 32768 point 1D OF. The uncertainty on the resolutions is σ_{σ²} = σ² √(2/(N−1)), so σ_{σ²} = σ² × 0.014 for N = 10000, plus a small additional factor from the covariance matrix estimation. 1/f+wt.+δ has large noise peaks injected at 100, 1300, and 2200 Hz. 1/f+wt.+∼ has the normal white and 1/f features, on top of which we add a sinusoid in J, the PSD, up to 40 kHz. While the HOF seems to have superior resolution in most cases, this improvement is mostly within 1σ uncertainty of the standard deviation, and thus we take the HOF and the 32768 point OF to perform equally well.

As you go to longer traces, the expected resolution of eq. 5.16 gets better for a number of reasons. You begin to more finely resolve the changes in the template and the PSD, which allows for an optimal filter Ã*_n/J_n that does a better job of weighting the frequencies with the best signal-to-noise. Equivalently, the longer traces bring the sum in eq. 5.16 closer to its continuous analog:

    σ²_EX = [ ∫ df |Ã(f)|²/J(f) ]⁻¹,    (5.17)

which tracks the continuous variations of |Ã(f)|²/J(f); by resolving the features of the integrand, the area is maximized. For example, if there is no difference between the signal-to-noise at f = 20 Hz vs. f = 100 Hz, you pay no penalty by weighting these bins equivalently in the optimal filter; a frequency sensitivity of ∆f = 80 Hz is equally optimal as ∆f = 20 Hz (for the example of these two bins). But we expect that low frequency features in the PSD and template will not make |Ã(f)|²/J(f) constant from f = 20 Hz to f = 100 Hz, and so ∆f < 80 Hz is optimal.

Longer traces also reduce power leakage from frequency to frequency, because ∆f decreases, which also increases the sum. Additionally, and most importantly, since the template |Ã| is largest at the lowest frequencies, it is extremely beneficial to measure the lowest frequencies down to ∼19 Hz, and resolve those high signal-to-noise points from the zero-frequency.
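The expected resolution of eq. 5.16 is simple to evaluate numerically from a template and a PSD, which is the kind of cross-check behind the σ²_EX rows of table 5.4. Below is an illustrative Python/NumPy sketch; the one-sided PSD layout and the 1/N DFT normalization are our assumptions, not necessarily the conventions of the analysis code.

```python
import numpy as np

def expected_resolution(template, psd_onesided):
    """Eq. (5.16): sigma^2_EX = [ sum_n |A_n|^2 / J_n ]^{-1}.

    template     : (N,) time-domain template A_k, N even.
    psd_onesided : (N//2 + 1,) noise PSD J_n on the rfft frequency grid.
    """
    A = np.fft.rfft(template) / template.size  # DFT with 1/N normalization
    w = np.abs(A) ** 2 / psd_onesided
    # Rebuild the two-sided sum of eq. (5.16): DC and Nyquist appear once,
    # every strictly positive non-Nyquist frequency appears twice.
    total = w[0] + w[-1] + 2.0 * w[1:-1].sum()
    return 1.0 / total
```

One immediate sanity check: scaling the PSD by a constant scales σ²_EX by the same constant, since eq. 5.16 is linear in J.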
Clearly from table 5.4, the HOF does this just as well as the full 32768 point OF.

5.7 Time Offset Resolution

Whereas the amplitude estimate substantially improves by measuring the lowest frequencies of the template, signal, and noise, a time offset estimation using a 1.64 ms trace (1024 points) is nearly as good as one using the full 52.4288 ms trace. It is not surprising that high frequencies are more important in the time offset estimation than in the amplitude estimation, and this additional weight is expressed in the extra factor of f² in the time offset expected resolution:

    σ²_{t̂0,EX} = [ â² Σ_{n=−N/2}^{N/2−1} (2πf_n)² |Ã_n|²/J_n ]⁻¹.    (5.18)

Therefore, we resort to doing a two step fit for t̂0 and â. We apply a standard 1D OF to the fast sampled data to determine t̂0 (and a poor â), then shift our 52.4 ms time domain template of eq. 5.13 by the offset, and then determine â.

Table 5.5 shows the slight inferiority of using 1024 points rather than 32768 points for the time offset fit. Also, σ_{t̂0} was independent of t̂0 as long as the offset was within a reasonable search window (−100 µs, +100 µs), as expected. I also show that the two-stage fit does not affect the pulse amplitude estimation. For a = 100, we fit for â first with an unknown t0 = 100 µs and second with a known t0 = 0, and obtain statistically equivalent resolutions. We also saw that for a = 20, as the t0 estimation deteriorates, the â estimate is affected no more than for the 32768 point 1D OF.

                                           white (wt.)   1/√f + wt.   1/f + wt.
    1024 pt OF    σ_t0 for a = 100           4.9 µs        3.1 µs       2.4 µs
                  σ_t0 for a = 20             51 µs         31 µs        19 µs
    32768 pt OF   σ_t0 for a = 100           4.4 µs        2.6 µs       2.1 µs
                  σ_t0 for a = 20             49 µs         25 µs        16 µs
    3008 pt HOF   σ²_{a=100} w. t offset       9.2           9.2          9.0
                  σ²_{a=100} w/o t offset      9.1           9.2          8.9

Table 5.5: Time offset resolution results and the time offset's effect on the pulse amplitude estimation. The time offset estimation with 1024 points is nearly as good as with 32768 points.

5.8 Data Volume Consequences

In order to build the charge and phonon covariance matrices, we either require ∼21000 phonon and ∼64000 charge hybrid random noise triggers, or ∼1000 full un-averaged randoms. A new set of noise randoms will be needed per series, or at least after some amount of time over which we believe the noise is changing (we call this ∆T). This data volume from randoms is ∼5 times larger than the data volume projected for the true triggers during Ba and WIMP search runs.

In table 5.6, we copy the projected data values from the trigger task force report, and include the additional data contribution needed for computing the HOF [43].

For the un-averaged randoms read out, the full size per year is given by:

    (M/∆T) × 76 MB/event × 3.15×10⁷ sec/year × 0.835    (5.19)

where we have used M = 1000 and ∆T = 3 hours for the table (1000/3 hr ≈ 0.1 Hz). The 76 MB/event comes from 12 full phonon waveforms (∼64 kB each) and 4 full charge waveforms (∼262 kB each), so 1.8 MB per detector, multiplied by 42 detectors.

The numbers here are conservative, and there are a number of ways to improve the final data volume. It is possible that M = 500 is sufficient to build Ṽβ. It is also possible that ∆T can be greater than 3 hours. Additionally, a noise monitor could be implemented to gauge changes in the noise; it could use a smaller rate of randoms, but could trigger a full recomputation of V if need be. This could be a job for the L3 trigger. Last, some of the un-averaged randoms could be used to compute Ṽβ but never written to disk.

We also include data throughput values for the brute force computation of V, assuming readout of d × 7 hybrid randoms. Since d = 9172 for the charge V, we use M = 64000 per ∆T, even though M = 21000 would be sufficient for the phonon V. While we win by a factor of 13 in event size, we lose by a factor of 60 in the required rate. The 5.9 Hz event rate is clearly unreasonably high; with 52 ms traces, this would mean reading out ∼300 ms of data for every 1 s of real time.

Unless the M ≈ d × 7 requirement can be improved, full readout of the randoms is clearly superior.

    Mode        Rate      Event Size        Max Rate    Live       Ave. Rate
                          (MB × # det.)                 Fraction
    Ba          210 Hz    0.14 MB           29 MB/s     0.035      0.6 TB/wk
                          (0.14 MB × 1)                            32 TB/year
    WIMP        0.03 Hz   5.88 MB           0.15 MB/s   0.8        0.07 TB/wk
    search                (0.14 MB × 42)                           3.9 TB/year
    error       0.1 Hz    76 MB             7.6 MB/s    0.835      200 TB/year
    prop. V               (1.8 MB × 42)
    brute       5.9 Hz    5.88 MB           34 MB/s     0.835      897 TB/year
    force V               (0.14 MB × 42)

Table 5.6: Projected data throughput values for 42 iZIPs. A 5 Hz trigger rate per detector is assumed for the Ba calibration. The live fraction is the assumed fraction of operation time spent in the specified mode. The 200 TB/year and 897 TB/year values assume recomputation of the covariance matrix every 3 hours (potentially naïve).

5.9 Conclusion

I introduce a method for optimal filtering the hybrid sampled waveforms proposed for SNOLAB data acquisition. I show that the pulse amplitude estimation performance of our hybrid optimal filter is just as good as that of a 1D optimal filter with the full non-averaged waveform.

My method does require a matrix inversion, but it computes a single optimal filter amplitude estimate in O(N) operations; it is similar in efficiency to the currently implemented non-stationary OF.

Power aliasing, due to the averaging process, introduces off-diagonal elements in the frequency domain covariance matrix, which eliminates the convenience of that domain that existed for 1D OFs. We therefore recommend determining the covariance matrix and evaluating χ²(â) in the time domain. Reliably computing the covariance matrix and its inverse is the most challenging step of the process, where either ∼d × 7 hybrid noise traces or ∼500 un-averaged noise traces are required (where d is the dimension of the covariance matrix). We recommend the latter from a data throughput and efficiency standpoint.

Last, these findings lead to a ∼×6.5 increase in our projected data volume, from ∼36 TB/yr to ∼236 TB/yr (see section 5.8), but this increase seems manageable.

Bibliography

[1] J. H. Oort, Proceedings of the National Academy of Science, 10, pp. 256-260, 1924.
[2] F. Zwicky, Astrophys. J., 6, pp. 110-127, 1933.
[3] V. Rubin et al., Astrophys. J. 225, 107, 1978.
[4] Y. Sofue and V. Rubin, Ann. Rev. Astron. Astrophys. 39, 137, 2001. arXiv:0010594 [astro-ph].
[5] J.-P. Kneib, R. S. Ellis, I. Smail, W. J. Couch, and R. M. Sharples, Astrophys. J. 471, 643, 1996. arXiv:9511015 [astro-ph].
[6] A. Vikhlinin et al., Astrophys. J. 692, 1033, 2009. arXiv:0805.2207 [astro-ph].
[7] A. Vikhlinin et al., Astrophys. J. 628, 655, 2005. arXiv:0412306 [astro-ph].
[8] CHANDRA X-ray Observatory/NASA, http://chandra.harvard.edu (accessed Aug. 15, 2015).
[9] M. Markevitch, arXiv:0511345 [astro-ph].
[10] C. Alcock et al., Astrophys. J. 550, L169, 2001.
[11] Planck Collaboration, arXiv:1303.5076 [astro-ph].
[12] Planck Collaboration, Astronomy & Astrophysics, 571, 2014.
[13] S. Dodelson, Modern Cosmology. San Diego, CA: Academic Press, 2003.
[14] S. D. M. White et al., Astrophys. J. 209, 27, 1984.
[15] S. Weinberg, Physical Review Letters 40, 223, 1978.
[16] S. J. Asztalos et al., Physical Review D 74, 12006, 2006.
[17] E. Kolb and M. Turner, The Early Universe. Westview Press, 1994.
[18] K. Griest, Annals of the New York Academy of Sciences, 688, pp. 390-407, 1993.
[19] M. Kamionkowski, High Energy Physics and Cosmology Conference, WIMP and Axion Dark Matter, 1997.
[20] P. J. Fox et al., Physical Review D 85, 56011, 2012.
[21] A. Geringer-Sameth et al., arXiv:1503.02320 [astro-ph].
[22] IceCube Collaboration, arXiv:1405.5303 [astro-ph].
[23] M. W. Goodman and E. Witten, Physical Review D 31, 3059, 1985.
[24] S. Hertel, Advancing the Search for Dark Matter: from CDMS II to SuperCDMS, Ph.D. Thesis, Department of Physics, Massachusetts Institute of Technology, 2012.
[25] S. K. Lee, M. Lisanti, A. H. G. Peter, and B. R. Safdi, Phys. Rev. Lett. 112, 011301, 2014.
[26] D. Moore, A Search for Low-Mass Dark Matter with the Cryogenic Dark Matter Search and the Development of Highly Multiplexed Phonon-Mediated Particle Detectors, Ph.D. Thesis, Division of Physics, Mathematics and Astronomy, California Institute of Technology, 2012.
[27] R. Bernabei et al., Journal of Physics: Conference Series 375, 012002, 2012.
[28] G. F. Knoll, Radiation Detection and Measurement (2nd Edition). New York: John Wiley and Sons, 1989.
[29] SuperCDMS collaboration public website, http://cdms.berkeley.edu/gallery.html (accessed Aug. 15, 2015).
[30] B. Cabrera, Oblique propagation, presented at the CDMS Detector Monte Carlo Workshop, 2009.
[31] J. Lindhard et al., Math. Fys. Medd. 33 and 36, (10 and 36), 1963 and 1968.
[32] K. Irwin and G. Hilton, Topics in Applied Physics, 99, 2005. http://www.springerlink.com.ezproxy.stanford.edu/content/2chgqa70h5yd1pw5/ (accessed Aug. 15, 2015).
[33] D. Brandt, M. Asai, P. L. Brink, B. Cabrera, E. do Couto e Silva, M. Kelsey, S. W. Leman, K. McArthy, R. Resch, D. Wright, and E. Figueroa-Feliciano, J. Low Temp. Phys. 167, 485, 2013.
[34] R. Agnese et al., arXiv:1305.2405, 2012.
[35] The MIDAS Data Acquisition System, https://midas.triumf.ca/MidasWiki/index.php/MainPage (accessed Aug. 15, 2015).
[36] R. Brun and F. Rademakers, Nuclear Instruments and Methods A: Accelerators, Spectrometers, Detectors and Associated Equipment 389, pp. 81-86, 1997.
[37] J. Filippini, A Search for WIMP Dark Matter Using the First Five-Tower Run of the Cryogenic Dark Matter Search, Ph.D. Thesis, Department of Physics, University of California, Berkeley, 2008. http://cdms.berkeley.edu/Dissertations/ (accessed Aug. 15, 2015).
[38] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C++. Cambridge: Cambridge University Press, 1992.
[39] A. Dutt and V. Rokhlin, Fast Fourier transforms for nonequispaced data, SIAM J. Sci. Comput. 14, pp. 1368-1393, 1993.
[40] L. Greengard and J.-Y. Lee, SIAM Review, 46, 3, pp. 443-454, 2004. http://www.cims.nyu.edu/cmcl/nufft/nufft.html (accessed Apr. 21, 2015).
[41] O. Ledoit and M. Wolf, Journal of Multivariate Analysis 88, pp. 365-411, 2004.
[42] J. D. Scargle, Ap. J. 263, pp. 835-853, 1982.
[43] SuperCDMS SNOLAB Trigger Task Force report, November 17, 2014. Internal CDMS document link: https://confluence.slac.stanford.edu/display/CDMS/SuperCDMS-SNOLAB+Trigger+Task+Force (accessed Aug. 4, 2015).
[44] R. B. Thakur, Notes on Optimal Filter Theory, 2010. Internal CDMS document link: http://cdms.berkeley.edu/wiki/lib/exe/fetch.php?media=software:cdmsbats_batroot_user_guide:ofnotes_nsofedit.pdf (accessed June 2, 2015).

Appendix A
The Imaginary Component of Zero and Nyquist Frequency

Here we prove that the imaginary component of the zero frequency and Nyquist frequency must be zero for uniformly and hybrid sampled traces.

There is a symmetry in the DFT coefficients about the zero frequency for uniformly sampled real time domain points:

    g̃_n = g̃*_{−n}.    (A.1)

Explicitly,

    n          −N/2      −N/2+1        ...   −1      0     1     2     ...   N/2−1
    Re(g̃_n)    r_{N/2}   r_{N/2−1}     ...   r_1     r_0   r_1   r_2   ...   r_{N/2−1}
    Im(g̃_n)    0         −i_{N/2−1}    ...   −i_1    0     i_1   i_2   ...   i_{N/2−1}

where the negative-frequency entries (boxed in the original layout) are overdetermined due to the symmetry. Notice that g̃_0 = g̃*_0 requires that Im(g̃_0) = 0. Said differently, Im(g̃_0) = 0 because g̃_0 is the average of the time domain trace, and because our trace components g_k are real, the imaginary part of the average of the g_k components is zero.

Im(g̃_{−N/2}) = 0 for a separate reason that is related to aliasing. For frequencies exceeding the Nyquist frequency but not exceeding twice the Nyquist frequency (N/2 < n ≤ N), it can be shown that

    g̃_n = g̃*_{N−n},    (A.2)

(proof in Appendix B) and therefore g̃_{N/2} = g̃*_{N/2}, giving Im(g̃_{−N/2}) = 0.

Note that the proof of eq. A.2 depends on uniformly sampled data. It therefore does not hold, in general, for the hybrid Fourier coefficients. However, because only the fast sampled data has any sensitivity to the Nyquist Fourier coefficient, and the fast sampled data is uniformly sampled, the symmetry holds for the Nyquist Fourier coefficient of the hybrid traces.

Appendix B
Derivation of Aliasing Frequencies

The standard DFT of an N point time domain trace, g, sampled at rate R and spanning time T is given by:

    g̃_n = (1/N) Σ_{k=−N/2}^{N/2−1} g_k exp(−2πi f_n t_k)    (B.1)

where g̃_n is evaluated for n = −N/2 → N/2 − 1 in integer steps, f_n = n/T, and t_k = k/R. Therefore the Nyquist frequency, f_Nyq = R/2, corresponds to n = TR/2 = N/2.

In order to derive which frequencies above f_Nyq alias to a single frequency below f_Nyq, we calculate the DFT of a time domain trace composed of a single frequency greater than the Nyquist, f_GN. The time domain trace, with amplitude 1, is:

    g_k = exp[2πi f_GN t_k]    (B.2)

and we define

    f_GN = (1/T)[N/2 + m],    (B.3)

so our signal's frequency exceeds the Nyquist frequency by m/T. The DFT of this trace becomes:

    g̃_n = (1/N) Σ_{k=−N/2}^{N/2−1} exp[−2πi t_k (f_n − f_GN)]    (B.4)

    g̃_n = (1/N) Σ_{k=−N/2}^{N/2−1} exp[−2πi (k/N)(n − N/2 − m)]

Notice that if the argument of the exponential is equal to an integer multiple of 2π, that is:

    g̃_n = (1/N) Σ_{k=−N/2}^{N/2−1} exp[−2πi z]

where z ∈ ℤ, each element of the sum equals 1 + 0i and they add coherently (footnote 5). Without coherent adding, each term will be between −1 and 1, and the sum will average to ∼0.

Therefore (k/N)(n − N/2 − m) = z is the condition for the signal to alias to the g̃_n coefficient, which boils down to (n − N/2 − m)/N = z because k is an integer. Solving for m and plugging back in for f_GN gives the frequencies that alias to n/T:

    f_GN = (1/T)[−Nz + n].    (B.5)

As an example, the frequencies that alias to N/(6T) are:

    ..., −(11/6)(N/T), −(5/6)(N/T), (7/6)(N/T), (13/6)(N/T), ...

and recalling the symmetry of Fourier coefficients about the zero frequency for real traces (footnote 6), g̃_n = g̃*_{−n}, the complete list of aliased frequencies is:

    ..., −(13/6)(N/T), −(11/6)(N/T), −(7/6)(N/T), −(5/6)(N/T), (5/6)(N/T), (7/6)(N/T), (11/6)(N/T), (13/6)(N/T), ...
All frequencies that alias to n/T can therefore be expressed concisely as:

    f_alias = z(N/T) ± n/T.    (B.6)

Using eq. B.6, we compute the f_alias values as they correspond to fig. 5.4. We are examining aliasing below the slow sampled Nyquist frequency of hybrid phonon waveforms, f_Nyq,slow = 19531.25 Hz, and so N/T = 39062.5 Hz. We are also examining aliasing to 9538 Hz, so n/T = 500/52.4288 ms = 9538 Hz. The f_alias, for z = 1 → 8, are:

    29524.5, 48600.5, 68587, 87663, 107649.5, 126725.5, 146712, 165788, 185774.5, 204850.5, 224837, 243913, 263899.5, 282975.5, 302962

which agree with the frequencies in figure 5.4.

Footnote 5: Notice that there is an additional proof (omitted here) for the coherent adding of the imaginary components.

Footnote 6: It seems wrong to cite this symmetry given that we used a complex time domain trace (eq. B.2), but had we used g_k = cos[2πf_GN t_k] we still would have obtained eq. B.5, just with more steps.
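The list above is easy to regenerate from eq. B.6. The sketch below is hypothetical Python; cutting the list off at the fast-sampled Nyquist frequency of 312.5 kHz is our assumption about why 302962 Hz is the final entry.

```python
# Frequencies that alias to n/T = 9538 Hz in the hybrid phonon waveforms,
# from eq. (B.6): f_alias = z*(N/T) +/- n/T.
N_OVER_T = 39062.5       # Hz; twice the slow-sampled Nyquist of 19531.25 Hz
TARGET = 9538.0          # Hz; the aliased-to frequency examined in fig. 5.4
FAST_NYQUIST = 312500.0  # Hz; 625 kHz / 2 (assumed upper cutoff)

def alias_frequencies(zmax=8):
    """Return the sorted alias frequencies for z = 1..zmax, both signs."""
    freqs = [z * N_OVER_T + sign * TARGET
             for z in range(1, zmax + 1) for sign in (-1.0, 1.0)]
    return sorted(f for f in freqs if f <= FAST_NYQUIST)
```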

