EVALUATING OBSERVING STRATEGIES FOR THE BAM COSMIC MICROWAVE BACKGROUND ANISOTROPY EXPERIMENT

By Colin James Kelvin Borys
B.Sc. (Engineering Physics) University of Saskatchewan, 1993

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF PHYSICS AND ASTRONOMY

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
October 1997
© Colin James Kelvin Borys, 1997

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Physics and Astronomy
The University of British Columbia
6224 Agricultural Road
Vancouver, B.C., Canada V6T 1Z1

Abstract

An observing strategy for the Balloon-Borne Anisotropy Measurement (BAM) is chosen by performing simulations on a computer. Several strategies are modeled, and the performance of each is graded by its ability to constrain a parameterized description of the Cosmic Microwave Background (CMB) angular power spectrum. We have chosen a model that is described by a normalization and slope in the region of the power spectrum BAM is sensitive to.
We argue that during the next flight of BAM from Palestine, Texas, a 20° azimuthal scan centered on the meridian at a declination of 70° will, in six hours of data collection, verify anisotropy at scales of a few degrees, and determine whether or not the power spectrum is rising towards an expected Doppler peak at scales of ~1°. All of the steps necessary to conduct such an investigation are described in detail. Particular attention is given to developing efficient analysis techniques that make manipulation of the large data sets feasible. This is necessary to conduct Monte-Carlo studies of the instrument's ability to constrain the model parameters. Based on these results, we formulate an objective for the next flight of BAM that includes a detailed measurement of its beam size and shape; these details are necessary to avoid biasing the maximum likelihood fits of the model parameters. Although BAM is chosen as a particular example, the techniques are general enough to be applied to any similar experiment that studies CMB temperature anisotropies.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgments

1 Introduction
1.1 The Cosmic Microwave Background
1.1.1 The Thermal History of the Universe
1.1.2 Inflation
1.1.3 Sound Waves in the Early Universe: CMB Anisotropies
1.2 The BAM Experiment
1.3 A Note on Computers

2 Map-making
2.1 Introduction
2.2 Mathematical Framework
2.3 Theoretical Power Spectra
2.3.1 Definition
2.3.2 The Sachs-Wolfe (Flat) Angular Power Spectrum
2.4 Constructing a Map of the CMB via Multipole Expansion
2.5 Small Area Maps of the CMB using a Fast Fourier Transform (FFT)

3 Scanning Strategy 1: Basics
3.1 Window Functions
3.2 Modifying the Observing Strategy to Choose the Scale of Sensitivity
3.3 The Importance of Off-Diagonal Components in the Correlation Matrix
3.4 A First Attempt at an Observing Strategy
3.4.1 A Useful Rule of Thumb
3.4.2 Smooth Scans Across the Sky

4 Statistical Analysis of the Data
4.1 Basic Maximum Likelihood Theory
4.2 The Maximum Likelihood Method Applied to Anisotropy Data
4.3 Signal to Noise Eigenmode Decomposition
4.3.1 The Noise Covariance Matrix
4.4 Parameterization of the Estimator
4.4.1 Efficient Application of the K-L Transformation
4.5 The Prior
4.6 The Frequentist Approach

5 Scanning Strategy 2: Specifics
5.1 The Test Cases
5.1.1 Constructing the Correlation Matrix and Finding the Eigenvalues
5.1.2 A Single Realization
5.1.3 Many Realizations
5.2 The 20° Azimuthal Scan
5.2.1 Data Analysis
5.2.2 Measuring a Realistic Sky
5.2.3 Uncertain Beam Parameters
5.3 The Next Flight of BAM

Bibliography

List of Tables

3.1 Sampling the CMB using azimuthal scans
5.1 The five observing strategies modeled in this thesis
5.2 Statistics describing the likelihood fits of the 5 strategies under test

List of Figures

1.1 The angular power spectrum of CMB anisotropies
1.2 A schematic of the BAM experiment
2.1 Aitoff projections of various maps
2.2 The effect of beam smearing on the angular power spectrum
2.3 Six maps of the CMB created using the FFT algorithm
3.1 The zero lag window function for the BAM experiment
3.2 Affecting the window function by changing the parameters of the beams
3.3 Altering the window function by changing the scan pattern
3.4 Window function for the 6 beam scan
3.5 Correlating data points taken at a given angle, θ, apart
3.6 Using off axis window functions to increase the sensitivity to the shape of the angular power spectrum
3.7 Determining the optimal correlation angle to get maximum sensitivity of the slope and curvature of the power spectrum
3.8 Two types of scans for the next BAM flight
5.1 Likelihood contours for five different scanning strategies
5.2 Monte-Carlo results for the five strategies
5.3 Monte-Carlo statistics for the 20° smooth scan
5.4 The S/N eigenvalues for the 20° scan
5.5 The effect of truncating eigenmodes on the likelihood fits
5.6 The effect of truncating eigenmodes on the likelihood fits, using the simplified parameterization of the correlation matrix
5.7 Measuring a realistic sky
5.8 How uncertainty in the beam size affects the likelihood fits

Acknowledgments

It is standard practice to acknowledge the help of one's advisors in this section of the thesis. However, even if it were not, I would still be compelled to do so. Mark Halpern proved to be an excellent resource for questions on experimental cosmology, and in fact, experimental physics in general. He generously diverted lab funds to purchase the computing power required to do the calculations described in this thesis, and he even believed me when I told him it wouldn't be used to play DOOM. On the theoretical end, Douglas Scott was invaluable. He wasn't even my supervisor in any official capacity, yet he found the time to answer any questions about cosmology that I put to him. It was difficult to work in an environment sparsely populated with people who share the fascination and excitement for cosmology, let alone who do it for a living and understand the subtle points that new recruits to the field invariably fumble over.
This is all the more reason I am indebted to Mark and Douglas for enduring my tireless barrage of questions, and for trying to create an environment conducive to research. I would be remiss not to mention the support of Gregory Tucker who, as a member of the project since its inception, and as the person responsible for analyzing the data from BAM's first flight, bridged the gap between experiment and theory. His input often saved me hours of work that was doomed to head in the wrong direction. The two other graduate students in the lab, Chris Padwick and Miranda Jackson, provided just the right amount of insanity that any good research environment needs, and they were a wonderful audience for my own twisted sense of humour. I'm also grateful for all the friends who have in one form or another enriched my life at UBC; in particular Stephanie Curnoe, in part for her many useful comments on this manuscript, but especially for letting me fill a page in her book.

Chapter 1

Introduction

The Universe, as has been observed before, is an unsettlingly big place; a fact which, for the sake of a quiet life, most people tend to ignore.
Douglas Adams [1]

The Cosmic Microwave Background (CMB) Radiation pervades the universe and can be used to measure several fundamental quantities in cosmology, such as the geometry, age, and density of the universe. The early history and geometry of the universe are encoded in the anisotropies of the CMB, which are deviations about the mean temperature, T, of the radiation. Unfortunately, the signal level is extremely low (ΔT/T ~ 10^-5), and it is difficult for an experiment to study the CMB over the entire sky with acceptable signal-to-noise levels. There is a trade-off between sky coverage and observing time per spot, and a balance must be found to extract the maximum amount of information from the data.
The goal of this work is to simulate, using a computer, the Balloon-Borne Anisotropy Measurement (BAM) experiment to determine an optimal way to collect data. Only recently have CMB experiments like BAM been able to achieve the resolution and detector sensitivity necessary to constrain cosmological parameters. To fully understand their instruments' ability to measure these quantities, each experimental group must perform a detailed analysis of the instrument prior to taking data. In fact, such an analysis is used to govern how data are taken. There are two approaches to this problem: a Monte-Carlo simulation of the experiment, and an analytic solution that requires a few assumptions about the response of the instrument. Both will be used in this thesis, but the focus will be on the former, as it relates directly to the actual data collection, reduction, and analysis phases of a real experiment. The analytic solution is discussed near the end, but mostly in the context of how it can be adapted to streamline the analysis stage of the Monte-Carlo data.

This thesis will detail the simulation procedure in chronological order. In other words, assuming the reader was starting with little background, it could be used as a "recipe" for this or any similar CMB experiment. Chapter Two introduces the mathematical formalism that is standard in the field, and which is used exclusively in this thesis. Using this framework, two methods of creating a map of the CMB anisotropies that is consistent with a given cosmological model are discussed. The first involves calculating the temperature field using a spherical harmonic expansion. Although more accurate, it is computationally demanding, making it impossible to produce maps of the anisotropy quickly. An alternate method can be used provided the experiment does not observe a large portion of the sky, which is precisely the case for BAM.
For small enough patches, the sky is well approximated as a flat surface, and a Fast-Fourier Transform (FFT) routine can be used. The maps become more distorted as the size of the sky being studied increases because the approximation breaks down, but for the 60° × 60° maps required in the BAM simulation, the errors in computing angles between points are, at worst, about 2%. From these maps, an observing strategy is chosen and data are collected. The quantity being deduced from the data is the temperature variance of the anisotropies. When measured over all angular scales, this variance describes the angular power spectrum of anisotropies, which is used to constrain cosmological parameters. Experiments like BAM that are restricted to finite sky coverage are able to characterise the spectrum only within a region described by their "window function". Chapter Three describes how to choose an observing strategy that samples particular scales with sufficient signal-to-noise ratios. The scale that an experiment is most sensitive to is described by its window function. A "chopped" scan that measures discrete points on the sky offers high S/N, but does not easily allow the experiment to change its scale of sensitivity. A smooth scan trades more sample points for noise, but can be used to synthesize different window functions and measure more areas of the power spectrum.

Analysis of CMB anisotropy data is of particular importance, and Chapter Four is dedicated to describing the statistical techniques required to extract cosmological information from a BAM dataset. The idea is to compare the data with a theoretical correlation matrix that predicts how temperature differences correlate on different angular scales. For an experiment with N data points, the correlation matrix used is N × N.
BAM can potentially sample N ≈ 2000 points, so any brute-force numerical method of data reduction is time consuming, due to the computational difficulty in inverting large matrices. It is at this point that the analytic solution to the optimal observing strategy problem is introduced. It involves a series of mathematical operations based on the Karhunen-Loève transformation that diagonalize the matrices, making them trivial to manipulate. This is especially important when conducting a Monte-Carlo analysis to uncover hidden biases, where computation speed is essential to produce enough points for the statistics. Finally, the techniques developed throughout are applied to determine the data taking program for BAM's next flight. Chapter Five describes the analysis of two chopped and three smooth scans. Their performance is evaluated in terms of how well they can recover the parameters of a power spectrum characterised by a normalization and a slope.

The rest of Chapter One briefly reviews the theory and terminology associated with the cosmic microwave background (CMB). A more detailed discussion can be found in the textbooks by Peebles [2] and Padmanabhan [3], or the excellent notes by Lyth [4]. The advanced reader may wish to jump directly to the discussion of the BAM experiment, which appears near the end of this chapter.

1.1 The Cosmic Microwave Background

In 1965 Penzias and Wilson [5] measured a microwave signal that was cosmological in origin. Dicke et al. [6] correctly interpreted this as radiation left over from the creation of the universe, now cooled over time so that it appears to us as low energy photons in the microwave regime. This radiation serves as one of three "pillars" that support what we call the hot big-bang theory.

The second pillar is based on the observed redshifts of galaxies. In the 1920s, Edwin Hubble [7] noticed that the spectrum of distant galaxies is redshifted. We know from the relativistic Doppler effect that objects moving towards us appear more blue, and those traveling away are slightly more red, with a change in colour that is more pronounced as the velocity increases. Redshift, labeled z, is calculated [3] by comparing the wavelength of the light we observe with what it was when emitted by the source:

    1 + z = \frac{\lambda_{\rm observed}}{\lambda_{\rm emitted}}.    (1.1)

If the universe were "static", one would expect that some galaxies would be moving towards us, and some away, depending on the gravitational fields affecting them and their initial velocity vector. However, Hubble's work showed that galaxies are systematically receding, and he went on to prove that the further a galaxy is from the Earth, the faster it moves. We interpret this observation as a result of the expansion of the universe, in which the distance between every point in space increases over time. It is logical to conclude that since it is expanding, the universe must have been smaller in the past (strictly speaking, the universe is boundless and does not have a "size" per se; it would be more correct to say the space between things was smaller in the past), and consequently more dense. Known physics breaks down for sizes less than the Planck length (≈ 10^-33 cm), and thus it is not at all obvious that the universe started from a singularity, but it is clear that just after its birth, the universe was very dense. The big bang is a misnomer that describes the subsequent expansion of the universe.

The third pillar is the agreement between the observed abundances of primordial helium and other light elements, and the theory of big bang nucleosynthesis [3, 8], which is based on the well understood physics of the early universe.
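To make Eq. 1.1 concrete, here is a minimal Python sketch; the emitted wavelength is the Lyman-alpha line at 121.6 nm, and the observed value is an illustrative number chosen for the example, not data from this thesis.

```python
# Illustration of Eq. 1.1: redshift from observed vs. emitted wavelength.

def redshift(lambda_observed, lambda_emitted):
    """Return z such that 1 + z = lambda_observed / lambda_emitted."""
    return lambda_observed / lambda_emitted - 1.0

# Lyman-alpha is emitted at 121.6 nm; if it is observed at 364.8 nm,
# the wavelength has stretched by a factor of 3, so the source is at z = 2.
print(redshift(364.8, 121.6))
```

A source with no relative motion or expansion between emission and observation gives z = 0, recovering the "static" case discussed above.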
1.1.1 The Thermal History of the Universe

The early universe was populated with extremely energetic, or "hot", photons that prevented bound nuclei from forming. The photons were in thermal equilibrium with all these particles, forming a perfect blackbody distribution. The photons and matter (the matter consists of neutrons, protons, and electrons; cosmologists frequently substitute "baryon" for matter even though electrons are leptons, because the electron mass is negligibly small compared to the mass contribution of the nucleons) remained tightly coupled due to Compton scattering up until the number density of photons with an energy greater than that of neutral hydrogen dropped below the matter density. No longer able to ionize the hydrogen, the photons decoupled from the matter, and free-streamed (that is, traveled unimpeded, interacting with nothing) with no further electromagnetic interactions with matter until they strike our detectors. As the universe expands, the form of the blackbody spectrum is preserved, but its characteristic temperature decreases [3].

Cosmologists frequently use redshift as a measure of time. Objects that are far away appear to us as they were in the past, because it takes time for the light to reach us. From Hubble's work, we also know that objects far away from us are redshifted, and so the farther back we look in time, the higher the redshift is. The temperature of the CMB scales with redshift as

    T(z) = T_0 (1 + z),    (1.2)

where T_0 is the temperature of the CMB today, and T(z) is its temperature at the given redshift. The hydrogen in the early universe remained ionized until the temperature dropped below about 4000 K. Since the temperature of the CMB today is 2.73 K [9, 10], the redshift at decoupling is z ≈ 1400, corresponding to about 300,000 years after the big bang. Cosmologists refer to this event as the epoch of recombination.
The sphere around us at this redshift is the surface of last-scattering, and the photons that scattered towards us at decoupling are the ones CMB experiments study. Because the universe has been expanding since this epoch, the CMB photons have been redshifted considerably (in fact, by a factor of 1400!) and appear to us as microwaves with a temperature of 2.73 K.

1.1.2 Inflation

The universe is isotropic and homogeneous on scales above 300 Mpc (≈ 10^22 m), but below that it demonstrates considerable structure. Distances on this scale are what astronomers call "large", and it is generally assumed that this large scale structure (LSS) [11, 12] grew via gravitational instability over time from "seeds" of slightly over and underdense regions. Maps of galaxies show huge clusters and superclusters of galaxies [13], as well as immense "voids" where there are practically no galaxies. This observation supports the notion that there must be deviations in the isotropic CMB that are related to today's LSS, because the temperature of the CMB photons traces the density of baryons on the surface of last scattering. Prior to the results of COBE, however, the CMB was measured to be completely uniform and isotropic in temperature to the 1% level (i.e. ΔT/T < 10^-2) [14]. (The dipole was known, but not the cosmological contribution to it; that signal is primarily caused by the Earth's motion relative to the CMB.) In other words, there was no evidence that linked the CMB with the seeds of large scale structure.

The idea that the CMB should be uniform at all was troubling, and became known
This limiting scale is called the "horizon6", and the universe has expanded such that the horizon size at recombination subtends only about 1° on the sky today. If every one of.these 1° patches is independent, it is curious that the sky is so perfectly uniform in temperature. The theory of inflation, formulated in 1981 by Alan Guth [15], offered an explanation. Simply stated, within 1 0 - 3 2 seconds after the big bang, when the universe was still fully contained within one horizon size, an inflationary phase occurred where the universe expanded at an exponential rate. This satisfies the horizon problem by placing the whole universe within one causal patch, so that the contents did, in fact, have time to establish thermal equilibrium prior to inflation. The quantum fluctuations created by inflation were boosted to classical scales where they became big enough to act as the seeds of LSS. The anisotropy in the C M B is only one part in 105, so it safe to assume that the overdensities are small, ( y <C 1)- This justifies using a linear approximation, so we can decompose the perturbations in the density field in terms of Fourier modes and study the development of each independently. Prior to inflation, all modes have a size less than that of the horizon. During inflation, modes were boosted in scale until they reached the horizon size and were "frozen" in. After inflation is complete, the universe expanded normally, and in time eventually caught up with these frozen in modes, which are then said to "re-enter" the horizon. Only modes within the horizon at any given epoch can be operated on by physical processes, which explains why the universe on large scales appears uniform and isotropic. Inflation predicts the amplitude of the fluctuations, and for most generic models, the spectrum of fluctuations 6 Actual ly, since the universe is expanding as the light travels, the horizon is not strictly ct. For a matter dominated universe, the horizon is actually 3ct. Chapter 1. 
is scale invariant (i.e. the density perturbation has the same amplitude for all modes). It was not until the COBE detection of anisotropies [16] at the 10^-5 level in 1992 that the inflationary scenario was verified. Although there is no single theory of inflation, its ability to explain a variety of cosmological problems makes it the most popular extension of the standard big bang model.

1.1.3 Sound Waves in the Early Universe: CMB Anisotropies

Recall that prior to decoupling, the photons and baryons were tightly coupled due to Compton scattering. This "photon-baryon fluid" [17] responded acoustically to areas of local density perturbations. In an overdense region, the baryons fell into a gravitational potential well, but in doing so, their coupled photons eventually provided enough radiation pressure to push them away. This set up acoustic oscillations in the fluid, where gravity pulls the baryons into the well, and the photons provide the restoring force to push them out. At decoupling, the radiation pressure was finally released, and the baryons fell into the well, forming the seeds of the LSS we see today. The photons, on the other hand, free-streamed through the universe, carrying the imprint of the matter distribution at the time of decoupling [18].

The three dimensional power spectrum of the Fourier modes is given by P(k), with k the wavenumber of the mode [2]. Since from our perspective the surface of last scattering is a sphere, we can only study two dimensions. We use C_ℓ, the angular power spectrum of these modes, where ℓ is an integer and is related to angle on the sky via the spherical harmonics (ℓ ≈ 1/θ). Figure 1.1 shows the angular power spectrum of two representative cosmological theories.

Clearly, the density of photons coming from regions in their compression phase during decoupling will be higher, and for those in a rarefaction phase, lower. Consider a mode that is just coming into the horizon at decoupling.
It begins to collapse under its own gravity, but at the point where it becomes maximally dense, the photons decouple. One might expect that this would show up as a "hot spot" on the sky, but it turns out that these photons lose energy as they climb out of the gravitational well [19], and end up with a temperature lower than average. This deviation from average shows up as a peak on the power spectrum. This first peak is so important, it has a name: the "Doppler peak", and its position on the spectrum can be used [20, 18] to compute the density of the universe, ρ_0, relative to the critical density. Since the redshift at decoupling is well characterized, the horizon size at decoupling is known. The angle this scale subtends on the sky is dependent on the geodesics the light travels along through the universe. A given scale in a flat (Ω = 1) universe subtends a larger angle on the sky than in an open (Ω < 1) universe.

Figure 1.1: The angular power spectrum of CMB anisotropies. The variance, C_ℓ, of the temperature about the mean for a given mode is plotted against the mode number, ℓ. Both curves are based on a standard CDM model with Ω_0 = 1 and a Hubble constant of 50 km s^-1 Mpc^-1 (h = 0.5). The only difference between the two is the contribution of baryons (Ω_B) to the density of the universe: Ω_B = 0.05 versus Ω_B = 0.01.

Naturally, any other mode that has undergone an integral number of oscillations just prior to decoupling will exhibit high variance as well. These are the odd harmonics on the power spectrum. Their height is clearly dependent on the density of baryons and photons, because they govern the oscillation properties. Modes that are maximally underdense show up as the even harmonics.
The velocity maxima are 90° out of phase with the density maxima, so photons are Doppler shifted by the baryons as they oscillate between compression and rarefaction phases. This contributes to the variance, preventing the spectrum from going to zero between the peaks associated with density extrema. Since recombination did not happen instantaneously, photons had an opportunity to random walk out of overdense regions and into underdense ones, thus wiping out any structure on the smallest scales (high ℓ). This effect is called Silk damping [21, 17].

Different theoretical models predict different shapes for the power spectrum. Therein lies the excitement and promise of CMB research; by measuring the angular power spectrum accurately, we can constrain cosmological parameters, and indeed dismiss some theories altogether.

1.2 The BAM Experiment

The heart of BAM is a differential Fourier transform spectrometer that measures the frequency spectrum of the temperature difference between two spots of size 0.7° separated on the sky by 3.6°. This spectrometer was originally used in the COBRA [10] experiment to confirm the blackbody spectrum of the CMB, results of which can be found in [22]. The first flight of BAM took place at the National Scientific Ballooning Facility (Palestine, Texas, USA) during the summer of 1995, and successfully measured CMB anisotropy at an angular scale of 1.2° [23]. Mounted inside a cryogenic dewar, the spectrometer receives light from an off-axis prime focus telescope, and the whole apparatus is attached to a gondola that is carried by a scientific balloon to an altitude of about 41 km. The telescope is pointed via a three-axis closed-loop servo system that uses guide stars as the reference signal. A ground based computer sends commands to the instrument telling it where to point and when to take data, and it also receives telemetry from the instrument. A schematic of the BAM gondola is shown in Figure 1.2.

BAM's strength is in its ability to measure the frequency spectrum of anisotropies. Since we know that this spectrum must be that of a difference between two points on the sky each characterised by a perfect blackbody (perfect to the 10^-5 level, because that is the upper limit on observed spectral distortions), any deviation would indicate a contaminated signal. Since the only things that can modify the spectrum are interactions with sources that formed after recombination, they are labeled foreground sources. The ability to discriminate between these foreground signals and the underlying cosmological signal is vital to any experiment that measures the CMB.

Figure 1.2: A schematic of the BAM experiment.

One quantity that experimentalists use in describing the effectiveness of their instrument is the sensitivity of the receivers. BAM uses cryogenically cooled bolometers and records the intensity of radiation that falls on them. The measurement is more accurate when the measurement time is longer, the effect being described by

    {\rm Noise} = \frac{S}{\sqrt{t}},    (1.3)

where t is the integration time in seconds, and S is the sensitivity. The BAM 95 flight used bolometers with S ≈ 600 μK s^{1/2}, but with new receivers installed, a four-fold increase in sensitivity is expected for the upcoming flights. Unless stated otherwise, the simulations described within this thesis use S = 164 μK s^{1/2}.

1.3 A Note on Computers

No single part of this thesis could be performed without the aid of a computer, and it would be an oversight not to include details of the algorithms used (although source code will not be given). It is assumed that the reader has a rudimentary understanding of programming and the limitations of speed, memory, and disk space.
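As a first computational example, the radiometer relation of Eq. 1.3 is simple to encode. The sketch below uses the sensitivity assumed in the simulations, S = 164 μK s^{1/2}, and shows how the noise integrates down with observing time:

```python
import math

def noise(S, t):
    """Noise after integrating for t seconds with a detector of
    sensitivity S (in microK s^{1/2}); this is Eq. 1.3."""
    return S / math.sqrt(t)

S = 164.0  # microK s^{1/2}, the value assumed throughout the simulations

# One second of integration gives the raw sensitivity; a one-minute
# integration beats the noise down by a factor of sqrt(60).
print(noise(S, 1.0))   # 164 microK
print(noise(S, 60.0))  # about 21 microK
```

Note that halving the noise requires four times the integration time, which is the crux of the sky-coverage versus time-per-spot trade-off described in the introduction.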
For completeness, all the computations related to the work contained herein are performed on an Intel Pentium Pro 200, with 64 MBytes of RAM and a 2.1 GByte hard-drive. The Linux operating system, with its built-in GNU C compiler, is used as the program development environment.

Chapter 2

Map-making

2.1 Introduction

The BAM experiment collects data by taking temperature differences on the sky. To model the experiment, it thus makes sense that we have a "map" that shows what the sky looks like. Unfortunately, a detailed map of the anisotropy in our universe has never been measured. (COBE produced a full sky map with a resolution of about 10°; since current experiments, including BAM, have a resolution about an order of magnitude smaller, the COBE map is of little use here.) As shown later in the chapter, if such a map did exist, there would be no need to conduct experiments like BAM. Measuring a full sky map of the CMB with good angular resolution is one of the goals of the next generation of cosmic background satellites, MAP and PLANCK. For the purpose of simulating how well BAM can reconstruct parameters of the sky, we need to create a map using a theory of what the sky should look like. At a glance, this procedure may seem unreasonable; the simulation will be able to tell how well BAM could do only if the real sky were similar to the "fake" sky. However, the procedure is relatively insensitive to the choice of angular power spectrum, provided a reasonable model is used.

2.2 Mathematical Framework

The natural coordinate system to use in CMB research is spherical coordinates, since from our perspective the surface of last scattering is a sphere. We are interested in expressing the temperature of the CMB in a given direction in terms of an expansion in modes:

    \Delta T(\theta, \phi) = \sum_{\ell=2}^{\infty} \sum_{m=-\ell}^{\ell} a_{\ell m} Y_{\ell m}(\theta, \phi).    (2.1)

The Y_{\ell m} are the spherical harmonics, and the a_{\ell m} are just scalars that denote the "weight" of a particular (ℓ, m) mode. We will use a_{\ell m} with units of μK. Another convention is to keep a_{\ell m} dimensionless by dividing both sides of Eq. 2.1 by T_0.

Those familiar with spherical harmonics may note the absence of the first two ℓ modes (ℓ = 0, 1) in the above expression. The ℓ = 0 mode represents the mean temperature, T_0 = 2.73 K, and is not included in the expansion because BAM inherently removes it by taking a difference in temperature between two points on the sky. The dipole, or ℓ = 1, mode is also discarded. The Earth is moving relative to the expansion of the universe, which means that via the Doppler effect the CMB appears warmer in the direction of motion, and colder in the direction we are traveling away from. This results in a term that overwhelms the cosmological contribution to the dipole. Of course, since the dipole is not a constant, one would expect that a differential measurement would contain some component of it. However, since the dipole spans 180°, it is safe to conclude that its amplitude is essentially constant over the BAM beam throw of only 3.6°.

2.3 Theoretical Power Spectra

2.3.1 Definition

The temperature anisotropy is derived using the complete set of a_{\ell m}. No theory can predict the amplitude of each individual mode, but theories do quantify the statistics of the set. Inflation dictates that at each ℓ, these a_{\ell m} are Gaussian distributed with zero mean. Because the only other free parameter in Gaussian statistics is the variance, the anisotropy is completely characterised at each scale by

    C_\ell = \langle a_{\ell m}^* a_{\ell m} \rangle = \frac{1}{2\ell + 1} \sum_{m=-\ell}^{\ell} a_{\ell m}^* a_{\ell m}.    (2.2)

The full set of C_\ell is called the angular power spectrum, and as discussed in the previous chapter, each theory predicts a different shape for this curve. If a full sky map of the anisotropies were available, each a_{\ell m} could be calculated by the inverse of Eq.
2.1:

    a_{ℓm} = ∫_S ΔT(θ, φ) Y_{ℓm}(θ, φ) dΩ,   (2.3)

where dΩ is an area element on the 2-sphere. Since the a_{ℓm} are random numbers, there will be an error associated with using Eq. 2.2 to reconstruct the C_ℓ even if they are known precisely. The fractional uncertainty associated with a given C_ℓ is √(2/(2ℓ+1)) and is called the cosmic variance [24, 25, 26, 27]. The effect of incomplete sky coverage is to enhance the cosmic variance by a factor roughly inversely proportional to the amount of sky covered [28]. This additional uncertainty is the sample variance.

2.3.2 The Sachs-Wolfe (Flat) Angular Power Spectrum

One particular form of the power spectrum will be referred to often enough in this thesis that it merits discussion here. Inflation creates an initial power spectrum of fluctuations with an amplitude that scales as a power law in wavenumber,

    P(k) ∝ k^n,   (2.4)

where n is the spectral index. By considering the gravitational well the photons had to overcome after last scattering [29, 19, 30], this can be converted into

    C_ℓ = (4π/5) Q² [Γ((2ℓ+n−1)/2) Γ((9−n)/2)] / [Γ((2ℓ+5−n)/2) Γ((3+n)/2)],   (2.5)

where Q is an overall normalization. The specific case of n = 1 describes a spectrum that is scale-invariant, or "flat". In this case, the angular power spectrum reduces to

    C_ℓ = 24π Q²_flat / [5 ℓ(ℓ+1)].   (2.6)

This form of the C_ℓ is used as a reference for comparing different experiments because, unlike CDM or other more realistic models that account for processing of the spectrum, there is only one parameter, Q_flat, required to interpret the results [31]. It is essentially a measure of the power within the experimental bandpass, or window function, which will be described in the next chapter.

2.4 Constructing a Map of the CMB via Multipole Expansion

Constructing a map of the CMB is straightforward in principle.
We select a set of a_{ℓm} that are drawn from a zero-mean Gaussian deviate with variance C_ℓ and apply Eq. 2.1 for those directions of interest. As simple as this may seem, the procedure is inherently slow; the number of computations required per point on the sky is roughly ℓ², so it is imperative to make the algorithm as efficient as possible. This is done by precomputing as many things as possible before entering the summing loop over ℓ and m. We start by choosing the set of a_{ℓm} and placing them in a lookup table that can be accessed quickly later on. The spherical harmonics themselves contain terms that depend only on (ℓ, m), and these can be precomputed as well. Although the a_{ℓm} and Y_{ℓm} are typically complex quantities, the temperature field is not; we use real-valued spherical harmonics in our expansion. These differ in form, but not in property [30], from the conventional definition of spherical harmonics as described in Abramowitz & Stegun [32]. The complete set of real-valued spherical harmonics is

    Y_{ℓm}(θ, φ) = √[ (2ℓ+1)/(4π) · (ℓ−|m|)!/(ℓ+|m|)! ] P_ℓ^{|m|}(cos θ) × { 1, m = 0; √2 cos mφ, m > 0; (−1)^{|m|} √2 sin |m|φ, m < 0 },   (2.7)

where the P_ℓ^{|m|} are the associated Legendre polynomials. It should be clear by looking at these equations that any term that does not depend on angle can be precomputed. That way, only the P_ℓ^{|m|} and cos/sin terms have to be calculated for each point. Computing the Legendre polynomials from either their integral form or from an expansion [32] can be very labour intensive. Numerical Recipes [33] describes how computing these polynomials can be performed more efficiently using the recurrence relation

    (ℓ − m) P_ℓ^m(x) = x (2ℓ − 1) P_{ℓ−1}^m(x) − (ℓ + m − 1) P_{ℓ−2}^m(x),   (2.8)

and a starting value

    P_m^m(x) = (−1)^m (2m − 1)!! (1 − x²)^{m/2},   (2.9)

where n!! is the product of all odd integers less than or equal to n. These polynomials tend to become very large at increasing ℓ, and the floating point limit of a typical computer is reached at ℓ ≈ 150. In order to continue the expansion, the computer must be told to use extended-precision math².

²In the C language, this simply means using the variable type long double.

The only other caveat is to watch the definition of angles in the expansion. The limits on the angles used in Y_ℓ^m(θ, φ) are 0 ≤ θ ≤ π and 0 ≤ φ < 2π. The position angle on the sky, however, is usually quoted using equatorial coordinates, with −π/2 ≤ δ ≤ π/2 and 0 ≤ α < 2π, where (α, δ) are the right ascension and declination of a target spot on the sky.

The above equations can be used to compute the temperature anisotropy at any point on the sky with perfect resolution. A real experiment, however, does not fare as well due to the finite size of the telescope beams. Most CMB anisotropy experiments, BAM included, assume³ a Gaussian beam profile. In other words, as one looks at an angle θ away from a point source, the response falls off as exp[−θ²/2σ²], where σ is related to the FWHM of the beam by σ = FWHM/√(8 ln 2). The beam response can be accounted for in the spherical harmonic expansion by simply replacing a_{ℓm} with

    a′_{ℓm} = a_{ℓm} e^{−ℓ(ℓ+1)σ²/2},   (2.10)

provided the beam is not too large [34]. The exponential term provides a cutoff that can be used to set the maximum ℓ in the multipole expansion.

Using all of the above information, we are now able to create a set of maps based on various models of CMB anisotropies. Full sky maps are usually shown in an Aitoff form, which is an equal-area projection that maps the entire sphere onto a flat surface [30]. Figure 2.1 illustrates several full sky maps, and Figure 2.2 plots the power spectra used to calculate them. The picture of the earth is shown to help the reader understand the nature of the Aitoff projection. The COBE data, smoothed on a 10° scale, is also provided as an example of real data. The four theoretical maps are all constructed using the same random number seeds so that they all contain similar features.
A CDM spectrum contains more small scale structure than a flat power spectrum, and this is clearly evident by comparing the centre two panels. Smoothing the maps with a 10° beam wipes out all but the largest scales, making it impossible to discriminate between CDM and flat power spectra.

Despite the attention to efficiency, the method is too slow to make several realizations of a sky, a task we must perform to effectively Monte-Carlo the experiment. To produce a full sky map at 0.7° resolution using a maximum ℓ of 400, the CPU time is roughly 18.5 hours per realization. Of course, if it is known ahead of time exactly which points the experiment will sample, then the expansion need only be performed at those locations, sacrificing a complete map of the sample area. However, if the observing strategy is changed even slightly, the data need to be recomputed. The simulations described later in this thesis can sample about 30,000 points, which translates to about 9 hours of computation. This is roughly the amount of time the real experiment would spend taking data.

³Or measure directly.

Figure 2.1: Aitoff projections of various maps. The CMB maps are all COBE normalized, and bright (dark) denotes hot (cold) regions. The temperature limits are given in brackets after the description. The following describes the maps from left to right going down the page. a) The elevation of points on the earth's surface. b) COBE map of CMB anisotropy at 10° resolution. [−100 μK, 100 μK] c) Theoretical map of the CMB with a flat power spectrum and 0.7° resolution. [−275 μK, 275 μK] d) Theoretical map of the CMB with a standard CDM power spectrum and 0.7° resolution. [−200 μK, 200 μK] e) Theoretical map of the CMB with a flat power spectrum and 10° resolution. [−75 μK, 75 μK] f) Theoretical map of the CMB with a standard CDM power spectrum and 10° resolution. [−75 μK, 75 μK]

Figure 2.2: The effect of beam smearing on the angular power spectrum. The ordinate is scaled such that 6C₂ = 1. Plotted are flat (dashed line) and CDM (solid) spectra with no smearing (a), convolved with a BAM sized beam (0.7°) (b), and finally with a 10° beam (c), which clearly shows how difficult it is to distinguish between spectra when the sky is observed with such a large beam.

2.5 Small Area Maps of the CMB using a Fast Fourier Transform (FFT)

If we restrict ourselves to smaller patches of the sky, a technique using the two-dimensional Fourier transform is available. This is a particularly appealing alternative, because the Fast Fourier Transform algorithm is easily implemented on a computer, and can provide detailed CMB maps with just a few seconds of CPU time. It should be relatively easy to get a feel for why this should work. The spherical harmonics, after all, are just a 2-D decomposition into modes of a field projected on the surface of a sphere. A 2-D Fourier transform decomposes a field on a flat, Cartesian surface. For small enough areas, a portion of a sphere is essentially flat, so it stands to reason that there would be some relation between the FFT and the spherical harmonics for small patch sizes.

There are two approaches to derive the transformation. The first is to take the three-dimensional radiation power spectrum, P_rad(k), and, using the small angle approximation, integrate out the z component to get the 2-D field [35]. The alternative approach begins by mapping the celestial sphere onto a plane tangent to the center of the observation region. Points at a distance θ from this center are mapped onto circles of radius r = 2 sin(θ/2). Clearly, r ≈ θ for small angles.
This prescription uses the Wiener-Khinchine relationship to translate between the autocorrelation function of the temperature field and the power spectrum, and can be used to create maps from the C_ℓ directly [36, 37]. Both procedures yield the same result:

    ΔT(θ, φ) = ∫ a(k) e^{i k·x} d²k,   (2.11)

which is accurate for large k and small x (distance from the center of the map). This equation is simply a two-dimensional Fourier transform, and a(k) is interpreted as the 2-D power spectrum of the temperature field. For a temperature field with an angular size of Δ, ℓ = (2π/Δ)·k, and a(k) is drawn from a Gaussian random variable of variance C_ℓ, where ℓ = √(ℓ_x² + ℓ_y²). It is also given a random phase between 0 and 2π from a uniform deviate. The approximation used to develop this map-making procedure is inaccurate at low ℓ, because using a box in k-space causes ringing in x-space [38]. For the continuous Fourier transform, there is no set upper limit on ℓ, although for most theories the C_ℓ fall to zero around ℓ ≈ 3000.

To implement this on a computer, the continuous FT must be recast into a discrete form. The details are unenlightening [38], and the result is

    ΔT(θ, φ) = Σ_{k_x=−N/2}^{N/2−1} Σ_{k_y=−N/2}^{N/2−1} a(k) e^{2πi(k_x x + k_y y)/N}.   (2.12)

The above equation can be computed directly, but if one restricts the grid size to be a power of two, the Fast Fourier Transform (FFT) algorithm can be used to expedite the calculation. There are many ways to define the convention of the FFT. Many take the bounds on the sum to be from 0 to N−1. This is fine, as long as one remembers that in the FFT framework, values 0 through N/2 are positive frequencies, and N/2 through N−1 are negative. The easier approach is to define the bounds as −N/2 to N/2−1, which makes it clear which frequencies are negative and which are positive. This is especially important when using the reality conditions that must be enforced when performing the FFT.
In general, the frequency grid will be complex, but the final FFT map of the sky should be purely real. The reality condition is simply

    a(k) = a*(−k).   (2.13)

A simple way to implement this condition is to fill up quadrants I and II (the upper half of the frequency plane) and then apply the reality condition to fill up the rest. Most FFT algorithms require the reordering of the quadrants in both the input and output arrays, and so care should be taken to ensure the map is presented correctly.

Including the smearing effects of a Gaussian beam is straightforward, because a convolution in real space is simply a multiplication in the frequency domain. This involves replacing a(k) by

    a′(k) = a(k) e^{−(ℓσ)²/2},   (2.14)

which is equivalent to Eq. 2.10 for ℓ ≫ 1.

For the simulations that will be described later in this thesis, a map with Δ = 60° is required. The error in using the small angle approximation for this scale is only 1%; note, however, that 60° corresponds to a minimum ℓ of only 6, which is close to the breakdown of the approximation used to derive the Y_{ℓm}-to-FFT conversion. We will find in the next chapter that BAM is relatively insensitive to these low ℓ modes, and so we can safely dismiss them. The maps used in these simulations start at ℓ_min = 12.

When sampling the maps, it is necessary to interpolate between points because samples rarely fall on the center of a pixel. Depending on the interpolation algorithm, this can cause a small error in the temperature estimate. To minimize this error, a grid size should be chosen that results in pixels much smaller than the beam size. Because CPU time is no longer an issue in creating maps, this restriction can be easily accommodated. For the BAM simulations in this thesis, a grid size of 512 × 512, producing 0.1° pixels, is used. The CPU time is roughly 3 seconds per realization, which is more than three orders of magnitude faster than the method of multipole expansion. Figure 2.3 shows six results from applying the method. Again, the random number seeds were kept the same in order to facilitate comparison between different maps. Just as with those constructed using the multipole expansion, the CDM maps demonstrate more small scale power, except when convolved with a 10° beam, which, as expected, looks just like a flat model.

Figure 2.3: Six maps of the CMB created using the FFT algorithm. Each is a 60° × 60° realization with 512 × 512 pixels using the same random number seed so that the structure can be compared. The top three panels are derived using a flat power spectrum, and the lower three with a standard CDM model. From left to right, the beam size (FWHM) is 0.1°, 0.7°, and 10.0°.

Chapter 3

Scanning Strategy 1: Basics

In order to sample the maps described in the previous chapter, it is necessary to choose a scanning strategy that governs what area of the map to take data from as a function of time, and how long the integration needs to be to ensure acceptable signal to noise ratios. The concepts required to help choose a strategy are developed in this chapter, and although the discussion can be applied to other CMB experiments, BAM will be used as a specific example.

3.1 Window Functions

In order to get a good estimate of the amplitude of the anisotropy at a given ℓ mode, it is necessary to sample as many cycles of that mode as possible. With incomplete sky coverage, an experiment is clearly unable to sample every cycle of that mode on the sky, and will in fact be insensitive to modes with scales larger than that of the scan area. As we saw in the previous chapter, the beam shape smears out power on small angular scales, and clearly influences the high ℓ modes. In addition, the observing strategy adopted by an experiment must be folded into the analysis.
The effects of the latter two are described by the window function [39], which characterises the range of the angular power spectrum that the experiment is able to study. Since most theories predict Gaussian statistics, the angular power spectrum can be completely described by the two-point correlation function, ⟨ΔT_α ΔT_β⟩, where α, β denote different spots on the sky. Using the orthogonality of the spherical harmonics [32] and Eq. 2.1, it is easy to show that

    C^th_{αβ} = ⟨ΔT_α ΔT_β⟩ = (1/4π) Σ_{ℓ=2}^{∞} (2ℓ+1) C_ℓ P_ℓ(n̂_α · n̂_β) e^{−ℓ(ℓ+1)σ²},   (3.1)

where n̂ is a unit vector in the direction of the data point, θ_αβ is the angle between the two points, and the P_ℓ are the Legendre polynomials. The exponential term at the end is added to account for the Gaussian beam shape, as per Eq. 2.10. For an experiment with N data points, we compute the above for each pair of points and construct C^th, the theoretical correlation matrix. It completely describes the relationships between the data, and depends on the model of the anisotropy (the C_ℓ), the positions on the sky that the experiment studies (the Legendre polynomials), and the beam response function (the exponential). The last two terms can be considered as a weighting function, W_ℓ, for the C_ℓ, which is exactly what we are interested in:

    W_ℓ(θ_αβ) = P_ℓ(n̂_α · n̂_β) e^{−ℓ(ℓ+1)σ²}.   (3.2)

This is the simplified case for an experiment that measures the temperature of single spots on the sky. Generally, a measurement consists of a sum of M temperatures with weights w_α sampled in the vicinity of the point of interest:

    ΔT_i = Σ_{α=1}^{M} w_α ΔT(n̂^i_α),   (3.3)

which changes the window function and correlation matrix to

    W_ℓ(θ_ij) = Σ_{α}^{M} Σ_{β}^{M} w_α w_β P_ℓ(n̂^i_α · n̂^j_β) e^{−ℓ(ℓ+1)σ²},   (3.4)

    C^th_{ij} = (1/4π) Σ_ℓ (2ℓ+1) C_ℓ W_ℓ(θ_ij).   (3.5)

If the sampling is done smoothly instead of "chopped" from point to point, the sums in the above equations become integrals.
BAM is configured to perform a 2-beam chop that takes the difference between two spots on the sky separated by γ = 3.6°. When the beams are weighted as ±1, the measurement is called a single difference. If we denote the positively and negatively weighted beams as α = 1 and α = 2 respectively, and use the positive beam as the reference, Eq. 3.4 expands to

    W_ℓ(θ_ij) = [P_ℓ(n̂^i_1 · n̂^j_1) − P_ℓ(n̂^i_1 · n̂^j_2) − P_ℓ(n̂^i_2 · n̂^j_1) + P_ℓ(n̂^i_2 · n̂^j_2)] e^{−ℓ(ℓ+1)σ²}.   (3.6)

As in the single beam case, the window function consists of the Legendre polynomials and the beam response function. The form of the window function depends on the angular separation between the data points i and j, but the case i = j is particularly important. ⟨ΔT_i ΔT_j⟩ reduces to ⟨ΔT_i²⟩ = ΔT²_rms, the variance of the temperature anisotropy, which is what we are trying to measure. For i = j, θ_ij is zero, and W_ℓ is defined as the window function at zero lag, for which Eqs. 3.2 and 3.6 reduce to

    W_ℓ = e^{−ℓ(ℓ+1)σ²},   (3.7)

    W_ℓ = [2 − 2P_ℓ(cos γ)] e^{−ℓ(ℓ+1)σ²}.   (3.8)

Figure 3.1 is a plot of the zero lag window functions relevant for the BAM experiment. The single beam window function gives nearly equal weight to all modes at large angular scales, but attenuates rapidly when the mode size is on the order of the beam size. The single difference window function also suffers from this same high ℓ cutoff, but has reduced sensitivity at low ℓ as well. Any mode with a wavelength larger than the beam throw will contribute the same signal in both the positive and negative beams, and thus be subtracted off. The effects of changing the beam size and beam throw by roughly 50% are shown in Fig. 3.2. Clearly, the beam size is responsible for the high ℓ cutoff, and the size of the beam throw controls the low ℓ cutoff. The effect of adding uncertainty to the beam parameters on the ability to deduce spectral parameters will be investigated later.
Figure 3.2: Affecting the window function by changing the parameters of the beams.

Although the single difference window function peaks at ℓ = 59, the value usually given when reporting the scale of sensitivity is the centroid of the window function, calculated by

    ℓ_effective = [Σ_ℓ ℓ (2ℓ+1) C_ℓ W_ℓ] / [Σ_ℓ (2ℓ+1) C_ℓ W_ℓ].   (3.9)

Except for a factor of 4π that will be discussed in Section 3.3, the term (2ℓ+1)C_ℓW_ℓ is the contribution to the measured power from that particular multipole. This equation weights the C_ℓ with the window function to give a scale relevant to the underlying power spectrum. For example, if the power spectrum were zero for ℓ < 100, then clearly the location of the secondary peak (ℓ ≈ 170) of the single difference window function would be a better description of the scale of sensitivity. The accepted way to report an experiment's scale of sensitivity is to calculate Eq. 3.9 using a flat power spectrum, which yields ℓ_eff = 73 for the BAM single difference measurement.

3.2 Modifying the Observing Strategy to Choose the Scale of Sensitivity

If BAM took several single difference measurements on the sky with reasonable signal to noise ratios, it would be able to say something about the power spectrum around ℓ_eff = 73. However, by choosing a different description for a single datum, and altering the observing strategy to compensate, it is possible to study other areas of the power spectrum. We have already seen how decreasing the size of the beam throw attenuates more large scale power (low ℓ). BAM was designed with a specific beam throw, which cannot be altered without significant redesign of the instrument. Consider, however, a scan where the positive beam is moved across the sky in steps of 1.2°, with the weight of each single difference alternating between ±1. This yields the pattern shown in Figure 3.3. The numbered bars below the beam spots denote a single difference measurement.
Figure 3.3: Altering the window function by changing the scan pattern. "+" denotes the positive beam, and "−" the negative beam.

The first and third single differences have a positive weight, and the second is negative. After three measurements, the positive beam has moved exactly one beam throw. What initially started as three overlapping single differences with a beam throw of 3.6° now appears as three non-overlapping single differences with a beam throw of 1.2° (labeled a, b, c on the figure). This effectively moves the scale of sensitivity to smaller angular scales, and plotting the associated zero lag window function verifies this expectation. This new datum is said to be "synthesized" from the original data.

Figure 3.4: Window function for the 6-beam scan (solid), and the original BAM single difference (dotted). The beam pattern is also shown.

The important concept to recognize is that the data set can be analyzed as a set of single differences and as a 6-beam experiment, thus providing information about the power spectrum at two different scales. This extra information comes with a price, however. BAM is constrained to less than 12 hours of observing time per flight. A scan such as the 6-beam chop spends a lot of time in one part of the sky, and thus the 2-beam version of the data samples fewer modes on its scale of sensitivity than it would if the observing strategy were purely a 2-beam chop. The area under the window function, when weighted with the C_ℓ, is the total power the experiment can measure. It is clear from Figure 3.4 that the 6-beam chop has a lot less power associated with it than the single difference case.
The integration time required to extract the anisotropy with reasonable S/N therefore needs to be higher than that required to estimate the variance at the 2-beam scale. On the other hand, the resolution is much better for the 6-beam chop, as the window function peaks sharply at ℓ ≈ 170.

The 6-beam chop is just one example of an alternative observing strategy. By changing the spacing of the steps and/or the relative weights of each single difference datum, several scales can be examined. For instance, the Saskatoon [40] CMB experimental group uses a smooth scanning strategy that allows them to synthesize window functions for 23 different multipole bands. What started as an inquiry into the experiment's scale of sensitivity has led us to our first important observing strategy consideration: the number of modes to study.

3.3 The Importance of Off-Diagonal Components in the Correlation Matrix

In the last section we have shown how the window function is applied to help determine the amplitude of the anisotropy at a given scale. It is reasonable to assume that since the window function spans a range of multipoles, it may be possible to measure the shape of the power spectrum within the bandpass.

The RMS temperature measurement was obtained by computing the window function at zero lag; in other words, by computing the correlation of each data point with itself. Consider, however, correlating a data point with another spaced a distance θ away. As a specific example, we choose the BAM single difference measurement as the chop, and an observing strategy that simply involves taking measurements along a scan axis with the positive beam moved by θ between samples. Note that we are not synthesizing a new datum as we did with the 6-beam chop in the last section. Rather, we are restricting our attention to the single difference datum and how it correlates with other single difference data points.
Figure 3.5: Correlating data points taken at a given angle, θ, apart. The shaded circles represent a data point taken at time t_i, and the dashed circles a single difference taken at a later time.

By using Eq. 3.6 with j = i ± 1, the window function for this correlation becomes

    W_ℓ(θ) = [2P_ℓ(cos θ) − P_ℓ(cos[γ − θ]) − P_ℓ(cos[γ + θ])] e^{−ℓ(ℓ+1)σ²}.   (3.10)

The W_ℓ(θ = 2.6°) case was chosen for the left panel of Figure 3.6, because it has its first zero crossing at around the peak of the zero lag window function. This point acts as a "lever arm", about which the window function is sensitive to the slope of the power spectrum. To illustrate this further, the window function has been multiplied by (2ℓ+1)C_ℓ/4π to give the multipole by multipole contribution to ΔT²_rms (see Eq. 3.5) and displayed in the right panel of Figure 3.6. A flat power spectrum is used for the C_ℓ. Since the area beneath the curve is nearly zero, any deviation in the slope of the power spectrum will show up as an excess positive or negative C^th, depending on the sign of the slope (a negative slope gives a positive correlation).

Figure 3.6: Using off axis window functions to increase the sensitivity to the shape of the angular power spectrum. The left panel shows the single difference BAM window function at zero lag (solid) and at θ = 2.6° (dashed). The multipole by multipole contribution to the correlation is shown in the right panel. The θ = 2.6° curve is shaded in order to make it clear that the total area (power) under the curve is nearly zero. The ordinate is labeled as temperature squared but is scaled down by the overall normalization of the power spectrum, making it dimensionless.
By looking at the off-diagonal (C^th_{ij}) correlations, we can infer something about the shape of the power spectrum near ℓ_eff. To proceed, we use the prescription developed by White [41] and parameterize the power spectrum by Taylor expanding it about a lever arm, ℓ_0:

    C_ℓ = [ℓ_0(ℓ_0+1) / ℓ(ℓ+1)] C_{ℓ_0} [1 + m ln(ℓ/ℓ_0) + …],   (3.11)

where C_{ℓ_0} is the amplitude of the power spectrum at ℓ_0 and m is the slope of the power spectrum in the vicinity of ℓ_0. Higher order terms can be expanded as well, but the sensitivity to these is much lower, and they are measured with difficulty. Using this parameterization and the window function given by Eq. 3.10, we can calculate the correlation between two points and estimate the sensitivity to the shape parameters:

    C^th(θ) / C^th(0, m = 0) = A + B m.   (3.12)

A and B are the sensitivity of the off-diagonal correlation to the amplitude and slope of the power spectrum respectively. The equation is normalized by a flat power spectrum at zero lag. Figure 3.7 plots the A, B coefficients as a function of the separation angle. The maximum sensitivity to slope occurs at 2.6°, which, not surprisingly, is the same point where the sensitivity to the amplitude of the power spectrum is zero. The point of maximum sensitivity is not singular, despite its appearance. The sensitivity and slope parameters flip sign at this point, simply illustrating that the negative "lobe" of the off-diagonal window function carries more weight than the positive.

3.4 A First Attempt at an Observing Strategy

The sensitivity to slope is still high at θ = 2.4°, so it is possible to adopt the 6-beam chop, sampling every 1.2°, to obtain another scale to study and still retain slope information at the single difference scale (every second data point gives the 2.4° scale). Repeating this off-axis analysis on the 6-beam chop shows a very weak dependence on the slope. This
is to be expected, because the window function is too narrow to notice an appreciable change in the slope of the power spectrum within it. This observing strategy would then offer the potential to measure the amplitude of the temperature at two different scales, and the slope of the power spectrum at one scale. There is a tradeoff to consider now: does spending time filling in the extra points to synthesize the 6-beam chop detract from the data obtained for the single differences?

Figure 3.7: Determining the optimal correlation angle to get maximum sensitivity to the slope of the power spectrum. This example expands the power spectrum about the centroid of the BAM single difference window function, ℓ_0 = 73. The results show the strongest sensitivity to slope occurs at θ = 2.6°.

3.4.1 A Useful Rule of Thumb

It has been shown [42, 43, 44] that the best observing strategy is one where the S/N per beam sized pixel is of order unity. In other words, an experiment is better off observing a lot of the sky with lower signal to noise than a small part of it with high S/N. This should make intuitive sense, because without measuring within a large enough area, it is impossible to tell whether or not the experiment is sampling a local hot or cold region on the sky.

In the 1995 flight, BAM sampled data at a constant elevation angle near the meridian, using the earth's rotation to chop on a ring of constant declination, as shown in Figure 3.8 below. If it were to adopt the 1.2° chopping strategy described in the last section using the proposed new bolometers, the integrating time available per point would be roughly 14 minutes, resulting in a S/N of about 8. This result assumes a flat spectrum with Q_flat = 15 μK.
We can expect higher levels (> 11) using a more realistic CDM model and the COBE normalization of Q_flat = 18.2 μK. With such a high signal level, we can afford to concoct a new strategy that covers more of the sky at the expense of increased noise per point. The approach we adopt here uses wide azimuthal swings centered on the meridian. The size of the swing is limited by the star tracker's field of view, which is currently 10° but anticipated to be upgraded to perhaps 30° for the next flight. It is desirable to have the endpoints of the scan at δ = 70° so that the BAM-95 data can be used to close the arc. Using astrometric formulas (such as those found in the excellent book by Duffett-Smith [45]) we can calculate the size of the arc as a function of scan size. We are free to choose the desired S/N level and set the integration time accordingly, although there is a natural choice suggested by the geometry of the scan. At a declination of 70°, the earth turns through one beam width in 8 minutes, and we can use this to fix the integration time required per point such that each subsequent scan starts precisely one beam width away from the last. The results are summarized in the table below.

Figure 3.8: Two types of scans for the next BAM flight with a 30° azimuthal sweep: a 1.2° chop and a smooth scan.

    Scan size   δ_0      N    S/N   Area (square degrees)   N_pix
    30°         73.53°   15   1.6   404.4                   1051
    20°         71.43°    9   2.1   173.3                    450
    10°         70.34°    4   3.2    42.4                    110

Table 3.1: Sampling the CMB using azimuthal scans. Each point in the scan gets sampled twice as the scan is swept from one end of the ring to the other and back again. The scan peaks at δ_0, and the endpoints are at δ = 70.0°. N is the number of points taken along the scan, and N_pix is the number of beam sized pixels contained within the annulus which would be swept out by the scan in 24 hours.
3.4.2 Smooth Scans Across the Sky

Again, using a more realistic power spectrum, we can scale the S/N ratios up by around 20%, but even with the widest scan, it is greater than one. These are the S/N ratios per point, but with such an azimuthal scan, most of the data on successive scans overlap the previous ones. Recall that it is only the S/N per beam sized pixel that needs to be of order unity, and so it is feasible to drop the integration time per point and scan smoothly in azimuth, filling in the points between the 1.2° chop points. Since there are roughly 2 BAM beam widths between each chop point, the S/N values in Table 3.1 should be reduced by about a factor of 3.

The scan velocity across the sky is constrained by the physical limits of the servos, and by the finite time required to measure one frequency spectrum using the interferometer. One beam width on the sky at the elevation angle BAM studies corresponds to about 1 degree in azimuth. Using the 30° limit as the worst case, the velocity needs to be at least 7.5 arcminutes per second, which corresponds to motion of about 0.1 beam widths during the time it takes to measure one interferogram (0.6 seconds). It is advantageous to scan faster than this for two reasons. First, the faster the scan, the more certainty one has in covering the entire scan area, especially at the endpoints. Secondly, slow scans are more sensitive to changing atmospheric conditions, which can introduce additional uncertainty in the data. These simulations assume a constant velocity scan of 19 arcmin/second, which allows several complete sweeps before the earth has turned one beam width. This is about the limit of speed we would be comfortable with, because the beam moves by roughly a third of its width in the time it takes to measure an interferogram.
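The velocity figures quoted above follow from simple arithmetic; a quick check, reading the 7.5 arcmin/s minimum as one full round trip of the 30° sweep per beam-crossing time (the 8 minute crossing time and 0.6 second interferogram time are taken from the text):

```python
beam_fwhm = 0.7            # BAM beam width on the sky, degrees
turn_time = 8 * 60.0       # seconds for the earth to turn one beam width
interferogram = 0.6        # seconds to measure one interferogram

# Minimum speed: one full round trip of the widest (30 degree) sweep in the
# time the earth turns one beam width.
v_min = 2 * 30.0 / turn_time * 60.0          # arcmin per second
print(v_min)                                 # 7.5

# Beam motion per interferogram at the minimum and at the adopted rate.
print(v_min / 60.0 * interferogram / beam_fwhm)    # about 0.1 beam widths
v_scan = 19.0 / 60.0                               # degrees per second
print(v_scan * interferogram / beam_fwhm)          # roughly a third of a beam width
```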
Because the high declination ring is oversampled, it is of benefit to use a variable velocity scan that spends more time on the outer tails of the scan. To some degree, this will happen regardless, because the servo cannot switch directions instantly at the endpoints. The electronics that control the pointing do not currently support variable velocity scans, although the capability is being investigated. We will use constant velocity scans in these simulations and assume that the conclusions we draw on which observing strategy to use will not vary significantly.

We are pushing the technical limits using such a strategy, especially considering that BAM needs to observe in different frequency bands to verify the spectrum of the CMB. The S/N values in the table need to be divided up equally for each band being studied. However, the potential reward of increasing the number of \ell-bands is a compelling argument favoring the smooth scan. In addition, the smooth scan observes the same point on the annulus from several different angles, which is a requirement for constructing a map of the CMB within the scan area [46, 47, 48].

Chapter 4

Statistical Analysis of the Data

The last section developed our intuition about what makes an observing strategy more robust. It is now necessary to analyze the data and quantify how well the observing strategy is able to recover the parameters of the power spectrum. We start with the most common analysis tool used when confronted with a CMB data set. The idea is to choose a model for the data and construct an estimator which, when compared to the actual data set, gives a likelihood for the model parameters. The maximum likelihood occurs at the parameter value that makes the model and data match the closest. An observing strategy is considered "good" if it constrains the parameters within some acceptable limit.
The estimates of the uncertainties on the parameters are strictly true only for the particular data set used in the analysis. To test how robust an observing strategy is to different realizations of the CMB sky, a Monte-Carlo analysis must be performed.

4.1 Basic Maximum Likelihood Theory

We start by taking N noise-free measurements of a Gaussian random process that has some variance, \sigma^2, that we want to find. For simplicity, we assume that the mean is zero. The model of the data is just the simple Gaussian probability function:

    f(d_i | \sigma) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{d_i^2}{2\sigma^2}\right),   (4.1)

where d_i is the i-th measurement of the process. The LHS is read as "the probability of obtaining a value d_i given the known value, \sigma, is...". Provided the measurements are independent, we can write the joint probability distribution function (PDF) as:

    P(d_1, ..., d_N | \sigma) = \prod_{i=1}^{N} f(d_i | \sigma) = \left(\frac{1}{2\pi\sigma^2}\right)^{N/2} \exp\left(-\sum_{i=1}^{N} \frac{d_i^2}{2\sigma^2}\right).   (4.2)

This reads as, "given the value of \sigma, the probability of obtaining this particular data set is..." We are more interested in the answer to a slightly different question: "given this data set, what is the most probable value of \sigma?" Using Bayes' theorem, we have

    P(\sigma | \mathrm{data}) = P(\mathrm{data} | \sigma) f(\sigma) = L(\sigma).   (4.3)

Here L(\sigma) is called the likelihood of \sigma, and f(\sigma) is called the prior, which accounts for any prior knowledge we have about the parameter being tested. We will address this in more detail later, but for now we assume f is constant. To find the maximum likelihood value of \sigma, we simply set the derivative of L(\sigma) to zero and solve for \sigma. When the algebra is complete, we see that the most likely value is

    \sigma^2 = \frac{1}{N} \sum_{i=1}^{N} d_i^2,   (4.4)

which is not much of a surprise. Alternatively, we can compute Eq. 4.2 across a range of \sigma to find which is the most likely value. Note that by dividing both sides of Eq. 4.4 by \sigma^2, we derive the "goodness of fit" condition, which is described by

    1 = \frac{1}{N} \sum_{i=1}^{N} \frac{d_i^2}{\sigma^2}.   (4.5)

The likelihood fit indicates the most likely value of the parameter given by the model.
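The closed-form estimator of Eq. 4.4 can be checked against the alternative grid evaluation of Eq. 4.2 mentioned above; a small sketch with simulated data (the grid range is arbitrary):

```python
import math
import random

random.seed(1)
sigma_true = 2.0
data = [random.gauss(0.0, sigma_true) for _ in range(10000)]

# Closed-form maximum likelihood estimate, Eq. 4.4.
sigma2_ml = sum(d * d for d in data) / len(data)

# The same answer recovered by scanning the log of Eq. 4.2 over a grid
# of trial variances.
def loglike(s2):
    return -0.5 * len(data) * math.log(2.0 * math.pi * s2) \
           - sum(d * d for d in data) / (2.0 * s2)

grid = [1.0 + 0.01 * k for k in range(600)]
best = max(grid, key=loglike)
print(sigma2_ml, best)    # both close to sigma_true**2 = 4
```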
The RHS of Eq. 4.5, the reduced \chi^2, is a measure of how well the model describes the data, and is of order unity for a "good fit". A more detailed treatment of the maximum likelihood technique is available in [49, 50].

4.2 The Maximum Likelihood Method Applied to Anisotropy Data

We saw in the last chapter that each measurement of the CMB is correlated with the others, and thus the concept of independent measurements does not apply as it did in the above derivation. It is necessary to use a multivariate Gaussian that describes the correlations between the data. This is precisely the C_{ij} matrix derived earlier [51], although in this section we will change notation slightly and call it C^S_{ij}, the signal covariance matrix (we can use the terms "covariance" and "correlation" interchangeably, provided we assume that the mean is zero), to set it apart from the noise covariance matrix, C^N_{ij}, which we will discuss shortly. The form of the likelihood function is similar to that in Eq. 4.2; instead of \sigma, the estimator is given by the vector a, a set of parameters characterizing the power spectrum:

    L(a) = \frac{1}{(2\pi)^{N/2} |C^S|^{1/2}} \exp\left(-\frac{1}{2} \sum_{ij} d_i (C^S)^{-1}_{ij} d_j\right) f(a).   (4.6)

No measurement will be free from some level of uncertainty. To account for the noise in the data, it is necessary to compute the noise correlation matrix C^N_{ij} and incorporate it into the estimator:

    K_{ij} = C^S_{ij} + C^N_{ij}.   (4.7)

4.3 Signal to Noise Eigenmode Decomposition

Note that unlike the single variance Gaussian described earlier, we cannot compute the derivative and solve for the maximum likelihood in closed form. We are forced to compute the likelihood for every value within the parameter space. Inverting the K matrix is an O(N^3) process, and therefore a daunting task if it is large. So daunting, in fact, that it is impractical to do using direct methods (the method of Singular Value Decomposition, for instance), and alternate strategies, such as the Karhunen-Loève (K-L) eigenmode analysis technique, must be employed [52, 53].
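The brute-force evaluation that these alternate strategies avoid can be sketched as follows; the covariances, amplitude parameter, and grid here are toy stand-ins for illustration, not BAM's matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the signal and noise covariances of N correlated
# measurements entering the estimator K of Eq. 4.7.
N = 50
sep = np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
C_signal = np.exp(-sep / 10.0)        # correlated sky signal
C_noise = 0.5 * np.eye(N)             # uncorrelated instrument noise

d = rng.multivariate_normal(np.zeros(N), C_signal + C_noise)

def loglike(amplitude):
    """Direct evaluation of the multivariate Gaussian log-likelihood.
    Every call pays for an O(N^3) solve and determinant of K; this is
    the cost the eigenmode decomposition is designed to avoid."""
    K = amplitude * C_signal + C_noise
    sign, logdet = np.linalg.slogdet(K)
    return -0.5 * (logdet + d @ np.linalg.solve(K, d))

amps = np.linspace(0.2, 3.0, 57)
best = amps[np.argmax([loglike(a) for a in amps])]
print(best)    # most likely signal amplitude for this realization
```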
The K-L method transforms the signal and noise correlation matrices such that they are diagonal, thus reducing the matrix inversion to an O(N) process [54]. We employ the notation and procedure adopted by Knox [44] and start by transforming the data via

    d'_i = R_{ij} [(C^N)^{-1/2}]_{jk} d_k,   (4.8)

where R_{ij} is the rotation matrix that diagonalizes

    M = (C^N)^{-1/2} C^S (C^N)^{-1/2}.   (4.9)

By definition, R is the matrix of eigenvectors of M, and the similarity transform R M R^{-1} yields a matrix with the eigenvalues \lambda_i along the diagonal (and zero elsewhere). We must now rotate K_{ij} to reflect the transformation of the data:

    K' = (R [(C^N)^{-1/2}]) K (R [(C^N)^{-1/2}])^T
       = R [(C^N)^{-1/2}] K [(C^N)^{-1/2}]^T R^T
       = R [(C^N)^{-1/2}] K [(C^N)^{-1/2}] R^{-1}
       = R [(C^N)^{-1/2}] C^S [(C^N)^{-1/2}] R^{-1} + R [(C^N)^{-1/2}] C^N [(C^N)^{-1/2}] R^{-1}
       = R M R^{-1} + R R^{-1}
       = \delta_{ij} (\lambda_i + 1).   (4.10)

This derivation depends on the fact that (C^N)^{-1/2} is symmetric, and is therefore equal to its transpose. Because R is a matrix of eigenvectors of a real, symmetric matrix, it is orthogonal and satisfies R^T = R^{-1}. The final equality in the above equation shows explicitly that in the new basis, the matrix we need to invert is reduced to diagonal form. The likelihood equation to solve reduces to

    L(a) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi[\lambda_i(a) + 1]}} \exp\left(-\frac{d'^2_i}{2[\lambda_i(a) + 1]}\right) f(a).   (4.11)

Solving for the eigenmodes of a matrix is also an O(N^3) process, so on the surface it may seem that we are no better off than adopting a more conventional matrix inversion algorithm. However, any parameter that is linear in \lambda_i does not require repeated calculation of the eigenvectors in order to determine its most likely value. For instance, consider a parameter, a, that scales C^S, so that C^S = a D^S. The matrix to invert is

    K = a D^S + C^N.   (4.12)

By repeating the derivation in Eq.
4.10 with this parameterization, we find that we need only compute the eigenvectors of M = (C^N)^{-1/2} D^S (C^N)^{-1/2} and scale the \lambda_i to a\lambda_i. For each choice of a, we are required only to perform the one sum and one product described in Eq. 4.11.

4.3.1 The Noise Covariance Matrix

In the general case, computing (C^N)^{-1/2} is another O(N^3) process, but unlike K, it needs to be computed only once. We can make this calculation trivial if we assume that the noise is uncorrelated from measurement to measurement. In this case, C^N_{ij} = \sigma_i^2 \delta_{ij}, where \sigma_i is the measured error in the i-th data point. Besides saving an extra matrix inversion step, it removes the requirement of two matrix multiplications to compute M. With this simplified expression for the noise covariance matrix, we get

    d'_i = R_{ij} \frac{d_j}{\sigma_j},   (4.13)

where R_{ij} is the rotation matrix that diagonalizes

    M_{ij} = \frac{C^S_{ij}}{\sigma_i \sigma_j}.   (4.14)

d' now has the units of signal-to-noise, which is why the Karhunen-Loève (K-L) eigenmode analysis is commonly referred to as the "signal-to-noise eigenmode decomposition". We will consider a strictly diagonal C^N for the rest of this thesis. A more detailed calculation would include galactic foregrounds and atmospheric signals that would add correlated noise into the matrix. Unless these elements are treated properly, we can only consider the results of the likelihood fits to be upper limits on BAM's ability to measure the angular power spectrum.

4.4 Parameterization of the Estimator

We would like a parameterization that makes the inversion easy while still allowing a good description of the power spectrum. We return to the 2-parameter description derived in section 3.3, which describes the power spectrum in terms of a slope, m, and normalization, Q. We use the flat power spectrum described in Eq. 2.6 to deduce the value of C_{\ell_0} required in Eq.
3.11, resulting in

    C_\ell = \frac{24\pi}{5} \frac{Q^2}{\ell(\ell+1)} \left[1 + m \ln\left(\frac{\ell}{\ell_0}\right)\right].   (4.15)

The signal covariance matrix is then expressed in terms of its amplitude and derivative as

    C^S_{ij} = Q^2 \left(C^{(0)}_{ij} + m\, C^{(1)}_{ij}\right),   (4.16)

where, using Eq. 4.15,

    C^{(0)}_{ij} = \sum_\ell \frac{2\ell+1}{4\pi} \frac{24\pi}{5\,\ell(\ell+1)} W_\ell(ij),   (4.17)

    C^{(1)}_{ij} = \sum_\ell \frac{2\ell+1}{4\pi} \frac{24\pi}{5\,\ell(\ell+1)} \ln\left(\frac{\ell}{\ell_0}\right) W_\ell(ij).   (4.18)

Although Q can be pulled outside the expression, m cannot, meaning a full likelihood analysis on it in principle requires recomputing the eigenmodes for each value of slope to be tested. In the next section, we will relax even this requirement by discussing a useful assumption. This parameterization does allow some computational savings: C^{(0)} and C^{(1)} need only be computed once.

4.4.1 Efficient Application of the K-L Transformation

By design, the K-L technique transforms the data into a S/N basis. This fact has been used by others to reduce the size of the matrices by factors up to 80%. The idea is that modes with S/N < 1 contribute very little to the results, and can be safely thrown out. The reader may ask why truncating modes is necessary at all, because we have to compute the whole eigenmode spectrum anyway before deciding which modes to dismiss. Since in the new basis the computation time is small, why bother truncating modes at all? The hypothesis is that the eigenmodes are insensitive to the choice of power spectra used in calculating the correlation matrix. The procedure is to choose a "fiducial" theory, calculate the eigenmodes for it, and then apply those same eigenmodes to the other correlation matrices being tested. In the new basis, the non-fiducial correlation matrices will not be diagonal, but by truncating the noisy modes, the matrix inversion becomes feasible. The choice of a fiducial is somewhat arbitrary, but can be iteratively improved to fit the data. A flat (m = 0) spectrum is used as the fiducial for the likelihood fits performed in the next chapter.

4.5 The Prior

It will turn out that with "good data", the choice of prior won't affect the fit appreciably.
This makes quite a bit of sense, because "good data" means that the error bar on the fit is fairly small. Thus, the range of a where L(a) peaks is narrow enough that any realistic prior function is essentially a constant over that range. With "bad data", this region is much wider, and the prior cannot be considered constant [55, 56].

There is an important application for a prior in this work, however, that requires discussion. Our parameterization of the power spectrum results in the non-physical condition of negative C_\ell at extreme values of m. For \ell_0 = 73, the bounds on slope are -0.58 < m < 0.55, using 12 < \ell < 400 as the limits of the window function bandpass. For observing strategies that poorly constrain the slope, there is a statistical possibility that a slope outside this range will maximize the likelihood function. Of course, the true value of slope could be outside the range to begin with. To compensate, one might construct a prior that discourages such values of slope, but that would require a priori knowledge that the true value of the slope is within the limits. It is also possible to modify the power spectrum so that C_\ell = 0 for the regions that fall outside the range, but that would require recomputing the correlation matrix at each value of slope that causes the non-physical C_\ell. In addition, this would complicate the interpretation of the estimate of Q, because we would be neglecting power that falls within the bandpass. However, if the true slope is within this range, and the observing strategy is able to reasonably constrain it, then these issues are not a problem.

4.6 The Frequentist Approach

We have concentrated on finding the most likely value of a parameter based on a given dataset and a theory. The full likelihood function can be used to set confidence limits on the estimate, and when reported, it will be said that "the true value falls within these bounds with the set confidence".
It is a subtle difference, but it is also of interest to know what the probability of obtaining the most likely value is in the first place. For example, if we conducted the same experiment many times and plotted the most likely value of the parameter, we could get another estimate of the "error bar". The power of this Monte-Carlo concept, however, is to provide a way of checking for hidden biases or other anomalies in the results. In the present context, we can create several realizations of the microwave sky with known parameters and record the most likely (Q, m) derived from a dataset obtained from the map. By accumulating enough statistics, it is possible to study in detail the effect the observing strategy has on the estimate of the model parameters.

Chapter 5

Scanning Strategy 2: Specifics

We have now developed all the tools necessary to assess an observing strategy's ability to measure the angular power spectrum of the CMB. This chapter begins by applying these techniques to examine the smooth and chopped scan observing strategies, and finishes with an objective for the next flight of BAM.

5.1 The Test Cases

Although it is possible for BAM to take data for about 12 hours during a flight, it is more probable that considerably less will be available due to time spent reaching float altitude, performing calibrations, and addressing unforeseen problems which may arise in testing a relatively new experiment. These simulations will assume a six hour uninterrupted data-taking period, which should be sufficient to select one of the five observing strategies described in Table 5.1. For each strategy, a full two dimensional likelihood analysis was performed on 500 realizations of a CMB sky created using a flat (m = 0) power spectrum with Q = 15 μK. The parameter space tested was 1 < Q < 50 μK and -1 < m < 1.
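The per-realization likelihood evaluation can be sketched end to end with the S/N eigenmode machinery of section 4.3. Everything below (the correlation shape, noise level, and number of points) is a toy stand-in for the real BAM matrices, but the structure mirrors Eqs. 4.11 to 4.14: whiten once, rotate once, then scan the normalization with only sums over modes:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins for one simulated scan: signal correlations for a fiducial
# normalization Q_fid = 15 uK and diagonal (uncorrelated) per-point noise.
N = 120
Q_fid = 15.0
sep = np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
C_signal = Q_fid**2 * np.exp(-(sep / 8.0) ** 2)
sigma = np.full(N, 10.0)                      # per-point noise, uK

# One realization: correlated sky signal plus noise.
d = rng.multivariate_normal(np.zeros(N), C_signal) + rng.normal(0.0, sigma)

# S/N eigenmode decomposition (Eqs. 4.13-4.14): whiten by the noise and
# rotate into the basis that diagonalizes M.  Done once, for the fiducial.
M = C_signal / np.outer(sigma, sigma)
lam, R = np.linalg.eigh(M)
d_rot = R.T @ (d / sigma)

def loglike(Q):
    # Q^2 scales the eigenvalues linearly (Eq. 4.12), so each grid point
    # costs only sums over modes (Eq. 4.11), not a matrix inversion.
    lam_Q = lam * (Q / Q_fid) ** 2 + 1.0
    return -0.5 * np.sum(np.log(lam_Q) + d_rot**2 / lam_Q)

Qs = np.linspace(1.0, 50.0, 99)
best_Q = Qs[np.argmax([loglike(q) for q in Qs])]
print(best_Q)    # most likely Q for this realization
```

Repeating the last three lines over many realizations of d is the Monte-Carlo loop used in the fits that follow.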
5.1.1 Constructing the Correlation Matrix and Finding the Eigenvalues

Because the constraining power of the observing strategies is not known before the analysis is conducted, we have to address the problem of the non-physical C_\ell for the extreme values of slope. The approach adopted here is to compute a new correlation matrix for each slope to be studied, truncating at the \ell mode that causes the power spectrum to go non-physical within the BAM bandpass. If the output of the likelihood fits is reasonable enough, then we can revert to the computationally less intensive procedure of computing C^{(0)} and C^{(1)}.

Scan Type                            Size   Number of points   S/N   % of sky covered
chopped along ring at \delta = 70°          26                 8.3   0.024
chopped along azimuth                20°    650                1.7   0.101
smooth azimuthal scan                10°    291                2.6   0.025
smooth azimuthal scan                20°    606                1.7   0.101
smooth azimuthal scan                30°    1000               1.3   0.241

Table 5.1: The five observing strategies modeled in this thesis. The number of points is the total number of binned data used to construct the correlation matrix, and not the number of beam sized pixels within the annulus swept out by the scan (except for the chop along \delta = 70°).

The first step in computing the correlation matrix is to sort out where on the sky samples were taken, and how they are grouped together. For the constant velocity scans used here, the azimuthal direction is broken up into beam sized bins into which data from several successive scans were placed. The number of co-added scans used depends on the time it takes for the earth to turn one beam width at the scan edge. For 30°, 20°, and 10° this is 3, 5, and 12 scans respectively. Both the positive and negative beam are at constant elevation angle, but because BAM uses the positive horn as the reference, the negative beam traces out an arc due to sky rotation. For simplicity, the two chopped scans being tested neglect the effect of sky rotation on the negative beam.
Next, a flat power spectrum is used to calculate the angular correlation function (Eq. 3.1). This is a fairly smooth function, so it is sufficient to evaluate it at a selected number of points and then fit a spline [57] to get the rest. Finally, Eq. 3.5 is applied to construct the matrix.

Constructing the correlation matrices is relatively fast, but computing the eigenvectors is not. To ensure that truncating low S/N modes does not introduce any bias in the parameter estimates, the full set of eigenvectors is computed for all correlation matrices to be tested. For each model, 11 correlation matrices were constructed with the slope ranging from -1 to 1 in steps of 0.2. For the 30° scan, which had the largest matrices, the computation time was just over 2 days.

5.1.2 A Single Realization

It took only 3 minutes each to calculate the five likelihood fits plotted in Figure 5.1. The confidence limits are defined slightly differently for 2-D fits. The joint probability of the true value of (Q, m) falling within the 1σ contour is only 39%, not 68%; the latter corresponds to the 1.5σ contour in these 2-D fits. However, the projection of the edges of the 1σ ellipse [58] onto each axis gives the marginalized error bar for that parameter. For this particular realization, each of the strategies recovers the input parameters within 1.5σ. The 1.2° chop along \delta = 70° clearly has the broadest error bars, but its maximum likelihood peak coincides with the \chi^2 = 1 contour. The other strategies have considerably lower S/N ratios, and this tends to push the \chi^2 away from one. All of them, however, are of order unity, which is evident because the \chi^2 = 2, 3, 4, ... contours do not even appear on the figure. The smooth scans verify the expectation that with increased sky coverage, the constraining power becomes greater. It is interesting to note that the 1.2° chop along a 20° azimuthal scan does slightly better than a smooth 20° azimuthal scan.
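The 39% and 1.5σ figures quoted above follow from the probability enclosed by the k-sigma ellipse of a 2-D Gaussian likelihood; a quick check:

```python
import math

# Probability enclosed by the k-sigma ellipse of a 2-D Gaussian.
def enclosed(k):
    return 1.0 - math.exp(-k * k / 2.0)

print(round(enclosed(1.0), 3))    # 0.393: the 39% joint probability

# The contour enclosing 68% joint probability:
k68 = math.sqrt(-2.0 * math.log(1.0 - 0.68))
print(round(k68, 2))              # 1.51: roughly the 1.5 sigma contour
```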
5.1.3 Many Realizations

Figure 5.1: Likelihood contours for five different scanning strategies. Contours are plotted every 0.5σ from 0.5 to 2.0. The cross indicates the input parameters (Q = 15 μK, m = 0), and the dotted lines are the \chi^2 contours.

Figure 5.2: Monte-Carlo results for the five strategies (panels: chop at \delta = 70°, 10° smooth scan, chop along 20° scan, 20° smooth scan, 30° smooth scan). The figures shown consist of points after a moderate cut (see text) was applied. The values of the input parameters are given by the horizontal and vertical dashed lines. The marginal distributions are shown as histograms on the relevant axes.

The peak of the likelihood function is found for 500 realizations of a CMB sky, and plotted in Figure 5.2. The fitting procedure also provides the 1σ limits on the best fit values. It is expected that the estimate of the error on a parameter should be the same as the variance of the distribution of the most likely value of that parameter. Table 5.2 verifies that this is indeed the case.
Scan Type                     Q              m                \sigma_Q       \sigma_m
chopped along \delta = 70°    14.15 ± 2.35   -0.062 ± 0.459   2.12 ± 0.32    0.316 ± 0.168
chopped along azimuth         14.60 ± 1.60   -0.142 ± 0.268   1.28 ± 0.15    0.198 ± 0.031
10° smooth scan               14.77 ± 2.16   -0.020 ± 0.383   1.67 ± 0.25    0.234 ± 0.096
20° smooth scan               14.23 ± 1.67    0.008 ± 0.346   1.43 ± 0.17    0.231 ± 0.063
30° smooth scan               14.27 ± 1.20    0.095 ± 0.240   1.19 ± 0.08    0.203 ± 0.046

Table 5.2: Statistics describing the likelihood fits of the 5 strategies under test.

The Monte-Carlos indicate a slight bias in the fit for Q at about the 0.5 μK level. The difference is well within the 1σ confidence range, but all of the strategies tested underestimate Q. The contour plots show a hint of this also, because there appears to be a correlation between the most likely value of Q and of m. This is caused by an incorrect choice of \ell_0, which is unavoidable for two reasons: truncating the range of \ell used in the maps and in computing the correlation matrices alters the centroid of the window function, which is itself also altered by the effects of binning. Unfortunately, the only way to deal with this is to recompute the correlation matrices and eigenvalues using different centroids until the bias disappears. This trial and error method, albeit computationally intensive, converges after a few guesses.

There is bias in the slope, evident from the "banding" of points at discrete values of m. This is clearly a result of choosing a likelihood sample grid that is too coarse. After the fits were performed for the 11 values of slope, a spline was fit in order to fill out the contours. However, with only 10 degrees of freedom, the algorithm failed to model the 2-D curve very effectively. This result only stresses the need to find efficient matrix inversion algorithms, so that more correlation matrices can be tested. On top of all these effects, we must take into account the ability of the fitting procedure
to obtain the most likely point and 1σ confidence interval. There are some "events" for which the Gaussian fitting procedure fails to converge on a good estimate of the peak and variance of the likelihood function. This is caused either by the contours being too close to the m = -1, 1 limits, in which case a Gaussian is not a good description for the fit, or in the rarer case of the likelihood fit being singular within the parameter space. Events with \sigma_m < 0.08 were removed. This was an empirical choice based on an examination of the spread of the error estimates. For the 30° scan, with the best constraining power on m, the mean and variance of \sigma_m is 0.20 ± 0.05, and so the cut removes events that deviate by more than 2σ. The cut is actually very minimal; at most, 5% of the events were removed, except for the 10° scan which, as expected, had difficulty constraining the slope. Approximately 17% of those events were removed. Even with the modest cut applied, the bias still remains. This is not entirely unexpected, as no cut will remove the correlation between Q and m, and hence the bias in Q. The slope bias would be reduced by sampling a finer grid. Although \sigma_m is relatively constant, there is a trend in the error estimate of Q, as evident in the left center panel of Figure 5.3. The relationship is linear, with the fractional uncertainty in Q being roughly 10%.

The Monte-Carlo results tell a story similar to the contour plots: the larger azimuthal scans do a better job constraining the power spectrum. Although there are small biases and gridding effects to contend with, the results are certainly sufficient at this stage to select an observing strategy for the next flight of BAM.

5.2 The 20° Azimuthal Scan

Although the 30° sweep is clearly the best of the observing strategies we have tested, there are many experimentally motivated reasons to reject it.
These have already been discussed in the context of S/N ratio per frequency channel, scan velocity, and the star tracker field of view. The ability of BAM to constrain the slope and amplitude of the power spectrum does not take a serious hit by decreasing the size of the scan. The 20° chop does a little better than the smooth scan, but the BAM electronics have a much easier time doing a constant velocity scan than a chop along azimuth. Also, by filling in the entire scan area, we can synthesize more window functions and even contemplate making a map. The rest of the thesis will concentrate on this scan.

Figure 5.3: Monte-Carlo statistics for the 20° smooth scan. The top left plot shows the raw data plotted as a histogram, and the one beside it is the same thing after the \sigma_m < 0.08 cut has been applied. The other plots contain all 500 events, and show the correlation of error estimates with the most likely parameter. When the Gaussian fit failed to converge, \sigma_m was returned as a negative number. This happened for only about 2% of the events, as evident in the center right panel.

5.2.1 Data Analysis

Having chosen a strategy, we are now able to examine whatever might affect the instrument's response. Because this will involve creating a new set of correlation matrices for each new effect to test, it is prudent to revisit the time saving data reduction concepts derived earlier. The eigenmodes for the 20° scan using an m = 0 fiducial power spectrum are shown in Figure 5.4. Of the 606 modes available, only the first 60 have a S/N ratio above one. In practice we should use more modes to be sure that useful information is not being thrown out, but it seems clear that we can substantially reduce the size of the matrices. Now we must check to see that these modes can be applied to the m ≠ 0 correlation matrices.
Figure 5.4: The S/N eigenvalues for the 20° scan. The modes were sorted from highest to lowest eigenvalue after they were calculated. The amplitude of the eigenvalues depends on the normalization, Q, which was taken as 15 μK.

Figure 5.5: The effect of truncating eigenmodes on the likelihood fits. The contours are comparable until less than about 60 modes are used.

Figure 5.5 shows that no appreciable change in the likelihood fits occurs until more than about 80% of the modes are removed. It is interesting, but not entirely unexpected, to note that the \chi^2 = 1 contour moves towards the most likely value as more of the noisy modes are removed.

Although we can get away with using a handful of modes, it is still necessary to compute the full N × N correlation matrices. We have already noted a need to increase the number of m values in the likelihood sample grid, which could easily be done by using the C^{(0)} and C^{(1)} matrices described earlier, under the assumption that areas of the power spectrum that dip below zero do not significantly alter the results. Unfortunately, this is not a good assumption, and the non-physical area must be handled properly. Figure 5.6 contains the same series of likelihood contours as in the preceding figure, but using the correlation matrices constructed with this simplified parameterization. Although the fits are generally the same, it is clear from the \chi^2 contours that the high slope values are seriously compromised. In the region that BAM is sensitive to, realistic power spectra, such as CDM, do have a positive slope, and therefore we are required to compute the correlation matrices "the hard way."

Figure 5.6: The effect of truncating eigenmodes on the likelihood fits, using the simplified parameterization of the correlation matrix that allows C_\ell < 0.
The discontinuous behaviour of the \chi^2 contours for m > 0.5 indicates that the simplification cannot be used without compromising the likelihood estimates in that range.

5.2.2 Measuring a Realistic Sky

We now change our input model to something that describes the universe more accurately than a flat power spectrum. We use a standard CDM model, with \Omega_0 = 1, \Omega_B = 0.05, h = 0.5, COBE normalized with Q = 18.2 μK. In order to determine what the results of the (Q, m) fit should be, we perform what basically amounts to a linear regression. We define the statistic

    D_\ell = C^{CDM}_\ell - C_\ell(Q, m),   (5.1)

and solve for the (Q, m) that minimizes

    \chi^2 = \sum_{\ell = \ell_{min}}^{\ell_{max}} \left[(2\ell + 1) W_\ell D_\ell\right]^2,   (5.2)

subject to the constraint

    \sum_{\ell = \ell_{min}}^{\ell_{max}} (2\ell + 1) W_\ell C^{CDM}_\ell = \sum_{\ell = \ell_{min}}^{\ell_{max}} (2\ell + 1) W_\ell C_\ell(Q, m).   (5.3)

The constraint ensures that both the CDM and parameterized power spectra have the same RMS power. There is no closed form solution, and the \chi^2 must be evaluated over the parameter space being tested. As with the correlation matrices, C_\ell was forced to be greater than 0, and so the limits on the above equation vary with slope. For the CDM spectrum described above, the best fit parameter values are (Q, m) = (28.8 μK, 0.44).

A Monte-Carlo with 300 events was performed, this time with correlation matrices spaced every 0.1 for -0.4 < m < 1.5 in order to decrease gridding effects. Figure 5.7 summarizes the effort. The mean (Q, m) are (29.4 ± 2.5 μK, 0.53 ± 0.24). Although Q seems to be well constrained, the Monte-Carlo result for the slope is biased high by roughly 20%.

The window function used to compute the theoretical best fit was created using the binned data for the first data point. The window function changes slightly from point to point, because each is constructed using several binned beams from different positions. By using the simple 2-beam window function described earlier, the best fit to Eq.
5.2 is (Q, m) = (29.1 μK, 0.47). The results suggest that only a Monte-Carlo analysis of the experiment can determine the expected parameters. In any event, the 2σ lower limit on the slope is just above m = 0, meaning that BAM should be able to detect a rise in the power spectrum with 95% confidence.

Figure 5.7: Measuring a realistic sky. The left panel is the result of a 300-trial Monte-Carlo. The right panel shows the CDM spectrum used, with the theoretical best-fit model (solid) and Monte-Carlo result (dashed).

5.2.3 Uncertain Beam Parameters

All along we have been assuming that the BAM beams are described by a Gaussian profile with a FWHM of 0.70°. The beam shape is, in fact, not known to better than 10%, and it is therefore of interest to see how this ambiguity affects the likelihood estimates. We restrict our likelihood fits to the m = 0 model, which gives an estimate of what is often referred to as Q_flat. The model assumes that the beam width is 0.70°, and 100 realizations of the CMB are created using the CDM model described earlier, convolved with a beam of size 0.65°, 0.70°, or 0.75°. In every trial, the Q_flat estimate is lower (higher) for the 0.65° (0.75°) maps by roughly 2 μK.

Figure 5.8: How uncertainty in the beam size affects the likelihood fits. The left panel shows the likelihood fits to one realization of a CDM sky. The solid line indicates the best fit when the input model and assumed model use the same beam size (0.70°). The dashed (dotted) line is the best fit when the input is smeared with a 0.65° (0.75°) beam. The right panel is a subset of the Monte-Carlo results, showing how Q_flat is always biased when the incorrect beam size is used.
This is not entirely unexpected, because smaller beams allow more small-scale power into the measurements, lowering the inferred values of Q_flat. The same reasoning explains the overestimate of Q_flat with larger beams. The important thing to note is that a 7% uncertainty in the beam shape results in a 1σ bias in the estimate of Q. The BAM-95 data were re-analyzed to study this effect, and it was found that there is an even greater dependence on the beam throw: a 10% uncertainty in beam throw corresponds to a 5 μK error in Q_flat.

5.3 The Next Flight of BAM

The next scheduled flight for BAM is spring 1998, by which time the new pointing electronics should be integrated and improved bolometers installed. Based on the Monte-Carlo results, a six-hour data run, and the assumption that standard CDM is a good description of the universe we live in, BAM can expect to constrain Q to within 2 μK and to show with high confidence that the power spectrum is rising towards a Doppler peak. We have found a need to study BAM's beam parameters in detail; this can be accomplished while the experiment is in the air by performing a raster scan over Jupiter. Sacrificing observing time on the next flight to measure the beam in this manner is critical to understanding the data from any future flight.

Sky coverage is essential in constraining the slope of the angular power spectrum. Future modifications of the instrument should consider ways to allow BAM to scan more of the sky with higher signal-to-noise. The easiest way to accommodate this is to stay in the air for longer than 12 hours, and in fact a three-day flight from high northern latitudes is currently being investigated.

The tools developed to analyze the data sampled from simulated skies can be used with very little modification on the real dataset once it is obtained. Indeed, the entire Monte-Carlo procedure can be repeated once the positional data from the flight is available.
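The constrained fit of Eqs. 5.1–5.3 (Section 5.2.2) is straightforward to prototype. The sketch below assumes, purely for illustration, that the parameterized spectrum factors as C_ℓ(Q, m) = Q² g_ℓ(m); the "CDM" spectrum, window function W_ℓ, and shape g_ℓ are toy stand-ins, not the thesis inputs. Under that assumption the equal-RMS constraint fixes Q for each trial slope, leaving a one-dimensional grid search in m.

```python
import numpy as np

# Toy prototype of the constrained (Q, m) fit of Eqs. 5.1-5.3.
# ASSUMPTIONS: the parameterized spectrum is taken to factor as
# C_l(Q, m) = Q**2 * g_l(m); the "CDM" spectrum, window W_l, and
# shape g_l below are illustrative, not the thesis's actual inputs.
ells = np.arange(2, 300)
weights = 2.0 * ells + 1.0
W = np.exp(-((ells / 150.0) ** 2))                           # hypothetical window W_l
C_cdm = 1e-2 * (ells / 10.0) ** 0.4 / (ells * (ells + 1.0))  # toy "CDM" C_l

def g(m):
    """Hypothetical spectral shape: a flat spectrum tilted by (l/10)^m."""
    return (ells / 10.0) ** m / (ells * (ells + 1.0))

best = None
for m in np.arange(-0.4, 1.51, 0.01):
    gl = g(m)
    # Equal-RMS constraint (Eq. 5.3) fixes Q^2 for each trial slope.
    Q2 = np.sum(weights * W * C_cdm) / np.sum(weights * W * gl)
    D = C_cdm - Q2 * gl                        # Eq. 5.1
    chi2 = np.sum((weights * W * D) ** 2)      # Eq. 5.2
    if best is None or chi2 < best[0]:
        best = (chi2, m, np.sqrt(Q2))

chi2_min, m_fit, Q_fit = best
print(m_fit, Q_fit)
```

Because the toy "CDM" spectrum here is itself a pure power law of slope 0.4, the search recovers m ≈ 0.4 essentially exactly; with a real CDM spectrum the minimum is only approximate, which is why the χ² must be evaluated over the whole parameter grid.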
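The beam-size sensitivity found in Section 5.2.3 enters through the Gaussian beam window function. As a rough, self-contained illustration (flat spectrum, arbitrary toy normalization, not BAM's likelihood machinery), the sketch below shows how much a ±0.05° change about the 0.70° FWHM alters the beam-smeared power an experiment sees; the sign and size of the resulting shift in Q_flat then follow from the full likelihood fits described above.

```python
import numpy as np

ells = np.arange(2, 500)

def beam_window(fwhm_deg):
    # Gaussian beam transfer function B_l = exp(-l(l+1) sigma_b^2 / 2),
    # with sigma_b = FWHM / sqrt(8 ln 2), converted to radians.
    sigma_b = np.radians(fwhm_deg) / np.sqrt(8.0 * np.log(2.0))
    return np.exp(-ells * (ells + 1.0) * sigma_b**2 / 2.0)

# Flat spectrum C_l proportional to 1/(l(l+1)), toy normalization.
C_flat = 1.0 / (ells * (ells + 1.0))

def band_power(fwhm_deg):
    """Total beam-smeared power seen through a beam of the given FWHM."""
    return np.sum((2.0 * ells + 1.0) * C_flat * beam_window(fwhm_deg) ** 2)

# A smaller beam smears less, so it retains more small-scale power.
for fwhm in (0.65, 0.70, 0.75):
    print(fwhm, band_power(fwhm) / band_power(0.70))
```

The few-percent change in smeared power for a 7% change in FWHM is what propagates into the ~1σ bias in Q quoted above.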
The simulations described in this thesis prove to be a powerful way to assess BAM's ability to study the CMB, and it is anticipated that the BAM team will continue to revisit these procedures and examine the impact of new observing strategy ideas as they arise.