
Systems and control software for the Atacama Cosmology Telescope

E. R. Switzer a, C. Allen b, M. Amiri c, J. W. Appel a, E. S. Battistelli m,c, B. Burger c, J. A. Chervenak b, A. J. Dahlen a, S. Das d, M. J. Devlin e, S. R. Dicker e, W. B. Doriese f, R. Dünner l,a, T. Essinger-Hileman a, X. Gao g, M. Halpern c, M. Hasselfield c, G. C. Hilton f, A. D. Hincks a, K. D. Irwin f, S. Knotek c, R. P. Fisher a, J. W. Fowler a, N. Jarosik a, M. Kaul e, J. Klein e, J. M. Lau h, M. Limon i, R. H. Lupton d, T. A. Marriage d, K. L. Martocci n, S. H. Moseley b, C. B. Netterfield j, M. D. Niemack a, M. R. Nolta k, L. Page a, L. P. Parker a, B. A. Reid a, C. D. Reintsema f, A. J. Sederberg a, J. L. Sievers k, D. N. Spergel d, S. T. Staggs a, O. R. Stryzak a, D. S. Swetz e, R. J. Thornton e, E. J. Wollack b, Y. Zhao a

aDept. of Physics, Jadwin Hall, Princeton University, Princeton, NJ 08544-0708, USA
bCode 665, NASA/Goddard Space Flight Center, Greenbelt, MD 20771, USA
cDept. of Physics and Astronomy, University of British Columbia, Vancouver, BC V6T 1Z1, Canada
dDept. of Astrophysical Sciences, Peyton Hall, Princeton University, Princeton, NJ 08544, USA
eDept. of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104, USA
fNational Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA
gUK Astronomy Technology Centre, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ, UK
hStanford University Physics Dept., 382 Via Pueblo Mall, Stanford, CA 94305, USA
iColumbia Astrophysics Laboratory, 550 W. 120th St., Mail Code 5247, New York, NY 10027, USA
jDept. of Physics, University of Toronto, 60 St. George Street, Toronto, ON M5S 1A7, Canada
kCanadian Institute for Theoretical Astrophysics, University of Toronto, 60 St. George St., Toronto, ON M5S 3H8, Canada
lDept. de Astronomía y Astrofísica, Facultad de Física, Pontificia Universidad Católica de Chile, Casilla 306, Santiago 22, Chile
mDept. of Physics, University of Rome “La Sapienza”, Piazzale Aldo Moro 5, I-00185 Rome, Italy
nDept. of Physics, City University of New York, 365 5th Avenue, Suite 6412, New York, NY 10016, USA

ABSTRACT

The Atacama Cosmology Telescope (ACT) is designed to measure temperature anisotropies of the cosmic microwave background (CMB) at arcminute resolution. It is the first CMB experiment to employ a 32×32 close-packed array of free-space-coupled transition-edge superconducting bolometers. We describe the organization of the telescope systems and software for autonomous, scheduled operations. When paired with real-time data streaming and display, we are able to operate the telescope at the remote site in the Chilean Altiplano via the Internet from North America. The telescope had a data rate of 70 GB/day in the 2007 season, and the 2008 upgrade to three arrays will bring this to 210 GB/day.

Advanced Software and Control for Astronomy II, edited by Alan Bridger, Nicole M. Radziwill, Proc. of SPIE Vol. 7019, 70192L (2008) · 0277-786X/08/$18 · doi: 10.1117/12.790006

1. INTRODUCTION

The cosmic microwave background (CMB) is a clean source of information about the Universe at large, and can be analyzed to produce sharp constraints on cosmological models. The best limits on the CMB temperature anisotropy over the sky to ∼ 0.4◦ are provided by the Wilkinson Microwave Anisotropy Probe (WMAP).1 In conjunction with other surveys, these data tightly constrain cosmological model parameters.2 Several recent CMB experimental efforts aim to measure the intensity anisotropy3–5 on smaller scales than those measured by WMAP.
Small-scale intensity anisotropy measurements are well-motivated because they contain further information from the primary CMB, from galaxy clusters through the Sunyaev-Zel’dovich (SZ) CMB spectral distortion,6 and possibly from other intervening matter through weak lensing of the CMB.7 Measurements in this regime are expected to improve constraints on the primordial perturbation power spectrum, the dark energy equation of state, neutrino mass, large scale structure, and homogeneous cosmological parameters such as the baryon density.8 The Atacama Cosmology Telescope (ACT) is a 6 m telescope designed to map the intensity (and ultimately the polarization) anisotropy of millimetric radiation with arcminute resolution over approximately 1500 square degrees. The primary ACT camera, the millimetric bolometer array camera (MBAC), comprises three arrays that span the SZ distortion’s decrement, null, and increment, centered at 145 GHz, 220 GHz, and 280 GHz, respectively. Each band’s detectors are arrayed into a 32 × 32 square grid of free-space transition edge superconducting (TES) detectors, cooled to 0.3 K.∗ Measurements across these three bands permit a discrimination between foreground emission sources, galaxy clusters (through the SZ effect), and the primary CMB.9 To minimize atmospheric contamination, ACT operates from Cerro Toco at 5190 m in the dry Chilean Altiplano. In these proceedings, we describe the technical challenges associated with operating in this remote location, focusing in particular on the software and electronic readout systems. To place the sky signal frequency above the flicker noise and below the inherent detector time constants,† we scan the telescope in azimuth at a constant angular velocity of 1◦ per second over 10◦ peak-to-peak with 400 ms turnarounds. Observations take place at 50.5◦ elevation, so this becomes ∼ 0.8◦ per second on the sky.
The overall scan strategy is to map two to three regions by scanning at fixed elevation and central azimuth as those regions rise and set. The diffraction-limited scale on the sky for the 280 GHz array (where the condition on the sampling rate is most stringent) has Dλ−1 ∼ 5500 cycles/radian, with a temporal frequency ωDλ−1 ∼ 70 Hz. We oversample the beam by a factor of ∼ 2 (and higher in the lower two bands) by filtering the array data to have an f3dB = 140 Hz, which is conservatively sampled and stored at 399 Hz. This sets the dominant data rate of the experiment. The second relevant fact for hardware and software is that the telescope motion encoders, the science camera acquisition, and the cryogenic housekeeping must be synchronized. Because of the remote location, it is also essential that the telescope systems be robust, largely autonomous, and controllable from North America. ACT observing seasons are naturally separated by the seasons at the site, where moisture from Bolivia makes measurement challenging from late December to May. Science data from the first season of ACT were acquired by the 145 GHz array from November 14 to December 17, 2007, in dedicated observation. This band was chosen because galaxy clusters are most identifiable in the SZ distortion decrement, the atmospheric opacity is lower, and the atmospheric loading and telescope alignment constraints are more relaxed than for the other two bands. This is one of a series of papers10–14 in these proceedings which describe technical aspects of ACT: the telescope, the cryogenic camera, the detectors, optical characterization, and the coordination of ACT systems. This paper describes the ACT data acquisition and telescope systems and their coordination in several parts. Sec. 2 describes the overall system layout; Sec. 3 describes the telescope systems for science data acquisition, housekeeping, and telescope motion; Sec. 4 describes data products and data component merging; Sec. 5 describes coordination of the various systems; Sec. 6 describes logistics of operating the telescope in Chile. We conclude in Sec. 7.

∗These bolometers are fabricated at NASA Goddard and the band-defining filter stack is provided by Cardiff University.
†For the 145 GHz array, the typical detector response function is described by f3dB ≈ 90 Hz, with significant scatter; the other two arrays are designed to have typical f3dB higher in the ratio of their beam sizes, but have not yet been characterized in the field.

2. OVERVIEW

ACT hardware is broadly composed of the telescope, its camera and electronic systems, site infrastructure, a ground station, and a data analysis center. Here, we give an overview of these units and their coordination, and of how the data they produce are managed. We limit the description of the hardware to what is relevant for this task, and the reader is referred to previous work15–19 and other ACT articles in these proceedings10–14 for further background. The science camera hardware, detectors, and electronics were built by a collaboration of teams at Cardiff University, NASA, NIST, Princeton, the University of British Columbia, the University of Pennsylvania, and the University of Toronto. The telescope was engineered and manufactured by Empire Dynamic Systems Ltd. (formerly AMEC Dynamic Structures Ltd.), and KUKA Robotics10 designed the control systems. It is an off-axis Gregorian design with warm (ambient site temperature) 6 m primary and 2 m secondary mirrors, a large (1◦) focal plane,17 and clearance for a ∼ 1 m3 cryogenic camera.11 The telescope has a rigid, all-aluminum structure, and has both an inner and an outer ground screen to minimize deformation and beam spill during the scan. (Throughout, we will take “telescope” to include the structure and motion systems. Sec. 3.4 describes the telescope motion in detail.)
At the ACT site, there are three permanent shipping containers which serve as a workshop, an “equipment” container (a climate-controlled office plus an area for electronics and compressors), and a storage space. The telescope has a large receiver cabin between the secondary and primary mirrors which houses MBAC and the housekeeping acquisition electronics. MBAC is mounted close to the telescope’s rotation axis (to avoid jarring) and the cabin’s temperature is controlled to improve the stability of the readout systems. It is also large enough for several operators to stand in while installing the camera and electronics, and it provides access to the mirrors for alignment purposes. We will refer to those components located on Cerro Toco as being at the “site”, and those located in San Pedro de Atacama as being in the “ground station.” Fig. 1 gives a broad overview of the acquisition system. At the highest level, operations are coordinated by a scheduler and the data are synchronized by a system-wide serial number “heartbeat.” This ties the encoders, cryogenic housekeeping, and science data to 5 µs precision (after all constant delays have been accounted for). The synchronization generator and software as well as absolute timing are described in Sec. 3.1. All operations of the telescope are coordinated through a common interface server (Sec. 5). This presents a common interface to an observation scheduler and to telescope operators, and passes messages to the telescope systems. The housekeeping and science camera acquisition systems (see Sec. 3.2 and Sec. 3.3) are handled by separate computers (see Sec. 3.5) which write the incoming data to their respective hard drives. These data are then served over the network to real-time data component merger and visualization clients. During normal telescope operation, the housekeeping and science camera data are merged (Sec. 4) into a science data product. 
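As an illustration of how a shared serial-number heartbeat makes independently recorded streams mergeable offline, the following sketch pairs records by stamp. The function and the toy stamp values are ours, not ACT code; the real merger is described in Sec. 4.1.

```python
def align_by_stamp(cam_stamps, hk_stamps):
    """Pair camera and housekeeping records that share a serial stamp.

    Both streams carry the same monotonically increasing heartbeat
    counter, so alignment reduces to an index lookup.
    """
    hk_index = {stamp: i for i, stamp in enumerate(hk_stamps)}
    return [(i, hk_index[s]) for i, s in enumerate(cam_stamps)
            if s in hk_index]

# Toy streams: the housekeeping side missed two triggers.
cam = list(range(1000, 1008))
hk = [1000, 1001, 1003, 1006, 1007]
pairs = align_by_stamp(cam, hk)   # index pairs for the common stamps
```

Records with no partner are simply dropped from the pairing, mirroring the fact that alignment is by stamp rather than by arrival time.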
These data (as well as the raw data) are transmitted from the site to the ground station over a line-of-sight microwave link and buffered on a RAID storage array as they are copied to external hard drives which are flown back to North America. Both the housekeeping and camera data have associated entries in a file database which is periodically checked to automatically move data from the site computers to the ground station, onto transport disks, and to confirm that the data have been properly received in North America before being deleted from site computers. The database also has an associated webpage where operators can track the volume and type of data acquired in real time. The observation schedule is generated on a night-by-night basis, and coordinates all operations of ACT: the telescope mechanical warm-up, camera tuning and biasing, acquisition, calibration/beam measurements during the night, and cryogenic cycle timing. We thus have an automated system to process high-level observation requests, execute the measurement, and move data onto transport disks.

3. TELESCOPE SYSTEMS

3.1 Time and Synchronization

ACT’s timing requirements are twofold. Because the telescope is moving during data acquisition, the camera data must be synchronized with the encoders to link the horizon coordinate pointing of the telescope’s boresight with a given data sample during the scan. The ACT data must also be tied to an absolute time reference to find the celestial pointing. The beam-crossing time of a point source through scanning (∼ 10 ms) is much faster than the beam-crossing time of a point source through the Earth’s rotation (several seconds). Thus the synchronization of the encoders to camera data is stricter than that of these data to the absolute GPS time.

Figure 1. Overview of the ACT data and control systems, split into telescope, on-site (mountain), ground station, and North American systems. Solid lines show science data streams while dashed lines show commanding and file information channels. Cylinders represent data storage. We show only one of three array acquisition systems (“Array DAS”) for simplicity. The ground-station RAID array aggregates data from several machines at the site, so we do not show it as being connected to a particular node. Operators define a schedule each night and upload it to the handler at the site.

To obtain strict synchronization of the science camera data with the telescope’s encoders, a synchronization box in the telescope’s receiver cabin generates a system-wide trigger and an associated serial stamp. This is passed to the science camera acquisition electronics through a fiber optic, and to the housekeeping computer (in the equipment container) through an RS485 channel in the cable wrap. The synchronization box was designed at the University of British Columbia to be compatible with the camera acquisition electronics, and is used by several experiments.12, 15, 16 The synchronization box sets the fundamental rate for triggering observations by counting down a 25 MHz clock over 50 clock cycles per detector row, over 32 rows of detectors plus one row of dark SQUIDs for the array readout, over 38 array reads per sample trigger. This gives a final rate of ∼ 399 Hz.
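The countdown above can be checked numerically; this short sketch (constant names are ours) reproduces the quoted rates:

```python
SYNC_CLOCK_HZ = 25e6      # synchronization box clock
CYCLES_PER_ROW = 50       # countdown per detector row
ROWS = 32 + 1             # 32 detector rows + 1 dark-SQUID row
READS_PER_TRIGGER = 38    # array reads per sample trigger

row_rate = SYNC_CLOCK_HZ / CYCLES_PER_ROW       # 500 kHz row visit rate
mux_rate = row_rate / ROWS                      # full-array read rate, ~15.15 kHz
sample_rate = mux_rate / READS_PER_TRIGGER      # stored sample rate, ~398.7 Hz
```

Note that 25 MHz over 50 cycles gives the same 500 kHz row rate as the MCE's 50 MHz over 100 cycles (Sec. 3.2), which is what keeps the two countdowns commensurate.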
The synchronization box also sets the clock rate on the three MCEs, and the division to 399 Hz coincides with the multiplexing rate. The synchronization serial stamp from the RS485 channel is incorporated into the housekeeping data (including the encoders) through the following chain: 1) in the housekeeping computer, the primary housekeeping PCI card (from the Balloon-borne Large Aperture Submillimeter Telescope, BLAST20) receives a 5 MHz biphase signal which encodes serial data stamps at 399 Hz over RS485 from the synchronization box; 2) these are asserted as CPU interrupts at 399 Hz; 3) a timing driver handles these interrupts by polling the encoders at 399 Hz; 4) the housekeeping PCI card clocks down the serial stamps to 99.7 Hz, which it uses to poll the housekeeping acquisition electronics; 5) the housekeeping software then assembles the housekeeping and encoder time streams, matching serial stamps; 6) these are written to disk in a flat file format, where for every data frame there are, for example, 4 times as many 399 Hz encoder values as there are 99.7 Hz housekeeping data frames. Thus, the encoders and housekeeping channels are combined into a product that can be merged with the science camera data (see Sec. 4.1). We use a Meinberg GPS-169 PCI card to discipline the system clock of the housekeeping computer to a precision of < 1 ms with respect to GPS time, sufficient for absolute timing. The absolute GPS time is attached to the data from each encoder read request, and the GPS position is averaged down to give a geodetic latitude and longitude for astrometry. The housekeeping computer provides a stratum 1 NTP server for the other computers (data acquisition, merging) on the network to synchronize the file time stamps which are used for organizing and tracking files.

3.2 Camera Acquisition and Commanding

The ACT camera, MBAC, comprises three bands (145 GHz, 220 GHz, and 280 GHz), each with a 32 × 32 free-space-coupled square array of TES detectors.
Each array is read out by a time-domain SQUID multiplexing circuit. SQUID multiplexers have become the standard for low-impedance, low-noise cryogenic detectors,21 where wire count and loading on the 0.3 K stage are at a premium. The multiplexer is controlled by a set of Multi-Channel Electronics (MCE) which seek to null the incoming signal (error) from the final stage of the SQUID amplifier chain by applying a feedback voltage to a coil at the input of the SQUID chain, where the TES circuit is also inductively coupled. The MCE is produced by the University of British Columbia group,12 and the multiplexing circuits are produced at NIST. (The readout system is described in significantly more detail elsewhere.12, 15, 16) Each of the three bands has an independent MCE which is mounted directly onto the MBAC cryostat and accesses the cold multiplexers through 5 × 100-pin MDM connector ports (per MCE). The MCEs, their power supplies, and MBAC all reside in the telescope receiver cabin. The three MCEs are connected to three storage and control computers in the equipment container through six Tyco PRO BEAM Jr. rugged multimode fiber optic carriers (3 TX/RX pairs, 250 Mbps). These are decoded on the PC side by a PCI card from Astronomical Research Cameras, Inc.‡ The base clock rate of the MCE is 50 MHz. This is divided down to 100 clock cycles per detector row by 32 detector rows plus one row of dark SQUIDs. Thus, the native read rate of the array is 15.15 kHz as it leaves the cryostat. Nyquist inductors at 0.3 K in the LR circuit of the detector loop band-limit the response while maintaining stability of the loop16 (so the array can be multiplexed with minimal aliased noise). For science data, the only bandwidth constraint is to at least Nyquist sample the optical beam, as described in the introduction.

‡http://www.astro-cam.com/
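The null-and-feedback idea can be illustrated with a toy first-order servo. This is a sketch of the principle only (the gain and function names are ours), not the MCE's actual loop filter: the applied feedback tracks the input flux, and the feedback value itself becomes the recorded detector signal.

```python
def track(signal, gain=0.1):
    """Feedback history of a toy flux-nulling loop for a sampled input.

    Each multiplexed visit, the loop sees the residual (error) flux and
    integrates a fraction of it into the feedback applied to the input
    coil, driving the error toward zero.
    """
    fb = 0.0
    out = []
    for s in signal:
        err = s - fb          # residual flux seen by the SQUID
        fb += gain * err      # feedback applied to the input coil
        out.append(fb)        # the feedback is the recorded signal
    return out

fb = track([1.0] * 100)       # constant input flux of 1.0 (arb. units)
```

For a constant input the feedback converges geometrically, with residual (1 − gain)^N after N visits.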
Even though the Nyquist sky sampling condition is different for the three arrays, we sample each at 399 Hz. This is sufficient for the 280 GHz array, which has the most stringent requirements. To downsample the 15.15 kHz multiplexing rate to 399 Hz, the MCE applies a 4-pole Butterworth filter with a rolloff f3dB = 140 Hz to the feedback stream from each detector. This filter is efficient to implement digitally and has a flat passband. The rolloff then defines our conservative downsampling to 399 Hz, which is obtained by pulling every 38th record (at 15.15 kHz) from the filter stack, as synchronized by similar clock counting in the synchronization box described in Sec. 3.1. Fixed delays and the phase response of the readout are accounted for in the analysis. Each time the three MCEs receive a synchronization serial stamp from the synchronization box, they package the output of the 32×32 + 1×32 (32 dark SQUIDs) array and send it over a fiber optic to a PCI card on each acquisition computer, where it is buffered and written to disk. Additional information that fully specifies the MCE state is written to a text file. The three MCE computers in the equipment container are also responsible for coordinating MCE operations. SQUID tuning,12 detector biasing, calibration, and data acquisition are managed by a set of analysis codes in IDL and bash scripts. Messages from the high-level scheduler are passed through the interface server to python clients on each of the acquisition computers which initiate the request (see Fig. 1 and Sec. 5.1). The bash scripts also contain a call to a python code which passes the file information to a mySQL database for tracking, as described in Sec. 4.1. For the 2007 season, ACT used a prototype version of software developed for the SCUBA2 experiment for acquiring and storing the MCE data on the acquisition computer.
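The filter-and-decimate scheme can be sketched in software, with the 4-pole Butterworth realized as two cascaded biquads (a standard floating-point realization with textbook Butterworth Q values; the MCE implements this in fixed-point firmware, and all names here are ours):

```python
import math

def rbj_lowpass(fc, fs, q):
    """Biquad low-pass coefficients (standard audio-EQ-cookbook form)."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    cw = math.cos(w0)
    b = [(1 - cw) / 2, 1 - cw, (1 - cw) / 2]
    a0 = 1 + alpha
    return [bi / a0 for bi in b], [1.0, -2 * cw / a0, (1 - alpha) / a0]

def biquad(x, b, a):
    """Direct-form-I second-order IIR section."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b[0]*xn + b[1]*x1 + b[2]*x2 - a[1]*y1 - a[2]*y2
        x2, x1, y2, y1 = x1, xn, y1, yn
        y.append(yn)
    return y

FS_MUX = 15150.0   # native multiplexed read rate (Hz)
F3DB = 140.0       # Butterworth rolloff
DECIMATE = 38      # keep every 38th filtered record -> ~399 Hz

# 4-pole Butterworth as two cascaded biquads (Q = 0.5412, 1.3066).
sections = [rbj_lowpass(F3DB, FS_MUX, q) for q in (0.5412, 1.3066)]

def downsample(feedback):
    for b, a in sections:
        feedback = biquad(feedback, b, a)
    return feedback[::DECIMATE]

out = downsample([1.0] * 15150)   # one second of constant input
```

One second of input at 15.15 kHz yields 399 output samples, and the unity DC gain preserves the detector baseline.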
The SCUBA2 software included custom firmware for the PCI acquisition card, a driver compiled against a real-time kernel to provide low-latency handling of MCE data, and a master control program based on the DRAMA22 messaging system. For upcoming seasons, ACT will instead use the MCE Acquisition Software (MAS), recently developed at UBC. MAS provides a device driver for the PCI card, a C-language API for low- and intermediate-level interaction with the hardware, and applications for commanding the MCE from the Linux shell. The MAS driver does not require a real-time kernel, and can provide high data throughput and low single-frame latency without requiring sustained interrupt rates above 10 Hz.

3.3 Housekeeping and Thermal Control

Housekeeping comprises all electronics and systems other than the camera, its electronics, and the telescope motion control. The primary systems within housekeeping are the cryogenic thermal readouts and controls, the telescope health readouts, the telescope motion encoders, and auxiliary monitors, such as a flux-gate magnetometer. The telescope’s azimuth and elevation pointing are read by two Heidenhain encoders (27 bit, 0.0097′′ accuracy) through a Heidenhain IK220 PCI card in the housekeeping computer over RS485 from the cable wrap. ACT uses both ruthenium oxide (ROx) and diode thermometers to monitor cryogenic temperatures. The readout system for the diodes simply measures the DC voltage drop across them. For the ROx sensors, we use an AC-bridged 4-lead measurement driven at 212 Hz for each channel. The diodes and ROx sensors in the dewar are driven and read out by pre-amplifier boards in an RF-tight VME enclosure. Data acquisition electronics designed for the BLAST20 experiment digitize these signals at 10 kHz and apply a 50 Hz anti-aliasing filter so that they can be decimated to a 99.7 Hz (399/4) output.
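The AC-bridge readout amounts to a lock-in measurement. The following toy demodulator (parameter names ours, and a boxcar average standing in for the real anti-aliasing filter) recovers the bridge amplitude from a record digitized at 10 kHz with a 212 Hz excitation:

```python
import math

F_SAMP = 10_000.0   # digitizer rate (Hz)
F_BIAS = 212.0      # AC bridge excitation (Hz)

def demodulate(samples):
    """Multiply by the in-phase reference and average: a software lock-in.

    Averaging over an integer number of excitation periods rejects the
    2x-frequency term, leaving the bridge amplitude.
    """
    ref = [math.cos(2 * math.pi * F_BIAS * n / F_SAMP)
           for n in range(len(samples))]
    mixed = [s * r for s, r in zip(samples, ref)]
    return 2.0 * sum(mixed) / len(mixed)

# A 0.5 s record (exactly 106 excitation periods) with unit amplitude,
# which is proportional to the sensor resistance.
n = int(0.5 * F_SAMP)
sig = [math.cos(2 * math.pi * F_BIAS * t / F_SAMP) for t in range(n)]
amp = demodulate(sig)
```

The recovered amplitude would then be converted to a temperature through the lookup tables mentioned below.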
These signals are triggered and read out by the PCI card on the housekeeping computer in the equipment container through an RS485 cable from the telescope. The BLAST electronics synchronously demodulate the ROx channels, and lookup tables on the housekeeping computer provide conversions to temperature. Temperature control of the camera is managed by a servo that uses either the diode or ROx thermometer for a particular stage and the heater on that stage. The respective servos are weakly linked and are treated individually as PID loops with finite, decaying integral memory. For the first season, the 0.3 K stage temperature was controlled with a typical stability of < 40 µK. This stage holds the detector array, the bandpass filter and lens nearest to the array, and the cold enclosure surrounding the array. For subsequent seasons, the 4 K SQUID amplifier chain board and 0.3 K detector holder temperatures are controllable because we found that a small component of the system drift (greatly sub-dominant to the atmospheric drift) was tied to the amplifier chain temperature. We found that lens temperatures were stable enough that they need not be servoed. The remaining servos are coarse and used for cycling the adsorption refrigerators. The heaters are driven by a specially designed 18-channel heater board with 6 external channels, instrumented with current monitors and relays for driver management. Housekeeping includes data from a variety of auxiliary sources such as a 3-axis magnetometer, a dewar pressure monitor, and other system health monitors. Auxiliary channels in the receiver cabin that do not need to be synchronized with the science camera data are read out from a Sensoray 2600 DAQ in the receiver cabin through the site network by the housekeeping computer at rates from 1–20 Hz.
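A PID loop with finite, decaying integral memory can be sketched as follows. The gains, leak time constant, and class name are illustrative placeholders, not ACT's tuned values:

```python
import math

class LeakyPID:
    """PID servo whose integral term decays exponentially (finite memory)."""

    def __init__(self, kp, ki, kd, leak, setpoint):
        self.kp, self.ki, self.kd, self.leak = kp, ki, kd, leak
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, temp, dt):
        err = self.setpoint - temp
        # "Decaying integral memory": old error contributions leak away
        # with time constant `leak`, so a past disturbance cannot wind
        # the integrator up indefinitely.
        self.integral = self.integral * math.exp(-dt / self.leak) + err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp*err + self.ki*self.integral + self.kd*deriv

# One servo step: stage reads 299 mK against a 300 mK setpoint.
servo = LeakyPID(kp=0.5, ki=0.1, kd=0.0, leak=30.0, setpoint=0.300)
power = servo.update(temp=0.299, dt=0.01)   # commanded heater drive (arb.)
```

In the real system the output would be clamped to the heater's drivable range; that detail is omitted here.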
Weather data are provided both from an on-site WeatherHawk station and from the APEX collaboration.§ A CCD star camera (developed for BLAST20) can also be mounted to the top of the primary structure as needed to develop a pointing solution, accurate to 5′′. This was experimental for the first season but will be standard subsequently.

3.4 Telescope Motion

The telescope’s azimuth base is driven by two counter-torqued helical gears to minimize backlash, while the elevation axis (which remains fixed during observation) is driven by two ball screws at the sides of the receiver cabin. The secondary mirror can be repositioned with three mirror and two frame actuators,10 read out by linear variable differential transformer sensors. All of the telescope degrees of freedom are driven by a KUKA Robotics controller. The most critical of these axes is the azimuth, where we require a tight tolerance of 400 ms turnarounds at either end of a 10◦ wide, constant-velocity scan with a maximum velocity of 2◦ per second (observations used a velocity of 1◦ per second). This was achieved with a state-space controller.10 The KUKA robot comprises the motor drivers, an uninterruptible power supply, an embedded computer with a solid-state drive, and a portable console. The robot’s embedded computer has a set of templates in the KUKA-native robot language for executing observations, re-pointing, slewing to home/stow/maintenance positions, and warming up the mechanical systems. Motion parameters for the templates (such as the scan center and speed) are transferred over a DeviceNet¶ network between the housekeeping computer and the KUKA robot controller. Once the program and its parameters are loaded, the housekeeping computer sets a run bit and the scan commences. The robot controller can be commanded to stop by setting a stop bit on DeviceNet, which smoothly stops the telescope.
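The constant-velocity scan with fixed-length turnarounds can be sketched as a trajectory generator. The parabolic (constant-deceleration) turnaround and all parameter names are illustrative; they are not the KUKA template or its state-space controller:

```python
def scan_azimuth(t, center=0.0, width=10.0, speed=1.0, turnaround=0.4):
    """Azimuth command (deg) at time t (s) for a triangle-wave scan.

    Constant-velocity sweeps of `width` deg peak-to-peak at `speed`
    deg/s, joined by `turnaround`-long constant-acceleration reversals
    that match position and velocity at both ends.
    """
    sweep = width / speed                 # duration of one one-way sweep
    period = 2 * (sweep + turnaround)
    tau = t % period
    half = width / 2.0
    if tau < sweep:                       # sweeping in +az
        return center - half + speed * tau
    tau -= sweep
    if tau < turnaround:                  # decelerate and reverse at +az end
        return center + half + speed*tau - speed*tau**2/turnaround
    tau -= turnaround
    if tau < sweep:                       # sweeping in -az
        return center + half - speed * tau
    tau -= sweep                          # reverse at -az end
    return center - half - speed*tau + speed*tau**2/turnaround
```

With the defaults, each 10 s sweep overshoots the endpoint by 0.1 deg at mid-turnaround before reversing, which is the price of a velocity-continuous reversal in fixed time.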
KUKA’s robot also produces a stream of telescope health information (its internal encoder and resolver readouts, motor currents, and temperatures), which is broadcast via the user datagram protocol and recorded by the housekeeping computer at 50 Hz.

3.5 Computing

The ACT computer hardware is described in Table 1. Telescope operation and data acquisition are shared between three science camera acquisition computers, one housekeeping computer, an on-site data merger computer, and a telescope robot (embedded). Of these, the housekeeping computer has the most varied responsibilities: 1) readout of the primary position encoders, 2) absolute timing discipline, 3) cryogenic housekeeping control and acquisition, 4) command routing, 5) telescope robot messaging, and 6) cryogenic housekeeping and encoder synchronization. Storage in Chile is buffered from smaller pressurized drives on the mountain to a 4 TB RAID storage array in San Pedro de Atacama. While the data rate of the final, merged product for one array is 70 GB/day, we also keep the un-merged, raw device products for assurance, yielding around 140 GB/day total. (Subsequent seasons will only keep and transport the merged product. Section 4 describes data merging and products in detail.) The RAID storage unit was therefore able to buffer for less than one month before data needed to be shipped to North America. On-site storage will be upgraded to an 11 TB array with a 6.5 TB compute node for the three-array deployment in 2008.

§http://www.apex-telescope.org/weather/index.html
¶Open DeviceNet Vendors Association (ODVA; odva.org)

Table 1. Basic parameters of the ACT site and processing computers. Systems with “07”, “08” denote first and second season computing differences, respectively. The gateway server manages site access from the outside world, a website, and the file database. Terminals on-site and in the ground station display real-time data, and the “Chile analysis 08” node performs basic analysis and mapping tasks in the ground station to give immediate feedback to telescope operators.

System                          Location    CPU                      HD (GB)  RAM (GB)
Housekeeping                    Site        2 x 3 GHz                376      1
MCE acquisition '07             Site        1 x 3.4 GHz              152      0.5
MCE acquisition '08             Site (×3)   1 x 3.0 GHz              400      2
Data merging                    Site        2 x 3.0 GHz              752      1.5
Terminal 1                      Site        1 x 2.5 GHz              330      2
Gateway, server                 Ground      1 x 2.5 GHz              236      1
Terminal 2                      Ground      2 x 3 GHz                230      4
Chile storage '07               Ground      2 x 3.0 GHz              4000     8
Chile storage '08               Ground      2 x 3.0 GHz              11000    8
Chile analysis '08              Ground      2 x 4 x 2.5 GHz          6500     16
Main storage/analysis node '07  Princeton   4 x 2 x 2 GHz            9400     32
Main storage node '08           Princeton   2 x 2 x 2.8 GHz          48000    16
Main analysis node '08          Princeton   92 x 2 x 3 GHz Beowulf   2500     1-4

4. DATA PRODUCTS

4.1 Data merging

During observation, data from the three arrays in the science camera and the housekeeping are recorded independently. They are triggered synchronously and stamped (see Sec. 3.1), so it is convenient for analysis to merge these into a single data product, where the data records are aligned in time. From ∼ 10 AM to 9 PM, the adsorption refrigerators recycle, and we do not observe because of mirror distortions from solar heating.10 Thus, the telescope has two main data modes: one where the merged science data are produced and one, during the day, where only engineering data are produced. Each on-site data acquisition computer has a server which can stream data (either in real time or in a seek mode) to multiple clients. These clients can be either merger processes or operators that display the data in real time using kst.‖ Each camera observation is keyed to an entry in the mySQL file database through a python hook in the acquisition calls. The incoming camera data are flagged as either un-merged, in the process of merging, or merged.
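The merge-state bookkeeping can be sketched as a small state machine over a database; here sqlite3 stands in for the mySQL file database, and the table name, column names, and file name are illustrative, not ACT's schema:

```python
import sqlite3

# Each acquisition registers its file; the merger front-end claims
# un-merged entries and marks them merged when the C merger finishes.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (name TEXT PRIMARY KEY, status TEXT)")

def register(name):
    """Called from the acquisition hook when a new file appears."""
    db.execute("INSERT INTO files VALUES (?, 'unmerged')", (name,))

def claim_next():
    """Merger front-end: atomically take the oldest un-merged file."""
    row = db.execute("SELECT name FROM files WHERE status='unmerged' "
                     "ORDER BY name LIMIT 1").fetchone()
    if row:
        db.execute("UPDATE files SET status='merging' WHERE name=?", row)
    return row[0] if row else None

def mark_merged(name):
    db.execute("UPDATE files SET status='merged' WHERE name=?", (name,))

register("1197000000.ar1")     # hypothetical camera file name
name = claim_next()
mark_merged(name)
```

The same status column is what the transport daemons consult before moving or deleting files, so merging and transport share one source of truth.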
A Python front-end to the merger queries the database to determine which files still need to be merged and calls the main merger code (implemented in C). For each file to be merged, the merger client connects to the camera and housekeeping data servers and requests the first ∼1 second of data for that time. (If it is not able to find a common serial stamp, it seeks through the housekeeping data until a frame containing that stamp is found.) It then proceeds through the camera data, associating each record with the housekeeping record carrying the same stamp. We note that the merger rate greatly exceeds the acquisition rate, so the merger client may be met with partial data at the end of a file that is still being acquired. It waits ∼5 seconds for new data to be written; if nothing appears, this is interpreted as an end of file – the merger has no expectation of the file length.

The output merged data products are stored in the so-called “dirfile” format, in which each directory contains one file per channel plus a format file specifying the channel calibrations. With 1024 detector and 366 housekeeping files, there is a moderate filesystem overhead, and opening the output requires a large number of open file handles. We did not find these issues to outweigh the convenience the format affords. The dirfile format is native to kst, and it avoids large disk strides when reading a single detector at a time – this facilitates fast single-detector analysis tasks without impeding full-array analysis. The alternative is to adopt a flat format in which all channels are packed into a single file, such as the FITS standard; we found this awkward for the variety of data types and sampling rates.

‖ KST data visualization software, http://kst.kde.org/

Proc. of SPIE Vol. 7019 70192L-8

Table 2. A summary of merged data product channels. Size “m” denotes mixed (some 4-byte numbers and some bitfields).

Group                  System                  size (bytes)  rate (Hz)  quantity
camera data            MCE                     4             399        (32x32)+(1x32)
encoders + timing      Sync Box, Housekeeping  4             399        7
cryogenic thermometry  Housekeeping            4             99.7       48
telescope mechanical   KUKA, Housekeeping      4             1-99.7     80
housekeeping drivers   Housekeeping            m             1-99.7     46
auxiliary              Housekeeping            4             1-99.7     44
parameters, flags      Various                 m             1-399      213

Every computer runs a daemon process responsible for automatically transporting data from the telescope to the ground station and then to North America. Data are automatically relocated from the site computers to RAID storage in the ground station, and then removed from the site computers once the database has identified that they are older than three days and have been merged. Data in the ground station are likewise moved automatically to transfer disks, and deleted after they have been received in Princeton. Throughout, md5 checksums confirm that the data were compressed, transferred, and transported properly.

To manage the large data volume, we have developed a simple lossless delta-compression scheme that reduces the bit length of each record of a channel (exploiting the fact that each channel’s signal typically exercises only a fraction of the dynamic range of its data type). The algorithm compresses ∼3 times faster than gzip and decompresses at a rate similar to gzip. For the camera data, a savings of ∼60% is typical for the algorithm, compared to ∼10% for gzip. Housekeeping data typically achieve 80% and 70% for our algorithm and gzip, respectively. (Huffman coding of the reduced bits led to modest compression gains but doubled the processing time.)
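The encoder itself is not specified in detail above; the following is a minimal sketch of this style of per-block delta compression, in which a reduced bit width is chosen from the range of successive differences. The block size and representation are illustrative, and the actual ACT encoder (written for the C pipeline) performs true bit packing rather than keeping the differences as Python integers:

```python
def delta_compress(samples, block=256):
    """Delta-compression sketch: store each block's first value plus the
    successive differences, and record the smallest signed bit width
    that a packer would need to cover the block's dynamic range."""
    out = []
    for i in range(0, len(samples), block):
        chunk = samples[i:i + block]
        diffs = [b - a for a, b in zip(chunk, chunk[1:])]
        # Smallest signed bit width covering every difference in the block.
        width = max((d.bit_length() for d in map(abs, diffs)), default=0) + 1
        out.append((chunk[0], width, diffs))
    return out

def delta_decompress(blocks):
    """Invert the sketch exactly: the scheme is lossless."""
    samples = []
    for first, _width, diffs in blocks:
        samples.append(first)
        for d in diffs:
            samples.append(samples[-1] + d)
    return samples

data = [1000, 1002, 1001, 1005, 990, 990]
blocks = delta_compress(data, block=4)
restored = delta_decompress(blocks)
```

Because each block stores an absolute first value, a corrupted block damages at most `block` samples, and blocks with small excursions pack into far fewer bits than the native 4-byte records.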
5. COORDINATING OBSERVATIONS

5.1 Message Passing

The dominant design considerations for the software commanding system were that it: 1) provide a means to control telescope systems from North America through the Internet; 2) provide both a graphical interface and a “robot interface” for scheduled operation; 3) permit systems at the site to communicate if the microwave downlink were to fail; 4) tie the heterogeneous interfaces together into a common interface that is easy to manage; and 5) provide centralized logging and accountability for system-wide commands.

The model that we developed is related to an Internet “chat” client-server system. Both robot and human operators, as well as telescope systems, log in to a central command router on the mountain (the “Interface Server” in Fig. 1). Instruments can then declare themselves as destinations of known command groups. An operator client can then send a command to a destination, and the server will pass that information to any clients identifying with that destination. Each client logs in with a unique user name, and all commands/messages are logged with that name and a time stamp. Admissible commands are specified in a version-controlled database that the clients and server access. The operator-client GUI is built dynamically from this database of admissible commands and their data; flags specify the type of entry field (on/off, value, string), default values, and human-readable descriptions of the command or variable. The GUI also provides a server window which displays all current transactions and allows operators to send a global message (for other operators). We found this system convenient, especially during engineering periods, because several operators can run engineering tests simultaneously from multiple locations and stay mutually aware of the tests.
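The routing model above can be sketched as follows. The class and method names are illustrative, and the sketch runs in a single process, whereas the real Interface Server passes messages between clients over network connections:

```python
import time

class CommandRouter:
    """Sketch of a chat-style command router: clients declare themselves
    as destinations, and every command is centrally logged and forwarded."""

    def __init__(self):
        self.destinations = {}   # destination name -> list of client callbacks
        self.log = []            # centralized (user, time, destination, command) log

    def declare(self, destination, callback):
        # An instrument declares itself a destination for a command group.
        self.destinations.setdefault(destination, []).append(callback)

    def send(self, user, destination, command):
        # Every command is logged with the sender's name and a time stamp,
        # then passed to all clients identifying with the destination.
        self.log.append((user, time.time(), destination, command))
        for callback in self.destinations.get(destination, []):
            callback(command)

received = []
router = CommandRouter()
router.declare("housekeeping", received.append)
router.send("operator1", "housekeeping", {"cmd": "set_heater", "value": 0.5})
```

Keeping the log in the router, rather than in each client, is what gives the system-wide accountability named in design consideration 5.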
5.2 Scheduled observation

Observations with ACT are defined through a schedule file that specifies an execution time for each task, with given parameters. There is no decision structure in the scheduler itself. The primary reason is that our scan strategy involves simple scanning over several patches at constant elevation and central azimuth through the night, with occasional repointings to map a planet. Further, each mapping region is approximately covered each night, so that a missed patch will be covered subsequently. To control systematic errors, maintaining a constant pointing takes precedence over repointing to measure strips that were missed during a fault. Scheduled events are translated to specific device calls in the schedule dispatcher; the interface server then passes these to the designated devices (see Fig. 1).

6. OPERATIONAL CONSIDERATIONS

In this section, we describe the engineering issues associated with the physical situation of the telescope. The most important of these are the site altitude, accessibility, and the power consumption constraints of both the site and the ground station. The failure rate of hard drives increases rapidly at lower pressures as the head flying height decreases, making operation unreliable by 5190 m.23,24 For the first season, we used a mixture of pressurized ATA/IDE and USB hard drives, while subsequent seasons will use exclusively pressurized SATA or USB interfaces. We did not find solid-state drives (either flash or RAM) to be a cost-effective solution for the data volumes required. For all computers at the site, the bandwidth of USB 2.0 is sufficient for the acquisition tasks, and modern BIOSes and Linux kernels permit booting directly from an external USB device.
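As a rough cross-check of that bandwidth claim, the camera row of Table 2 implies an acquisition rate far below the USB 2.0 signaling rate. (Treating the Table 2 channel count as the load carried on a single bus is an assumption made here for illustration.)

```python
# Rough cross-check that acquisition bandwidth fits within USB 2.0,
# using the camera channel count and sample rate from Table 2.
camera_channels = 32 * 32 + 1 * 32          # (32x32) + (1x32) channels
camera_bps = camera_channels * 4 * 399 * 8  # 4-byte samples at 399 Hz, bits/s

usb2_bps = 480e6                            # USB 2.0 signaling rate
print(f"camera data: {camera_bps / 1e6:.1f} Mbit/s of {usb2_bps / 1e6:.0f} Mbit/s")
```

Even before protocol overhead, the camera stream is roughly 13.5 Mbit/s, a few percent of the bus capacity.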
All systems that require fast transfers to transport disks and large (11 TB) storage are in the ground station, at an altitude of 2400 m, where pressurized drives are no longer needed. The data downlink from the telescope site to the ground station is provided by a Motorola Canopy Power IDU, which operates at 5 GHz, typically at a power of 63 mW. While the link capacity at this power is 94 Mbps (matched to the 100 Mbps ground-station network), typical data rates are somewhat lower. From the ground station, continuous Internet service is provided by Entel, permitting telescope operators to access the site computers from anywhere on the Internet. Electricity in San Pedro de Atacama is sometimes sporadic, and restricted in the evening, so it must be supplemented by a diesel generator for the ground-station electronics in order to keep the Internet link live, to display system health to operators in San Pedro de Atacama or North America, and to keep up with buffering of data from the site. The generator is provided by Astro Norte, along with the Internet connection and office space.

7. CONCLUSIONS

ACT shares many design considerations with other modern CMB and sub-mm experiments whose goal is to operate large, cryogenically cooled detector arrays at high-altitude, remote sites. These efforts present many unique hardware and software challenges. Here we have reviewed the science camera, housekeeping, and telescope systems (hardware), and the automated file transfer and merging process and distributed commanding systems (software). Because of the particular constraints, we have developed a suite of software to schedule and command telescope-wide tasks, merge the resultant data, and track and transport these data to North America. Some parts of the system were inherited from the BLAST experiment, while the majority was developed from scratch specifically for ACT over several years.
We believe that many of the components and concepts described here are reusable in projects with similar constraints.

ACKNOWLEDGMENTS

This work is supported by the U.S. National Science Foundation through awards AST-0408698 and PHY-0355328. Princeton University and the University of Pennsylvania also provided financial support. We thank Walter Brzezik, Mike Cozza, Jeff Funke, and Ye Zhou for their work with the telescope; Bert Harrop, the Princeton machine shop, and the University of Pennsylvania machine shop for their work on the design and construction of MBAC and its electronics; and Vinod Gupta and Craig Loomis for their support with computing.

REFERENCES

[1] Hinshaw, G., Weiland, J. L., Hill, R. S., Odegard, N., Larson, D., Bennett, C. L., Dunkley, J., Gold, B., Greason, M. R., Jarosik, N., Komatsu, E., Nolta, M. R., Page, L., Spergel, D. N., Wollack, E., Halpern, M., Kogut, A., Limon, M., Meyer, S. S., Tucker, G. S., and Wright, E. L., “Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Data Processing, Sky Maps, and Basic Results,” ArXiv e-prints 803 (Mar. 2008).
[2] Komatsu, E., Dunkley, J., Nolta, M. R., Bennett, C. L., Gold, B., Hinshaw, G., Jarosik, N., Larson, D., Limon, M., Page, L., Spergel, D. N., Halpern, M., Hill, R. S., Kogut, A., Meyer, S. S., Tucker, G. S., Weiland, J. L., Wollack, E., and Wright, E. L., “Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Interpretation,” ArXiv e-prints 803 (Mar. 2008).
[3] Reichardt, C. L., Ade, P. A. R., Bock, J. J., Bond, J. R., Brevik, J. A., Contaldi, C. R., Daub, M. D., Dempsey, J. T., Goldstein, J. H., Holzapfel, W. L., Kuo, C. L., Lange, A. E., Lueker, M., Newcomb, M., Peterson, J. B., Ruhl, J., Runyan, M. C., and Staniszewski, Z., “High resolution CMB power spectrum from the complete ACBAR data set,” ArXiv e-prints 801 (Jan. 2008).
[4] Ruhl, J., Ade, P. A. R., Carlstrom, J. E., Cho, H.-M., Crawford, T., Dobbs, M., Greer, C. H., Halverson, N. W., Holzapfel, W. L., Lanting, T. M., Lee, A. T., Leitch, E. M., Leong, J., Lu, W., Lueker, M., Mehl, J., Meyer, S. S., Mohr, J. J., Padin, S., Plagge, T., Pryke, C., Runyan, M. C., Schwan, D., Sharp, M. K., Spieler, H., Staniszewski, Z., and Stark, A. A., “The South Pole Telescope,” Proc. SPIE 5498, 11 (2004).
[5] Sayers, J., Golwala, S. R., Rossinot, P., Ade, P. A. R., Aguirre, J. E., Bock, J. J., Edgington, S. F., Glenn, J., Goldin, A., Haig, D., Lange, A. E., Laurent, G. T., Mauskopf, P. D., and Nguyen, H. T., “A Search for Cosmic Microwave Background Anisotropies on Arcminute Scales with Bolocam,” ArXiv e-prints 805 (May 2008).
[6] Carlstrom, J. E., Holder, G. P., and Reese, E. D., “Cosmology with the Sunyaev-Zel’dovich Effect,” Ann. Rev. Astron. Astrophys. 40, 643–680 (2002).
[7] Lewis, A. and Challinor, A., “Weak gravitational lensing of the CMB,” Phys. Rep. 429, 1–65 (June 2006).
[8] Kosowsky, A., “The Future of Microwave Background Physics,” in [The Emergence of Cosmic Structure], Holt, S. H. and Reynolds, C. S., eds., American Institute of Physics Conference Series 666, 325–335 (May 2003).
[9] Kosowsky, A., “The Atacama Cosmology Telescope project: A progress report,” New Astronomy Review 50, 969–976 (Dec. 2006).
[10] Hincks, A. D., Ade, P. A. R., Allen, C., Amiri, M., Appel, J. W., Battistelli, E. S., Burger, B., Chervenak, J. A., Dahlen, A. J., Denny, S., Devlin, M. J., Dicker, S. R., Doriese, W. B., Dünner, R., Essinger-Hileman, T., Fisher, R. P., Fowler, J. W., Halpern, M., Hargrave, P. C., Hasselfield, M., Hilton, G. C., Irwin, K. D., Jarosik, N., Kaul, M., Klein, J., Lau, J. M., Limon, M., Lupton, R. H., Marriage, T. A., Martocci, K. L., Mauskopf, P., Moseley, S. H., Netterfield, C. B., Niemack, M. D., Nolta, M. R., Page, L., Parker, L. P., Sederberg, A. J., Staggs, S. T., Stryzak, O. R., Swetz, D. S., Switzer, E. R., Thornton, R. J., Tucker, C., Wollack, E. J., and Zhao, Y., “The effects of the mechanical performance and alignment of the Atacama Cosmology Telescope on the sensitivity of microwave observations,” also in these Proc. SPIE.
[11] Swetz, D. S., Ade, P. A. R., Allen, C., Amiri, M., Appel, J. W., Battistelli, E. S., Burger, B., Chervenak, J. A., Dahlen, A. J., Das, S., Denny, S., Devlin, M. J., Dicker, S. R., Doriese, W. B., Dünner, R., Essinger-Hileman, T., Fisher, R. P., Fowler, J. W., Gao, X., Hajian, A., Halpern, M., Hargrave, P. C., Hasselfield, M., Hilton, G. C., Hincks, A. D., Irwin, K. D., Jarosik, N., Kaul, M., Klein, J., Knotek, S., Lau, J. M., Limon, M., Lupton, R. H., Marriage, T. A., Martocci, K. L., Mauskopf, P., Moseley, S. H., Netterfield, C. B., Niemack, M. D., Nolta, M. R., Page, L., Parker, L. P., Reid, B. A., Reintsema, C. D., Sederberg, A., Sehgal, N., Sievers, J. L., Spergel, D. N., Staggs, S. T., Stryzak, O. R., Switzer, E. R., Thornton, R. J., Tucker, C., Wollack, E. J., and Zhao, Y., “Instrument design and characterization of the Millimeter Bolometer Array Camera on the Atacama Cosmology Telescope,” also in these Proc. SPIE.
[12] Battistelli, E. S., Amiri, M., Burger, B., Devlin, M. J., Dicker, S. R., Doriese, W. B., Dünner, R., Fisher, R. P., Fowler, J. W., Halpern, M., Hasselfield, M., Hilton, G. C., Hincks, A. D., Irwin, K. D., Kaul, M., Klein, J., Knotek, S., Lau, J. M., Limon, M., Marriage, T. A., Niemack, M. D., Page, L., Reintsema, C. D., Staggs, S. T., Swetz, D. S., Switzer, E. R., Thornton, R. J., and Zhao, Y., “Automated SQUID tuning procedure for kilopixel arrays of TES bolometers on ACT,” also in these Proc. SPIE.
[13] Zhao, Y., Allen, C., Amiri, M., Appel, J. W., Battistelli, E. S., Burger, B., Chervenak, J. A., Dahlen, A., Denny, S., Devlin, M. J., Dicker, S. R., Doriese, W. B., Dünner, R., Essinger-Hileman, T., Fisher, R. P., Fowler, J. W., Halpern, M., Hilton, G. C., Hincks, A. D., Irwin, K. D., Jarosik, N., Klein, J., Lau, J. M., Marriage, T. A., Martocci, K., Moseley, H., Niemack, M. D., Page, L., Parker, L. P., Sederberg, A., Staggs, S. T., Stryzak, O. R., Swetz, D. S., Switzer, E. R., Thornton, R. J., and Wollack, E. J., “Characterization of transition edge sensors for the Millimeter Bolometer Array Camera on the Atacama Cosmology Telescope,” also in these Proc. SPIE.
[14] Thornton, R. J., Ade, P. A. R., Allen, C., Amiri, M., Appel, J. W., Battistelli, E. S., Burger, B., Chervenak, J. A., Devlin, M. J., Dicker, S. R., Doriese, W. B., Essinger-Hileman, T., Fisher, R. P., Fowler, J. W., Halpern, M., Hargrave, P. C., Hasselfield, M., Hilton, G. C., Hincks, A. D., Irwin, K. D., Jarosik, N., Kaul, M., Klein, J., Lau, J. M., Limon, M., Marriage, T. A., Martocci, K., Mauskopf, P., Moseley, H., Niemack, M. D., Page, L., Parker, L. P., Reidel, J., Reintsema, C. D., Staggs, S. T., Stryzak, O. R., Swetz, D. S., Switzer, E. R., Tucker, C., Wollack, E. J., and Zhao, Y., “Optomechanical design and performance of a compact three-frequency camera for the MBAC receiver on the Atacama Cosmology Telescope,” also in these Proc. SPIE.
[15] Battistelli, E. S., Amiri, M., Burger, B., Halpern, M., Knotek, S., Ellis, M., Gao, X., Kelly, D., Macintosh, M., Irwin, K., and Reintsema, C., “Functional Description of Read-out Electronics for Time-Domain Multiplexed Bolometers for Millimeter and Sub-millimeter Astronomy,” Journal of Low Temperature Physics, 146 (Jan. 2008).
[16] Niemack, M. D., Zhao, Y., Wollack, E., Thornton, R., Switzer, E. R., Swetz, D. S., Staggs, S. T., Page, L., Stryzak, O., Moseley, H., Marriage, T. A., Limon, M., Lau, J. M., Klein, J., Kaul, M., Jarosik, N., Irwin, K. D., Hincks, A. D., Hilton, G. C., Halpern, M., Fowler, J. W., Fisher, R. P., Dünner, R., Doriese, W. B., Dicker, S. R., Devlin, M. J., Chervenak, J., Burger, B., Battistelli, E. S., Appel, J., Amiri, M., Allen, C., and Aboobaker, A. M., “A Kilopixel Array of TES Bolometers for ACT: Development, Testing, and First Light,” Journal of Low Temperature Physics, 114 (Jan. 2008).
[17] Fowler, J. W., Niemack, M. D., Dicker, S. R., Aboobaker, A. M., Ade, P. A. R., Battistelli, E. S., Devlin, M. J., Fisher, R. P., Halpern, M., Hargrave, P. C., Hincks, A. D., Kaul, M., Klein, J., Lau, J. M., Limon, M., Marriage, T. A., Mauskopf, P. D., Page, L., Staggs, S. T., Swetz, D. S., Switzer, E. R., Thornton, R. J., and Tucker, C. E., “Optical design of the Atacama Cosmology Telescope and the Millimeter Bolometric Array Camera,” Applied Optics 46, 3444–3454 (June 2007).
[18] Lau, J., Fowler, J., Marriage, T., Page, L., Leong, J., Wishnow, E., Henry, R., Wollack, E., Halpern, M., Marsden, D., and Marsden, G., “Millimeter-wave antireflection coating for cryogenic silicon lenses,” Applied Optics 45, 3746–3751 (June 2006).
[19] Lau, J., Benna, M., Devlin, M., Dicker, S., and Page, L., “Experimental tests and modeling of the optimal orifice size for a closed cycle 4He sorption refrigerator,” Cryogenics 46, 809–814 (Nov. 2006).
[20] Pascale, E., Ade, P. A. R., Bock, J. J., Chapin, E. L., Chung, J., Devlin, M. J., Dicker, S., Griffin, M., Gundersen, J. O., Halpern, M., Hargrave, P. C., Hughes, D. H., Klein, J., MacTavish, C. J., Marsden, G., Martin, P. G., Martin, T. G., Mauskopf, P., Netterfield, C. B., Olmi, L., Patanchon, G., Rex, M., Scott, D., Semisch, C., Thomas, N., Truch, M. D. P., Tucker, C., Tucker, G. S., Viero, M. P., and Wiebe, D. V., “The Balloon-borne Large Aperture Submillimeter Telescope: BLAST,” ArXiv e-prints 711 (Nov. 2007).
[21] Chervenak, J. A., Irwin, K. D., Grossman, E. N., Martinis, J. M., Reintsema, C. D., and Huber, M. E., “Superconducting multiplexer for arrays of transition edge sensors,” Applied Physics Letters 74, 4043 (June 1999).
[22] Bailey, J. A., Farrell, T., and Shortridge, K., “DRAMA: an environment for distributed instrumentation software,” in [Telescope Control Systems], Wallace, P. T., ed., Proc. SPIE 2479, 62–68 (June 1995).
[23] Deng, Y., Tsai, H.-C., and Nixon, B., “Drive-level flying height measurements and altitude effects,” IEEE Transactions on Magnetics 30(6), 4191–4193 (Nov. 1994).
[24] Strom, B., Lee, S., Tyndall, G., and Khurshudov, A., “Hard disk drive reliability modeling and failure prediction,” Asia-Pacific Magnetic Recording Conference 2006, 1–2 (Nov.–Dec. 2006).

