A study of image recovery techniques for radio long baseline interferometry Steer, David G. 1983

A STUDY OF IMAGE RECOVERY TECHNIQUES FOR RADIO LONG BASELINE INTERFEROMETRY

by

DAVID G. STEER
B.Sc. (Eng) Queen's University at Kingston, 1972
M.Sc. (Eng) Queen's University at Kingston, 1974

A Thesis Submitted in Partial Fulfilment of the Requirements for the Degree of Doctor of Philosophy in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF ELECTRICAL ENGINEERING

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
OCTOBER 1983
© David G. Steer, 1983

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study by employees and students of the University of British Columbia. I further agree that permission for copying of this thesis for scholarly purposes may be granted by the Head of the Electrical Engineering Department. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission. This authorization does not transfer copyright to the University.

Department of Electrical Engineering
The University of British Columbia, 1956 Main Mall, Vancouver, Canada, V6T 1Y3

Abstract

This thesis is concerned with data processing techniques used to form images from long baseline interferometers (LBI). The basic problems to be overcome in the imaging process are to correct for the large phase errors introduced by the atmosphere and to allow for the sparse sampling of the aperture. The objectives of the study were to determine the relative importance of the constraints placed on the image and to look for improvements to the process.
It is concluded that only a limited class of objects can be imaged. These objects must consist of separated features which remain separated in the zero-phase (or auto-correlation) image. Positivity and confinement were determined to be the most significant constraints for the imaging process. Closure-phase information is of secondary importance and is used to extend the dynamic range of the image once the main features are established. The main agent for enforcing the confinement constraint is the "CLEAN" algorithm. In the recovery of objects with extended features, the imaging process fails due to the well known inability of CLEAN to correctly restore these features. A new CLEAN algorithm has been developed which is stable for extended objects. This algorithm will be of interest to many synthesis telescope users as it not only yields better results but is also significantly faster and easier to use than the earlier algorithms. Other experiments and analysis have shown that imaging processes which operate by maximizing a global image parameter such as sharpness or entropy cannot be used for imaging with large phase errors. This class of algorithms will always be confused by the zero-phase image. It was possible, however, to combine the global technique of maximum entropy with the additional constraints of confinement and closure-phase to form a working algorithm.
Contents

Abstract
Contents
List of Figures
Acknowledgements
Summary
A-1 Introduction and Overview
A-1.1 The Imaging Problem
A-1.2 The LBI Imaging Problems
A-1.3 The LBI Image and Data Constraints
A-2 Imaging Algorithms
A-2.1 Signal Processing
A-2.2 Image Processing
A-2.2a Image Sharpening
A-2.2b Iterative Constraint Algorithms
A-2.2c Experiments using Image Constraints
A-2.2d A New Algorithm for Deconvolving the Beam
A-2.3 Summary of Experimental Results
A-3 Conclusions
B-1 Aperture Synthesis Concepts
B-1.1 The Aperture Plane
B-1.2 Aperture Sampling
B-1.3 Earth Rotation Synthesis
B-1.4 Aperture Sampling Tracks
B-1.5 Long Baseline Interferometer System
B-1.6 Fringes
B-1.7 Image Reconstruction
B-2 Deconvolving the Beam
B-2.1 Reducing the Effects of the Beam
B-2.2 The Standard CLEAN Algorithm
B-2.3 Forming Stripes with CLEAN
B-2.4 Arithmetic Errors
B-2.5 The Modified Algorithm
B-2.6 Example Images
B-2.7 Conclusions
B-3 Images and Phase Errors
B-3.1 Examples of Images (Almost) without Phase
B-3.2 Phase Distortion - Spreading Out
B-3.3 Phase Distortion - Zero-Phase Map
B-3.4 Notes on Phase Constants Other Than Zero
B-3.5 Speckle Patterns
B-4 Closure Phase Constraint
C Speckle Processing
C-1 Atmospheric Distortion
C-2 Autocorrelation Imaging - Averaging Magnitudes
C-3 Obtaining the Phases - Averaging Differences
C-4 Speckle and LBI
C-5 Speckle Simulation Experiments
D Global Image Parameter Maximization
D-1 Adaptive Optics
D-2 Global Parameters and LBI Imaging
D-3 Global Image Parameters
D-4 Correction Algorithm
D-5 Experiments
D-6 Results
D-7 Conclusions and Comments
E Iterative Reconstruction Methods
E-1 Iterative Imaging Algorithms
E-2 Image Constraint Algorithms
E-3 Phase Closure Method
E-4 Self Calibration
E-5 Filtering and De-convolving the Beam
E-6 Aperture Sampling Simulations
E-7 Summary of Iterative Algorithm Results
References

List of Figures

A-1.1 The Imaging Problem.
A-1.2 Imaging with A Priori Information.
A-1.3 Venn Diagram showing intersection of data & positive image sets.
A-1.4 Schematic Diagram of Long Baseline Interferometer.
A-1.5 Sampling Tracks for 8 Antenna East-West Array.
A-2.1 Generalized Iterative Imaging Algorithm for LBI.
A-2.2-7 Example Images Recovered by Iterative Algorithms.
A-2.8 Example zero-phase images showing separation of features.
B-1.1 Image Formation by a Lens.
B-1.2 Image Formation by Aperture Synthesis.
B-1.3 Interference Fringe Pattern.
B-1.4 Phase of Fringe Pattern.
B-1.5 Baseline Rotation as seen by Source.
B-1.6a Location of Baseline on the Earth.
B-1.6b Baseline Calculation from Longitude and Latitude of Antennas.
B-1.7 Baseline Sampling Tracks in Aperture Plane.
B-1.8 Baseline Tracks for Various Source Declinations.
B-1.9 Effect of Curvature of the Earth on Baseline Longitude.
B-1.10 Long-Baseline Interferometer System.
B-1.11 Real Part of Aperture Signal for Two Gaussian Sources.
B-1.12 Sampling Tracks for 8 Antenna East-West Array.
B-1.13 Simulated "fringe" Records for Two Source Object.
B-1.14 Aperture Synthesis Image Processing.
B-2.1 Diagram showing the "Standard" CLEAN algorithm.
B-2.2 Illustration of ripples impressed into extended source.
B-2.3 Diagram showing the "Modified" CLEAN algorithm.
B-2.4 Test Dirty Beam B (pattern and aperture).
B-2.5 The Test Results from CLEAN Algorithms:
B-2.5a The test object O.
B-2.5b The Dirty map D.
B-2.5c Density file result from "modified" CLEAN.
B-2.5d Density file result from "spiked" CLEAN.
B-2.5e Density file result from "standard" CLEAN.
B-2.6 Test image and results with noise added.
B-3.1 Phase Distorted Image Examples: Test Object.
B-3.2 Undistorted Image.
B-3.3 Distorted Image with constant phase error.
B-3.4 Distorted Image with 36 degree errors.
B-3.5 Distorted Image with 180 degree errors.
B-3.6 Distorted Image with 360 degree errors.
B-3.7 Zero Phase Image.
B-3.8 Undistorted Image.
B-3.9 Test Object CTB1.
B-3.10 Undistorted Image.
B-3.11 Zero Phase Image.
B-3.12 Distorted Image with 18 degree errors.
B-3.13 Distorted Image with 36 degree errors.
B-3.14 Distorted Image with 72 degree errors.
B-3.15 Test Object with narrow sources.
B-3.16 Speckle Pattern with 108 degree errors and narrow sources.
B-3.17 Test Object with wide sources.
B-3.18 Speckle Pattern with 108 degree errors and wide sources.
B-4.1 Closure Phase Relation for Three Antenna Array.
B-4.2 Phase & Amplitude Closure for Non-redundant Array.
C-1 Imaging with Atmosphere.
C-2 Labeyrie Algorithm for Obtaining Autocorrelation Image.
C-3 Multiple Paths in u-v Plane.
C-4 Knox-Thompson Algorithm for Image Recovery.
C-5 u-v Sampling Tracks for East-West Synthesis Array (8 antennas).
C-6 Simulated LBI Speckle Imaging.
C-7 Simulated Knox-Thompson Algorithm (example 1).
C-8 Simulated Knox-Thompson Algorithm (example 2).
C-9 Modified Algorithm for LBI (example 1).
C-10 Modified Algorithm for LBI (example 2).
D-1 Adaptive Optic System.
D-2a Sharpness Parameters for Images: CTB1 Object.
D-2b Undistorted Image (with beam).
D-2c Zero Phase - Autocorrelation Image.
D-2d 10% Phase Errors.
D-2e 20% Phase Errors.
D-3 Sharpness Data Processing.
D-4 Sharpness Program Operation.
D-5 Variation in Sharpness Parameter with Antenna Phase.
D-6 Sharpened Image (small errors, XCTB1±36°).
D-7 Sharpened Image (larger errors, XCTB1±360°).
D-8 Sharpened Image (MCTB1±100°).
D-9 Sharpened Image (S101±100°).
E-1 Image Constraint Reconstruction (example 1).
E-2 Image Constraint Reconstruction (example 2).
E-3 CTB1 First Hybrid Maps: Variation with Reference Antenna.
E-3 continued.
E-4 Hybrid Map Error shown with Antenna Parameter.
E-5 Hybrid Map Error shown with Average Antenna Baseline Length.
E-6 Hybrid Image from Averaged Closure Sums.
E-7 Object and Image recovered with Self-Calibration/CLEAN (M_nz ~ 1/2).
E-8 Object and Image recovered with Self-Calibration/CLEAN (M_nz ~ 1/2).
E-9 Extended Object recovered with Self-Calibration and Modified CLEAN (M_nz ~ 1/2).

Acknowledgements

This study was made possible through the cooperation of the University of British Columbia and the Herzberg Institute of Astrophysics. The work was performed at the Dominion Radio Astrophysical Observatory near Penticton. The Natural Science and Engineering Research Council of Canada and the H.R. MacMillan Family have been most generous in providing a scholarship and a fellowship in support of this project. Their financial support has been greatly appreciated. Thanks to Peter Dewdney, who supervised the work in detail at the D.R.A.O. site, and to Mabo Ito, who supervised things from U.B.C. Special thanks to the Observatory Director, Lloyd Higgs, for encouraging the work and for providing a speedy version of the standard CLEAN algorithm, and to Jim Caswell for discussions and for providing several test images. Peter Dewdney, Carmen Costain, and Tom Landecker, all of D.R.A.O., also generously supplied data for objects to be used for testing. John Galt, who was director of D.R.A.O. when the work was begun, was also very gracious in providing support, office space, and encouragement for the project. I would also like to extend thanks to my parents, Russell and Ruth Steer, for making the whole project possible through financial support, by encouraging me to begin the studies, and by providing encouragement to finish them when the end seemed a long way off. A word of thanks is also due to the management of Bell-Northern Research Limited for making possible the extended leave period during which the study was performed, and for patiently maintaining an offer of re-employment through a difficult period of economic times.

Summary

Long baseline interferometry (LBI) imaging is a difficult problem in both the areas of instrumentation and data processing. This thesis study has focused on the data processing algorithms used to convert the calibrated measurements into images. Several different algorithms have been used by observers; however, they are not always reliable, and the objectives of the study have been to examine the algorithms, to determine the significance of the constraints applied, and to determine the conditions for which imaging will be successful. It has been possible to define the class of objects that can be imaged, to make significant improvements to the CLEAN deconvolution algorithm, and to conclude that positivity and confinement are the most important constraints for the algorithms. Two major problems must be overcome in the data processing. These are to correct for the large phase errors introduced by the atmosphere and to allow for the sparse sampling of the aperture. The large phase errors completely scramble the image, and the sparse sampling further contaminates the image with a complicated beam pattern.
These problems cannot be solved by analytic methods, as the data by itself does not uniquely define an image, and it is necessary to incorporate information in addition to the data to find a solution. Various non-linear algorithms are used to combine extra constraints with the data. For this study, the algorithms have been simulated with a series of computer programs, and the effects of various combinations of algorithms and constraints have been tested with different classes of objects. The algorithms and constraints tested include signal processing techniques (speckle), global image parameter maximization processes (sharpness or entropy), positivity, confinement and closure-phase constraints, and the deconvolution algorithms CLEAN and the maximum entropy method (MEM). The experiments have shown that positivity and confinement are the major forces in the recovery process, and that objects that are not confined cannot be imaged. The closure-phase constraints are of secondary importance and are used to provide the fine details and to extend the dynamic range of the image once the major features are established by the confinement constraint. The published algorithms which incorporate closure-phase force a strong confinement constraint by the use of the CLEAN algorithm. Experiments that have applied closure-phase constraints without confinement have failed to operate successfully. The importance of confinement was demonstrated when these experiments operated successfully with confinement introduced as an explicit operation. From these experiments and by analogy with optical imaging, it is concluded that for LBI imaging to be successful, the object must consist only of positive separated features which remain separated in the zero-phase (or auto-correlation) image.
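The closure-phase constraint mentioned above can be illustrated with a short numerical sketch (not from the thesis; the phase values are arbitrary): on a triangle of three antennas, each measured baseline phase is corrupted by the difference of two antenna-based atmospheric errors, but the sum of the phases around the triangle is independent of those errors.

```python
import numpy as np

rng = np.random.default_rng(0)

# True visibility phases on the three baselines of a 3-antenna array
# (arbitrary illustrative values, in radians).
true_phase = {"12": 0.3, "23": -0.8, "31": 0.5}

# Unknown antenna-based atmospheric phase errors for antennas 1, 2, 3.
ant_err = rng.uniform(-np.pi, np.pi, size=3)

def measured(ij):
    """Measured baseline phase: true phase plus the antenna error difference."""
    i, j = int(ij[0]) - 1, int(ij[1]) - 1
    return true_phase[ij] + ant_err[i] - ant_err[j]

# Closure phase: the sum around the triangle.  Each antenna error enters
# once with a plus sign and once with a minus sign, so it cancels.
closure_meas = sum(measured(b) for b in ("12", "23", "31"))
closure_true = sum(true_phase.values())

print(np.isclose(closure_meas, closure_true))  # True
```

Only baseline-phase combinations of this form survive the atmospheric corruption, which is why closure phase supplies partial but error-free phase information.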
This means that for imaging without phase information, it is necessary that the area between the components ("white space") in the object be greater than 1/2 the total area. The prospects for successful imaging can thus be checked before work is begun by testing for overlap in the zero-phase image. If the object meets the confinement constraint, and closure-phase information is available, then the image can be determined to an accuracy independent of the phase errors. Other experiments coupled with analysis have shown that imaging algorithms which maximize a global image parameter such as entropy or sharpness cannot, by themselves, be used for imaging with large phase errors. Maximizing entropy is, therefore, not a solution to the LBI imaging problem. Experiments have shown, however, that MEM can be combined with the constraints of confinement and closure-phase to form a working algorithm. This new algorithm gives comparable results to the standard self-calibration/CLEAN algorithm, although it is slower in operation. The CLEAN algorithm has been found to play an important role in the imaging process, both for its ability to deconvolve the telescope beam and for its ability to separate the image into components. The final accuracy of the image is mainly limited by CLEAN's ability to work with the array sampling pattern and the noise. The standard CLEAN algorithm, however, often fails to correctly reproduce extended object features. These are reproduced as a series of stripes or ridges rather than as a smooth distribution. A new form of the algorithm has been developed which is stable for extended objects. This algorithm has enabled the recovery of simulated LBI images that were more extended than was possible with the standard algorithm.
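The pre-imaging check described above can be sketched numerically (a hypothetical 1-D object, not taken from the thesis): the zero-phase image is the inverse transform of the squared visibility magnitudes, i.e. the autocorrelation of the object, and one can test whether the features remain separated in it and whether the filled area is small enough.

```python
import numpy as np

# Toy 1-D "object": two separated point-like features on a 64-sample grid.
n = 64
obj = np.zeros(n)
obj[10] = 1.0
obj[40] = 0.5

# The zero-phase image is the inverse transform of |V|^2, which is the
# (circular) autocorrelation of the object.
vis = np.fft.fft(obj)
zero_phase = np.real(np.fft.ifft(np.abs(vis) ** 2))

# Support ("filled" area) of the object and of its autocorrelation.
support = obj > 1e-8
ac_support = np.abs(zero_phase) > 1e-8

# Two separated features give three separated autocorrelation peaks
# (at lag 0 and at plus/minus the separation), and the filled fraction
# is well under 1/2, so this object passes the confinement check.
print(support.sum(), ac_support.sum(), support.sum() / n)
```

For a badly confined object the autocorrelation peaks would merge, and the test would warn that imaging without phase information will fail.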
As the CLEAN algorithm is widely used for synthesis telescopes, this new algorithm will also be of considerable interest for other than LBI imaging. The new algorithm gives better results for extended objects, and it is significantly faster than the standard algorithm. There are also fewer controls to be set by the user. Imaging is possible from LBI data with large phase errors by the use of iterative algorithms incorporating Self-Calibration and CLEAN when the object is suitably confined, so that the features remain separated in the auto-correlation image and the CLEAN algorithm operates. These two conditions can be checked with the auto-correlation image before the imaging work begins.

A-1 Introduction and Overview

Over the years astronomers have developed algorithms for imaging with their aperture synthesis instruments. With the proposals to build dedicated long baseline transcontinental arrays [41] has come some concern about the reliability of these algorithms. While many interesting results have been reported from long baseline observations, there is concern in two areas. In the first place, the algorithms are reported to fail in some cases; secondly, the algorithms often require an initial "guess" about the object, and there is concern that perhaps this may lead to an image more influenced by the experimenter than by the data. The study of this thesis has been to examine the long baseline synthesis imaging algorithms (and some from other fields) to see when they give a true result, and under what conditions they fail. It was of particular interest to learn the relative importance of the image constraints to the algorithm and to define the class of objects that could be reliably imaged.
It also seemed a good idea to be on the lookout for ways to improve the methods. Several new approaches have been tried, and some improvements have been made to the standard algorithms. It is concluded that very good imaging is possible for a restricted class of object and that some algorithms are better than others. The constraints of "confinement" and "closure-phase" proved to be crucial in the imaging process, although neither alone would guarantee a satisfactory image. The object must meet a separation criterion before the imaging will be successful. The thesis is organized in the following manner. The remainder of this section "A" outlines the problem, the experimental results, and the conclusions. This includes a general outline of the imaging problem, the constraints suitable for astronomical imaging, a summary of the results from the testing of the algorithms, and the conclusions reached about LBI imaging. This is intended to provide a complete picture of the work at a general level without becoming involved in details. The remaining sections provide the details. Section B provides some introductory background material on aperture synthesis, with sections C, D, and E providing the details of the tests on the LBI imaging algorithms. It is anticipated that by organizing things in this "top-down" fashion, readers will not become too distracted by details until the outline of the problem and the results are understood. Section B is divided into three parts, with part B-1 introducing some concepts and details of LBI imaging. This is not an exhaustive review, but it is intended to provide a convenient reference for background to set the context and the jargon for the other discussions.
Part B-2 discusses the deconvolution algorithms used to separate the telescope "beam" from the image for both connected and long baseline interferometers. A significant new algorithm for performing this operation is introduced in this section. The last part of section B illustrates the effects on the image of phase errors in the aperture samples and serves to show why the phase error problem is so severe. Sections C, D and E outline the experiments on the three classes of algorithms that were tested for suitability for LBI imaging. These sections should be consulted to support the conclusions reached in section "A". Section C covers the "signal processing" algorithms ("speckle" techniques), which attempt to correct the phase errors by averaging many samples of the distorted image. The contribution in this section is the adaptation of techniques from optical imaging to the LBI problem, the testing of several versions of the techniques, and the conclusion that these methods are not sufficiently powerful for LBI imaging. Section D deals with algorithms which attempt to correct the errors by finding an extremum of a "global" parameter ("sharpness" or "entropy") of the image. The contribution in this section is the investigation of the potential of these techniques for image correction, and the demonstration, both by theory and by experiment, that they are not sufficiently powerful for LBI imaging. Finally, section E deals with the successful iterative algorithms which apply constraints to the image based on information in addition to the data. There are several significant contributions in this section.
One is the testing of the various "standard" LBI algorithms with various combinations of objects and constraints, and the conclusion that the "confinement" of the object is the dominant factor in the recovery process. Imaging is possible without any phase information if the object is suitably confined. Other contributions include the demonstration that the phase-closure algorithm is sensitive to the array design, and the development of an adaptive version of the self-calibration algorithm which permits it to find a solution when the phase errors are very large and there is no prior information about the object. Another contribution is the demonstration of an algorithm combining the confinement and the maximum entropy constraints. The remainder of this section will outline the imaging problem as faced by LBI astronomers (A-1.1/1.2), the constraints that are available to help find a solution (A-1.3), the results of the tests on a number of imaging algorithms (A-2), and a summary of the conclusions of the study (A-3).

A-1.1 The Imaging Problem

Many areas of science and engineering use images to observe and record things of interest. While frequently the images are unambiguous, in the form of a photograph, in many cases the "image" is the result of a more complex process which may not always define a unique result. In this and the following section, the problem of imaging from incomplete and corrupted data will be outlined. The measured data do not define a unique image, and the problem for the observer is to make use of additional outside information and knowledge of the errors of the instrument to create a unique image consistent with all of the available information (including the measurements).
Data collection is a sampling process, and ambiguity may enter the resulting image through undersampling or distortion due to noise, through uncontrollable effects of the instrument, or through the basic physics of nature. Observers are frequently required to provide a unique image from insufficient data. Figure A-1.1 illustrates this process, whereby the data are collected as the result of a fuzzy imaging system acting on an unknown object. The problem is to determine something about the object from the data. In simple terms this is a deconvolution problem. The data collected may be considered to be the convolution of the true object function O(x,y) with some spreading and sampling function S(x,y), plus some noise N(x,y); x and y are coordinates in the object or image planes.

    I_d(x,y) = O(x,y) * S(x,y) + N(x,y)    (A.1)

The object function O(x,y) is to be estimated from the data collected I_d(x,y). This is not an easy problem even if the spreading function S(x,y) is well known and the noise is small. The "obvious" solution of convolving by the inverse function, S(x,y)^-1, is frequently not practical for two reasons. S(x,y) is a many-to-one function, in which many input values give the same output value, and for which the inverse is undefined. Also, the sampled nature of the process causes some sample points to be weighted more than others. In the inverse function, the data points measured with the smallest weight, and which are most in error due to noise, are multiplied by large factors to form the image. Under these conditions, the image is very much affected by small amounts of noise in the data. The practical solution for problems of this type usually involves estimating (guessing) the object function O(x,y) and working out O(x,y) * S(x,y).
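Equation (A.1) can be exercised numerically. The following is a minimal 1-D sketch (the object, beam, and noise level are illustrative assumptions, not taken from the thesis): a "dirty" record is formed by convolving an object with a beam, here evaluated with the convolution theorem, and adding noise.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
x = np.arange(n)

# Illustrative 1-D object: two Gaussian features.
obj = (np.exp(-0.5 * ((x - 40) / 3.0) ** 2)
       + 0.6 * np.exp(-0.5 * ((x - 80) / 5.0) ** 2))

# Illustrative spreading function S: the "dirty beam" of a truncated
# aperture with symmetric low-frequency coverage only.
aperture = np.zeros(n)
aperture[:16] = 1.0
aperture[-15:] = 1.0
beam = np.real(np.fft.ifft(aperture))

# Equation (A.1): I_d = O * S + N, with * evaluated as a circular
# convolution via the convolution theorem.
noise = 0.001 * rng.standard_normal(n)
dirty = np.real(np.fft.ifft(np.fft.fft(obj) * np.fft.fft(beam))) + noise

print(dirty.shape)
```

The dirty record carries the beam's sidelobes on top of every feature, which is exactly what the deconvolution step must later remove.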
This is then compared with the data and, based on the differences, another guess is made at O(x,y), until a good fit to the data is achieved. Most of the differences between the algorithms to be described involve "better" ways to guess at the function O(x,y). Note, however, that the data are noisy, and any comparison between data and model must allow for the noise. It is usually in the tolerance for noise that the mathematical theorems are separated from the practical methods. This thesis is concerned with radio astronomy imaging, where the problem is similar but perhaps even more difficult. The astronomy data are the product of the Fourier transform of the object and an unknown spreading function. The product in the Fourier transform domain of course represents convolution in the image domain. In this case the data, D(u,v), are measured in the "aperture plane" with coordinates (u,v).

    D(u,v) = [ FT{ O(x,y) } S(u,v) ] + noise    (A.2)

The symbol FT denotes the Fourier transform operation, and S(u,v) denotes the unknown spreading function. The recovery of the object function O(x,y) is made more difficult by the Fourier transform relation between the data domain (u,v) and the image domain (x,y). There is also the further practical difficulty that many of the details of the spreading function S(u,v) are unknown. S(u,v) may be considered to be the combination of two functions. One is a sampling function which is determined by the placement of the antennas and the operation of the electronics; this is the telescope "point-response-function" or "beam". The other is a distorting function largely due to the natural effects of the atmosphere.
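The product relation in (A.2) is the convolution theorem at work. A small numerical sketch (hypothetical object and sampling function, not from the thesis) verifies that multiplying the object transform by a sampling function S(u) and transforming back gives the same result as convolving the image with the beam, the transform of S.

```python
import numpy as np

n = 64
rng = np.random.default_rng(2)
obj = rng.random(n)                           # stand-in object function
samp = (rng.random(n) > 0.5).astype(float)    # stand-in sampling function S(u)

# Equation (A.2): the data are the product of the object transform and S.
data = np.fft.fft(obj) * samp

# Transforming back, the product becomes a convolution of the object
# with the "beam" (the transform of S).
image = np.fft.ifft(data)
beam = np.fft.ifft(samp)

# Direct circular convolution for comparison.
direct = np.array([sum(obj[m] * beam[(k - m) % n] for m in range(n))
                   for k in range(n)])

print(np.allclose(image, direct))  # True
```

Because roughly half of the sample points are zeroed here, many different objects would produce the same data, which is the non-uniqueness that the image-domain constraints must resolve.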
In essence, the complete LBI imaging problem i s to r e c o n s t i t u t e both the object function and the d i s t o r t i o n function, given the data and the sampling pattern. The sampling function may be well defined, but i t contributes to the problem in that i t i s usually sparse, and thus i t does not define a unique image. To compound the d i f f i c u l t i e s for the radio astronomer, the noise i s usually not n e g l i g i b l e . The s i t u a t i o n i s that the data c o l l e c t e d may represent any one of a very large number of images, and the work for the observer i s to somehow determine the "true" or " c o r r e c t " image from the t o t a l possible set. In the mathematical sense t h i s i s c l e a r l y impossible, and i t may seem best to design a new experiment to get more complete data. In many p r a c t i c a l cases however, imaging i s made possible by including i n the processing a d d i t i o n a l knowledge about the object and the instrument. This a d d i t i o n a l information i s frequently of a "general" nature (the image must be "confined" or " p o s i t i v e " ) that cannot be incorporated d i r e c t l y i n an a n a l y t i c way into the processing. The solutions to these problems, therefore, are i n the form of "non-linear algorithms" performed by a d i g i t a l computer. Figure A-1.2 i l l u s t r a t e s t h i s process of combining measured data with " a - p r i o r i " knowledge to generate a unique image by means of a "magic" algorithm. The extra information can reduce the s i z e of the set of possible images to a s u i t a b l y small number. The information may come from other observations of s i m i l a r objects, from knowledge of the physics of the object, from knowledge of the operation of the instrument, or simply from the biased prejudices of the observer. This information must be combined somehow with the measured data to y i e l d an acceptable image. 
This process i s not always easy to describe, and i t i s often only defined i n the form of an algorithm for constructing the image. Often the algorithm i t s e l f i s a form of a - p r i o r i knowledge as i t may include a model of the imaging process or the noise. The algorithm may include "feedback" i n the sense that the i n i t i a l data and knowledge are used to produce an estimated image which i s then incorporated into the knowledge base and the process i s i t e r a t e d u n t i l (with luck) a stable s o l u t i o n i s reached. This i s i l l u s t r a t e d by the dotted l i n e i n f i g u r e A - l Introduction Page 6 A-1.2. In f a c t , the most successful algorithms act i n t h i s way as a feedback-loop to correct a model image u n t i l i t has the best agreement with the data and the image domain c o n s t r a i n t s . The errors between the model and the constraints are used to generate corrections to the model. The " s e l f - c a l i b r a t i o n " algorithm and the new version of the "CLEAN" algorithm, to be discussed further on, are feedback algorithms of t h i s s o r t . This imaging process can be likened to the i t e r a t i v e methods that are sometimes used to solve non-linear equations that are not e a s i l y (or cannot be) inverted. A guess i s made at the answer or root, and then the error with the guess inserted into the equation i s used to make refinements u n t i l an answer of the desired accuracy i s achieved. One of the most common examples of e f f e c t i v e a - p r i o r i information i s the constraint that the object must be p o s i t i v e . The "magic algorithm" thus works to pick out only the p o s i t i v e images from the large set of images consistent with the data. This i s the i n t e r s e c t i o n of the set of a l l possible images consistent with the data with the set of a l l possible p o s i t i v e images. 
This intersection set can be substantially smaller, and surprisingly, in some practical cases, it can be a single, satisfactory image. Some ambiguity will still be present in the form of a 180 degree rotation or a translation of the image in relation to the sky coordinates. Most of the practical techniques to be discussed in this thesis make important use of this powerful "positivity" constraint, which is illustrated in the Venn diagram of figure A-1.3. The problem, in summary, is to define a unique image given insufficient and distorted data. This imaging problem is common to many disciplines including medical imaging (tomography), astronomy (radio & optical), radar and synthetic aperture radar, molecular spectroscopy and signal processing. Workers in each of these fields have developed their own algorithms suitable for their data. Many of these schemes are quite specialized but are nonetheless effective. Many systems make use of the positivity constraint; however, each field has its own additional special data conditions which significantly affect the algorithm and make transport of algorithms between fields difficult. One of the objectives of this study was to look at some of the techniques used in other fields and test their suitability for use in LBI. It has been found that some other techniques can give partial results; however, they cannot compete with the powerful imaging techniques developed by astronomers directly for LBI.
A-1.2 The LBI Imaging Problems

Radio interferometry is a technique of forming high resolution images that uses a number of small separated antennas to simulate a large aperture. Section B-1 discusses the concepts of imaging by earth-rotation synthesis in more detail. For (connected) instruments, where the antennas are separated by a few tens or hundreds of meters and the signals can be accurately measured, this is an effective imaging system. For (unconnected) instruments with antennas separated by thousands of kilometers, referred to as long baseline interferometers (LBI), some serious practical problems arise with the calibration of the signals. This section will outline the nature of these errors. Figure A-1.4 shows a schematic diagram of a small LBI imaging system. The signals from the three separated antennas are brought together at a calibrator-correlator unit. This device applies instrumental calibrations and then forms the complex correlation of the signals from each pair of antennas in the array. These "correlation coefficients" are passed to a digital computer where more calibrations and the Fourier transform to construct the image are performed. There are numerous technical problems with this system. For this discussion, however, the problem is restricted to two major areas. The first concerns the effects of the atmosphere at the separated antenna sites, and the second concerns the sparseness of the aperture sampling due to the (relatively) small number of antennas in the array. Figure A-1.4 shows the atmosphere and ionosphere diagrammatically as "clouds" above each antenna.
Differing refractive indices in these regions act to delay the signal passing through them by an amount that is different at each site and that varies with time. The delays are equivalent to many wavelengths, and they change significantly over a time period of a few minutes [40]. These disturbances effectively delay the signal and do not attenuate it, and they manifest themselves as "phase errors" in the complex samples calculated by the correlator. The effect of these errors is to scramble the image, as is illustrated in section B-3. These phase errors are the unknown spreading function, α, in the imaging process of equation A.3. It is important to note that these phase errors enter the signals independently at each antenna. In most small, connected aperture synthesis instruments, it is possible to calibrate the phase satisfactorily by observing calibration sources. In the simplest form, an unresolved source provides a uniform phase reference signal that can be used to calibrate the instrumental errors. This technique is used for LBI instruments; however, it does not provide sufficient phase calibration to enable direct imaging. The LBI field of view is (very) small, and it seldom includes a point source suitable for calibration. The phase errors vary quite rapidly with time and direction as the atmosphere shifts. Any observation made for calibration with an independent source is only valid for a few minutes after it is made. The calibration observations thus need to be repeated frequently.
However, any time spent observing the calibrator and moving the antenna to point at the source reduces the time spent observing the primary source and reduces the signal to noise ratio for the observation. For these reasons, calibration observations cannot remove the continuous atmospheric phase errors for LBI imaging; nevertheless, they are still necessary to provide overall instrument calibration (for such things as tape recorders, receiver delays, and clock errors) for LBI processing. The number of antennas in an LBI array is limited (for purely practical reasons) to a small number, and this means that the sampling of the aperture signal leaves a number of significant gaps or holes. The sampling pattern for an 8 antenna East-West array is shown in figure A-1.5. Each of the curved tracks represents the sampling path for the signals correlated from one pair of antennas. For an 8 antenna array, there are 28 such pairs (see section B-1.4). The signals may be sampled at about one minute intervals, with the result that nearly 1000 points are collected along each arc in the 12 hours the earth requires to turn through 180 degrees. The sampling is thus very fine in the direction along the arc path, but it is sparse in the perpendicular radial direction across the arcs. This manifests itself as an instrument point response function or "beam" that has significant sidelobes which extend over a wide area of the image. These sidelobes must be removed before the image is useful to the astronomer. Ambiguity in the image arises here due to the unsampled areas of the aperture. Any number of images consistent with the measurements can be formed by adding to the data different signals from the unsampled areas of the aperture.
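The pair count quoted above follows directly from choosing two antennas out of N; a one-line sketch (the function name is ours, for illustration):

```python
# Number of distinct antenna pairs (baselines) in an N-antenna array is
# N(N-1)/2, since each unordered pair of antennas is correlated once.
def n_baselines(n_antennas):
    return n_antennas * (n_antennas - 1) // 2

print(n_baselines(8))  # 28, as quoted for the 8 antenna array of figure A-1.5
```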
This is the condition that forces the aperture sampling function, β, in equation A.3 to have no inverse. Radio astronomers can make satisfactory images from their data in spite of the sparse aperture sampling by using an algorithm known as CLEAN. This algorithm, which is described in more detail in section B-2, is a practical method of deconvolving the telescope point-response-function (or beam) from the image. The algorithm is able to perform this task by including the additional information that the object distribution should be limited to a small number of compact features. The algorithm is effective only if both the phase errors and other noise components are small, and the sampling pattern is accurately known. The standard algorithm also fails to correctly reproduce extended sources. A new form of the algorithm is introduced in section B-2 that eliminates this problem. The complete LBI imaging problem is thus to correct for the large phase errors introduced by the atmosphere and to simultaneously compensate for the sparseness of the aperture sampling.

A-1.3 The LBI Image and Data Constraints

The full LBI imaging problem is to determine the object from the data with a combination of phase errors and sparse sampling. To solve this problem, radio astronomers bring to bear a number of powerful items of prior knowledge about the objects. This extra information enables the ambiguities in the image formed from the data to be resolved. This section will outline these constraints. From the physics and other observations, the object is known to be positive and confined. This is a statement that only positive brightness can be seen, and that most object brightness is known to be clumped into points or patches of limited extent.
This "confinement" of the object is often referred to as "limited support" in other areas of image processing [50]. Between the regions of limited support, the object brightness may be considered to be zero, as it is below the noise level. These two constraints of confinement and positivity are concluded to be the most important forces in aid of the image recovery. Calibration of the LBI instrument indicates the basic noise level and therefore the minimum brightness that can be imaged. This establishes a threshold of sensitivity for the telescope, and it is not possible to see deeply into the dark areas between the sources before the noise obscures the image. Measurements by other radio telescopes can determine the total power in the field, and this limits the maximum brightness of the object. The number and distribution of sample points measured define the resolution and thus the number of observable picture elements ("pixels") in the final image. Altogether this information sets the range and quantization of the pixel intensities. The brightest pixel cannot be brighter than the total, and the sum of all pixel values cannot exceed the total. The quantization of the data samples and the precision of the arithmetic in the data processing set the quantization in the pixel values. The number of (useful) quantization levels for the data is also determined by the noise level. The total number of images consistent with the data is thus restricted to a (very) large, but finite number determined by the total number of pixels, the number of quantization levels, and the constraint on the total brightness.
This is the reason that the set of images consistent with the data, shown in the diagram of figure A-1.3, is not infinite in extent. Many of the preceding image constraints are available in other imaging fields; however, for radio LBI an important additional constraint exists concerning the way the phase errors enter the data. As is illustrated in figure A-1.4, the phase errors enter the signal path to each individual antenna. It is the correlation, or product, of the signals from pairs of antennas, however, which is used to form the image. If there are N antennas in the array, then each antenna error component appears in combination with another error at N-1 places in the synthetic aperture data. While this does not solve the problem, it does reduce it by reducing the number of unknowns in the spreading function from N(N-1)/2 for the whole aperture to the much smaller number of N-1. Clearly, this reduction in the number of unknowns becomes dramatically more important as the number of antennas in the array is increased; however, it can still be useful with arrays having 8 antennas. This constraint is often referred to as "closure-phase", and it is explained in more detail in section B-4. Note that these LBI imaging constraints apply to two domains. The positivity and confinement are in the image domain, while the phase errors are in the aperture or data domain, and a Fourier transform and a CLEANing operation (beam de-convolution) are required to go from one domain to the other. A great deal of the "work" in the LBI imaging algorithms involves the repeated transformation of images and data between these two domains.
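The antenna-based nature of the errors described above can be checked numerically. In this small sketch (values and variable names are ours, purely for illustration), each measured baseline phase picks up the difference of the two antenna errors, so the sum of phases around a closed antenna triangle is unchanged by the errors:

```python
import numpy as np

rng = np.random.default_rng(0)

# True visibility phases for the three baselines of an antenna triangle
# (antennas 1-2, 2-3, 3-1).
g12, g23, g31 = rng.uniform(-np.pi, np.pi, 3)

# Independent atmospheric phase errors at each of the three antennas.
e1, e2, e3 = rng.uniform(-np.pi, np.pi, 3)

# Each measured baseline phase acquires the difference of its antenna errors.
m12 = g12 + (e1 - e2)
m23 = g23 + (e2 - e3)
m31 = g31 + (e3 - e1)

# Around the closed triangle the antenna errors cancel: the "closure phase"
# of the measured data equals that of the true visibilities.
closure_measured = m12 + m23 + m31
closure_true = g12 + g23 + g31
print(np.isclose(closure_measured, closure_true))  # True
```

This cancellation is the closure-phase constraint of section B-4: the closure sums are observable quantities untouched by the atmosphere.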
The basic problem, in summary, is to determine the object function given data that has been disturbed by large phase errors from the atmosphere and that has been only sparsely sampled. This data does not define a unique image by itself, and additional information about the nature of the phase errors and the confinement and positivity of the object must (somehow) be incorporated into the imaging process to select a unique image. The next section will outline the algorithms for combining this data with other information which were tested for suitability for LBI imaging.

FIGURE A-1.1 The Imaging Problem
FIGURE A-1.2 Imaging with A-Priori Information
FIGURE A-1.3 Venn diagram showing intersection of data & positive image sets
FIGURE A-1.4 Schematic Diagram of Long Baseline Interferometer
FIGURE A-1.5 Aperture sampling tracks for 8 antenna East-West array

A-2 Imaging Algorithms

In the development of algorithms for LBI imaging, there are essentially two approaches. One involves what can be called "signal processing" and the other involves "image processing". In the signal processing systems, an attempt is made to correct the data so that it yields an acceptable image. Typically, this involves processes such as filtering, in which the extra information used to guide the algorithm is in the realm of statistics and noise models.
This processing works exclusively in the "data" domain, and constraints in the image domain are not applied. The image processing systems accept the data as measured, and attempt to alter the process so that the result conforms to the known image constraints. As might be expected, the most successful algorithms perform operations in both the data and the image domains. This section will review the classes of algorithms and the experiments performed to test them. The next section, A-2.1, will review a (not very successful) algorithm for LBI imaging using signal processing methods adapted from optical imaging. The following section, A-2.2, on image processing has two main sections. The first, A-2.2a, describes a possible, but limited, algorithm which attempts to select the image by maximizing a global image parameter. The second section, A-2.2b, reviews the major, successful, LBI imaging algorithms which iteratively apply constraints in both the image and the data domains. The final section, A-3, will outline the conclusions for LBI imaging drawn from the experimental results.

A-2.1 Signal Processing

Signal processing algorithms attempt to obtain satisfactory images by filtering or otherwise correcting the data to eliminate the effects of the atmospherically induced phase errors. This is not an easy thing to do for several reasons. The phase errors are large, and they have a spectrum which is similar to the data. This makes the errors difficult to separate by standard filtering techniques. The further difficulty is that the signals are in fact phase measurements, or angles, which are only known modulo 2π. Angles are slippery things to manipulate because of this periodicity.
Techniques such as cepstrum processing or homomorphic filtering [49], which are successful at echo suppression in radar signals, are not applicable to LBI signals due to the size of the errors and the difficulty of unwrapping phases in two dimensions. In recent years, optical astronomers have been successful in improving the resolution of their telescopes by signal processing to remove the effects of the atmosphere. While there are a number of similar techniques under development, they can be grouped together under the heading "speckle processes", and they are discussed in more detail in section C. These processes are not directly applicable to LBI, as an optical telescope collects data directly in the form of images. However, the speckle techniques are aimed at correcting for the effects of the atmosphere, and it seemed worthwhile to examine their applicability to LBI. The speckle process requires the collection of a large number of short-exposure atmospherically distorted images. Each of the distorted images will contain the same object information but a different error pattern. These are then Fourier transformed to form a set of simulated aperture signals. As in LBI, the effect of the atmosphere is represented by phase errors in these aperture signals. The speckle technique then "averages" the multiple aperture signals to suppress the effects of the errors. The Fourier transform of this average yields the corrected image. Some limited practical results have been reported using this technique. The "averaging" must be done using special algorithms to allow for the difficulties in averaging angles. The Knox-Thompson [9] method and its derivatives seem to be the most successful.
The basic method, then, is to collect several (many) sets of data and reduce the effects of errors and noise by averaging. The speckle process is not directly applicable to LBI, as there is only a single set of radio data. If the radio astronomer did collect many sets of data, then the speckle techniques could be used for imaging. Unfortunately, the number of sets needed is large (about 100), and this would require many nights for observations - too many to be practical. The LBI set of data does contain, however, considerable redundancy on the shorter baselines. This is because the sampling rate is constant at all antennas, and the short baselines only occupy very small tracks near the center of the aperture plane (see for example figure A-1.5). It seemed conceivable that perhaps advantage could be taken of this oversampling to provide several sets of data to be used in a speckle-type process. Experiments described in C-5 were tried to see if this was practical. These experiments expanded the simulated aperture records to make them redundant. The records were then separated into a number of sets which were "averaged" using the Knox-Thompson algorithm or a variation thereof. The experiments demonstrated some improvement in the image; however, it was not enough to be generally useful or to be worth further pursuit. The net conclusion is that filtering methods are not practical on their own for LBI because the noise is well mixed with the data and there are not enough suitable redundant measurements to be useful for statistical techniques. Also, these filtering techniques ignore the very important constraints available in the image domain.
The limited amount of extra information provided by the statistical techniques is insufficient to make up for the loss of phase data, and their performance cannot compete with the powerful image domain algorithms.

A-2.2 Image Processing

It is through the techniques of applying a-priori information about both the object and the instrument that LBI imaging has been most successful. By restricting the type of object being imaged, and by carefully incorporating knowledge of the way the phase errors enter the data, it is possible in practice to obtain satisfactory images. There are several popular algorithms in use, and each corresponds to applying different constraints in the data and image domains. There are four techniques in this class. These are "image-sharpening", "positivity", "phase-closure" and "self-calibration". The "image-sharpening" algorithm applies only the closure-phase constraint. The "positivity" algorithm only constrains the image to be positive, and the "phase-closure" and "self-calibration" algorithms force both a positive image and agreement with the closure sums (but in different ways). This section will outline the various techniques and the results of the experiments tried to see how well the algorithms work. It is of interest to learn the relative importance of the constraints placed on the data and the image.

A-2.2a Image Sharpening

As is illustrated in section B-3, the effect of the antenna phase errors is to spread the compact object brightness into a diffuse scrambled image.
Given the a-priori information that the object is (probably) not diffuse and that the phase errors enter the data at each antenna, an alternative imaging scheme was suggested by work in adaptive optics [4]. This new method for LBI is described in detail in section E. The reconstruction procedure incorporates a phase correction factor for each antenna which is adjusted for the minimum spreading of the image. The spreading of the image is measured by a "sharpness" parameter which summarizes the image in a single number. For the experiments, the sharpness parameter, S_s, was the summation of all image pixel values cubed:

S_s = Σ I_n³ .   (A.4)

The entropy parameter, S_e, which is the sum of the logarithm of the pixel values, was also used in some experiments. This method takes note of the antenna based nature of the phase errors, and the resulting image is compatible with the "closure-phase" constraint described in section B-4. Quite a number of experiments were conducted to test this new process. If the phase errors were small (in the range ±π/4), then a substantial improvement in the image could be achieved. For large errors (±2π), the method produced the autocorrelation (AC), or zero-phase, image (which is not useful). For a range of errors of intermediate size, an improved image was achieved only for objects with a dominant, bright feature. In no case was a fully satisfactory image recovered. The difficulties with the process relate to two areas. One is the presence of multiple extrema in the sharpness parameter as a function of the antenna phase correction, and the other is a sensitivity of the parameter to the beam sidelobes in the image. The sharpness parameter showed several local maxima, one of which was for the true image, but the global maximum was for the AC image.
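The behaviour of the sharpness parameter of equation A.4 is easy to demonstrate: for a fixed total flux, a compact brightness distribution scores far higher than the same flux spread over many pixels. (This sketch, with arbitrary toy values, is ours, not from the thesis experiments.)

```python
import numpy as np

def sharpness(image):
    """Sharpness parameter S_s = sum of pixel values cubed (equation A.4)."""
    return np.sum(image ** 3)

# A compact source and the same total flux spread over many pixels:
compact = np.zeros(100)
compact[50] = 10.0                  # all flux in one pixel
diffuse = np.full(100, 0.1)         # same total flux (10.0), spread out

print(sharpness(compact) > sharpness(diffuse))  # True: spreading lowers S_s
```

Maximizing S_s over the antenna phase corrections therefore drives the image toward compactness, which is exactly the a-priori assumption of the method.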
For data with large phase errors, it is not possible to find the local maximum at the correct image without stumbling on the global maximum for the AC image. This problem is shared by other global image parameter algorithms such as the maximum entropy method (MEM) [42][43]. The global image parameter simply does not "study" the image in enough detail to be able to distinguish the correct image. The image can only be recovered with the addition of other information about the object. The sharpness parameter is also sensitive to the beam sidelobes in the image. The parameter Σ I_n³ was used in the experiments and, for this function, negative pixel values reduce the summation. This has the effect of discriminating against the negative regions in the image due to the sidelobes, and the result is to compact the sidelobes somewhat into the image sources. If the aperture were fully sampled, then the beam sidelobes would lie outside the image field, the image would be only positive, and the sharpening could work correctly (as it does for adaptive optics). A positive image could be formed for LBI data by means of the CLEAN algorithm; however, this is not practical for two reasons. In the first place, use of the CLEAN algorithm is difficult because the beam pattern is imperfectly known due to the phase errors, and secondly, CLEANing is a very slow process and its use in an adaptive system would be intolerably slow. The conclusion is that global parameters can be of some use for imaging in cases where the phase errors are small, the data is not too far away from the correct local maximum, and the effects of the beam sidelobes are small. The technique is not useful, however, for large errors where confusion exists among the multiple local maxima.
The sharpening or Maximum Entropy Methods are therefore not, by themselves, a solution to the LBI imaging problem. In any case, equivalent results can be achieved, with a comparable amount of calculation, by the iterative methods described in the next section. Note that this method does not (by design) force positivity on the image, but it does force the closure-phase constraint. The fact that successful imaging was not achieved with only this constraint applied indicates that closure-phase is, by itself, not sufficient for successful imaging.

A-2.2b Iterative Constraint Algorithms

The standard practical LBI imaging algorithms use an iterative procedure to form the image. An initial model image (or first guess) is adjusted iteratively until it matches the known data and is consistent with additional constraints on an acceptable image. These techniques are useful for many kinds of deconvolution problems. Because these algorithms are so successful for LBI imaging, this section will outline them in more detail than was presented in the preceding sections. It is helpful to consider the concept of imaging as a deconvolution process. The data collected, S(u,v), can be considered to be composed of three terms: the object visibilities γ(u,v), the array sampling pattern β(u,v), and the atmospheric distortion α(u,v), where (u,v) are the coordinates in the data domain (aperture plane). In general, these are complex Hermitian functions, and the (distorted) image I is the Fourier transform of their product, where (x,y) are coordinates in the image domain and FT denotes the Fourier transform operation. For reference, the "correct" image I_c is defined as the Fourier transform of the object visibilities.
The product in the Fourier transform domain represents convolution in the image domain, so that the distorted image is effectively the convolution of the object with the beam pattern and the distorting effect of the atmosphere:

I(x,y) = FT{ S(u,v) } = FT{ γ(u,v) β(u,v) α(u,v) }   (A.5)

I_c(x,y) = FT{ γ(u,v) }   (A.6)

The complex functions can be expressed in polar form with magnitude and phase components:

I(x,y) = FT{ G(u,v)exp(ig(u,v)) B(u,v)exp(ib(u,v)) A(u,v)exp(ia(u,v)) }   (A.7)

The object function γ is in general complex:

γ(u,v) = G(u,v)exp(ig(u,v))   (A.8)

The array function β is effectively a sampling pattern such that:

β(u,v) = B(u,v)exp(ib(u,v)) with b(u,v) = 0 and B(u,v) = {0 or 1}   (A.9)

The distorting function α is primarily a phase function such that:

α(u,v) = A(u,v)exp(ia(u,v)) with a(u,v) = real function and A(u,v) = 1   (A.10)

Thus we have:

I(x,y) = FT{ G(u,v) B(u,v) exp( i[g(u,v)+a(u,v)] ) }   (A.11)

The measured data are thus considered to consist of three parts: the object correlation coefficient amplitudes, the known array sampling pattern, and the measured composite phase g + a. Note that if the phase components of the data are ignored, then a symmetric image results which is referred to as the "zero-phase" image:

I_zp(x,y) = FT{ G(u,v) exp(0) } = I_c(x,y) * FT{ exp(-ig(u,v)) }   (A.12)

where * denotes the convolution operation. This image has the same symmetry as the autocorrelation image. The correct image could be obtained from the zero-phase image if the phase function g(u,v) could be determined. There are, of course, an arbitrarily large number of functions which meet this requirement, and additional constraints must be introduced to select a unique image. For radio astronomy, these constraints have been discussed in section A-1.3.
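The symmetry of the zero-phase image can be demonstrated numerically in one dimension (a toy sketch with arbitrary values; the thesis works with two-dimensional fields): discarding the measured phases and transforming the amplitudes alone yields an image with autocorrelation symmetry, I(x) = I(-x).

```python
import numpy as np

n = 64
obj = np.zeros(n)
obj[5] = 1.0
obj[20] = 3.0                            # an asymmetric object

vis = np.fft.fft(obj)                    # simulated "aperture" data
zero_phase = np.fft.ifft(np.abs(vis)).real  # phases set to zero, amplitudes kept

# The zero-phase image is symmetric about the origin: I(x) = I(-x)
# (with indices taken modulo n), unlike the original object.
symmetric = np.allclose(zero_phase, np.roll(zero_phase[::-1], 1))
print(symmetric)  # True
```

This is why amplitude-only data cannot distinguish an object from its rotated twin, and why the phase function g(u,v) must be recovered before the correct image can be formed.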
The phase function g(u,v) must be selected to ensure that the image meets these constraints. Fienup [50] and Hayes [51] have determined that the confinement and positivity constraints are sufficient to determine a unique image from the visibility amplitudes, G(u,v), alone for an object with sufficiently disconnected support. The LBI imaging problem is thus to find a suitable phase function g(u,v) for the deconvolution problem. Schafer, Mersereau and Richards [44] discuss the class of iterative algorithms for solving similar problems. If I_i is the estimate of the image at the i-th iteration, then the process (iteration):

I_(i+1) = Q[ I_i ]   (A.13)

will converge to the correct image if the operator Q includes the constraints for an acceptable image and is non-expansive. The operator Q must incorporate the measurements (data), the a-priori constraints (positivity), and the effects of the instrument (beam pattern). This requires operations in both the data and the image domains. If X is the constraint operator in the data domain, and P is the constraint operator in the image domain, then:

Q[ I ] = P[ FT{ X[ FT⁻¹{ I } ] } ]   (A.14)

where FT denotes the forward Fourier transform operation, and FT⁻¹ denotes the inverse Fourier transform operation. The LBI imaging algorithms of "Image Constraint", "Phase Closure", and "Self-Calibration" are moulded after this process with differing designs of the operators P and X. A diagram of this generalized imaging algorithm with operations in both the data and the image domains is shown in figure A-2.1. The image recovery is complicated by the problem that the constraints of positivity and confinement enforced by the P operator in (A.14) cannot be directly applied to the estimated image.
The estimated image, being the transform of the hybrid v i s i b i l i t i e s , i s confused by the presence of the synthetic beam. This spreads out the image and forms negative areas. These negative a r t i f a c t s must be removed before the p o s i t i v i t y c o n s t r a i n t can be applied. Thus the P operator must be a combination of the p o s i t i v i t y c onstraint and a deconvolution of the beam : ^ • l = Q [ I i ] (A.13) Q[ I ] = P[ FT{ X[ F T " 1 { I } ] } ] (A.14) -1 A-2 Imaging Algorithms Page 23 PC I ] = p( I « Z ) (A.15) where p i s the p o s i t i v i t y operator : p(x) = x for x > 0 i n s i d e region of support and p(x) = 0 for x < 0 or outside region of support, (A.16) and Z i s the "inverse" of the beam pattern. Note however that Z i s undefined i n t h i s case, and i n practice the deconvolution must be done with an i t e r a t i v e algorithm such as CLEAN. I t i s because the operators P and X which enforce the extra c o n s t r a i n t s i n equation (A.14) are non-linear that an algorithm must be used f o r image reconstruction. These non-linear algorithms are not amenable to " t h e o r e t i c a l " d e s c r i p t i o n and one way to tes t t h e i r effectiveness i s by experiment. For t h i s study, a "generic" image processing system was designed which was tested under various conditions of constraints and data. Using the generalized model of the process shown i n figu r e A - 2 . 1 , the experiments consisted of using various algorithms for each operation, and working with d i f f e r e n t data sets to see how the image was recovered. The various algorithms tested were programmed following the published de s c r i p t i o n s [ 3 1 ] [ 2 0 ] [ 3 0 ] , but of course the implementation may d i f f e r from the o r i g i n a l designer's. For the maximum entropy algorithm, a program supplied by J . S k i l l i n g and S.Gull [ 4 3 ] was used. These experiments are described i n more d e t a i l i n section E-5. 
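The operator composition of equation (A.14) can be written down almost directly. The following is a minimal sketch of the generic loop of figure A-2.1 (not the thesis software): the caller supplies the data-domain operator X and the image-domain operator P, and all function and parameter names are illustrative.

```python
import numpy as np

def recover(image0, data_constraint, image_constraint, n_iter=50):
    """Generic iterative recovery: I_{i+1} = P[ FT{ X[ FT^{-1}{ I_i } ] } ].

    data_constraint (the X operator) acts in the visibility domain, e.g.
    replacing model amplitudes with measured ones; image_constraint (the
    P operator) acts in the image domain, e.g. positivity plus support.
    """
    image = image0
    for _ in range(n_iter):
        vis = np.fft.ifft2(image)          # FT^{-1}: image -> data domain
        vis = data_constraint(vis)         # X: enforce the measured data
        image = np.fft.fft2(vis)           # FT: data -> image domain
        image = image_constraint(image)    # P: positivity, confinement
    return image
```

For example, an "image constraint" style X would replace the model amplitudes with the measured G(u,v) while keeping the model phases, and P would clip negative pixels to zero.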
A-2.2c Experiments using Image Constraints

The experimental results are illustrated here with the object shown in figure A-2.2, which is one of a number of objects tested. (Others are shown as examples in section E.) This object was formed by selecting every second data point from a portion of an image made with the DRAO synthesis telescope. The object field contains three bright sources plus extended low-level areas, including the "streamer" from the upper-left source. The brightest point, in the lower center, has a value of about 12000 in arbitrary units. The streamer is in the range of 200-300, and the lowest level is about 30. Note that this field does contain a "bright feature" but that there are also extensive areas of low-level brightness. This illustration actually shows a reconstruction by the CLEAN algorithm from the undistorted, sampled aperture. This illustration may be compared directly with the other reconstructions by CLEAN.

The object field was Fourier transformed and sampled to provide a set of simulated LBI visibility records to which antenna phase errors (uniformly distributed in the range ±2π) were added. Figure A-2.3 shows the distorted "image" made by Fourier transforming this simulated data.

The discussion of the experimental results will begin with what is perhaps the simplest algorithm, which is called the "image constraint algorithm". This technique was first applied to LBI astronomy by Fort & Yee [31]. A very similar process is used by Fienup [32] for the correction of optical images. Some of the earliest work on algorithms of this type was done by Gerchberg and Saxton [33] for electron-microscopy.
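The simulated data described above (ideal visibilities corrupted by antenna-based phase errors uniform in ±2π) can be generated with a few lines. This sketch is illustrative rather than the thesis code; the function and argument names are assumptions. It also exhibits the two key properties of antenna-based errors: visibility amplitudes are untouched, and closure phases survive.

```python
import numpy as np

rng = np.random.default_rng(1)

def corrupt(vis, ant_j, ant_k, n_ant):
    """Add antenna-based phase errors, uniform in [-2*pi, 2*pi], to ideal
    visibilities.  ant_j / ant_k give the antenna pair for each sample;
    the baseline (j,k) measures the phase difference phi[j] - phi[k]."""
    phi = rng.uniform(-2 * np.pi, 2 * np.pi, n_ant)   # one error per antenna
    return vis * np.exp(1j * (phi[ant_j] - phi[ant_k]))
```

Because the errors enter as differences, any closed loop of baselines, e.g. (0,1) + (1,2) - (0,2), accumulates zero total error phase.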
In this case the data constraint portion of the operation is the simple combining of the measured data amplitudes G(u,v) with the estimated (model) phases m(u,v):

H(u,v) = G(u,v) exp(im(u,v)) (A.17)

These "hybrid" visibilities are then transformed to the image domain to form a "hybrid" image. In the image domain this represents the convolution (or correlation) between the zero-phase image and the "phase only" model image:

Ih(x,y) = Izp(x,y) * FT{ exp(im(u,v)) } (A.18)

Ih(x,y) = Ic(x,y) * FT{ exp(-ig(u,v)) } * FT{ exp(im(u,v)) } (A.19)

where Izp has been substituted from equation (A.12). This hybrid image is then CLEANed to apply the positivity and confinement constraints and to deconvolve the beam. The resulting image, formed from the components detected by CLEAN, is then transformed to become a new set of model phases for the next iteration. Note that this is an extension of the algorithm as described by Fort & Yee, by the inclusion of the CLEAN algorithm to separate the components and to deconvolve the beam. Without this addition the algorithm would not operate, and this indicates the importance of properly applying the image domain constraints. Note from equation (A.19) that if the model phases m(u,v) are correct, then the hybrid image will also be correct. If the model phases are not exact, however, then the effect of the convolution will be to emphasize the correct features of the model and suppress the incorrect ones, so that the hybrid image will move towards the correct image. Unfortunately, however, if the model is symmetric and thus its phase function is zero, then the hybrid image will remain as the zero-phase image and the algorithm will not converge correctly.
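Equations (A.17)-(A.18) amount to only a few lines on a regular grid. The sketch below assumes full aperture coverage for brevity (real LBI data are sparsely sampled) and is illustrative rather than the thesis implementation; if the model phases are exactly right, the hybrid image reproduces the object, as equation (A.19) implies.

```python
import numpy as np

def hybrid_image(G, model_image):
    """Form the hybrid image: measured amplitudes G(u,v) combined with
    the phases of the current model's visibilities (equation A.17),
    then transformed back to the image domain (equation A.18)."""
    model_vis = np.fft.ifft2(model_image)
    m = np.angle(model_vis)            # model phases m(u,v)
    H = G * np.exp(1j * m)             # hybrid visibilities
    return np.fft.fft2(H).real         # hybrid image
```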
This algorithm is a simple process which forces only positivity and confinement upon the image; any measured phase information is entirely ignored. In spite of its simplicity, however, the algorithm was found to be surprisingly successful. Note that a considerable portion of the "magic" in this process resides in the CLEAN algorithm, which is used in the dual role of enforcing positivity and confinement as well as deconvolving the beam. During the initial iterations, CLEAN also acts as a "filter" to pick out from the hybrid image the brightest features, which are most likely to be real components of the correct image. Figure A-2.4 shows the image recovered by the image constraint algorithm from the distorted data of figure A-2.3. Although the recovered image is not perfect, it does reproduce correctly all the major features. Convergence is asymptotic and is quite slow for the final iterations. The accuracy of the reconstruction is essentially limited by the number of iterations performed and by the ability of CLEAN to reconstruct the image from the noisy data and the sparse aperture sampling. Note also that, as all phase information is ignored, the recovered image is translated from its "true" position. The illustrated recovered image in figure A-2.4 has been re-aligned to allow comparison with the object of figure A-2.2.

It is important to understand the roles played by the CLEAN algorithm, the thresholding (positivity) operation, and the presence of bright features in the object. Theory provided by Fienup & Crimmins [50] establishes that the object distribution will be recoverable (without phase information) if the object is confined such that it does not (significantly) overlap itself in the autocorrelation image.
This means that the object distribution must consist of isolated features which remain (largely) isolated when autocorrelated. This constraint has the effect of limiting the number of non-zero pixels in the image to less than 1/2 the total; a development of this "confinement limit" is provided in section E-1. Although the theory says the object is unique and recoverable, it does not say how to find it. The iterative algorithm of figure A-2.1 is one practical way to find the image.

The role of CLEAN in this algorithm is two-fold. One important function is to (correctly) deconvolve the beam so that the image domain constraints can be applied. It can only do this properly, however, when the phase errors are small. A second function, which is very important during the early iterations, is to force the separation of the hybrid image into a number of isolated components. This ensures that the model has the "limited support" required by the theory. Similarly, a threshold process, in which all image elements below a "threshold" level are set to zero, is applied to supplement the CLEANing and enforce strict positivity. This also forces a limited support constraint on the model image. The threshold level is started high and is gradually reduced as the iterations proceed. This forces the hybrid image to initially have a small number of well separated features, and then allows lower level details to be filled in once the major features are defined. For all this to work, there must be some prominent features in the object for the CLEANing to separate out.
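The threshold operation described above is simple enough to state as code. This is a minimal sketch with illustrative names; the schedule of levels is an assumption, not the thesis's exact settings.

```python
import numpy as np

def apply_threshold(image, level):
    """Enforce strict positivity and limited support: keep only pixels
    above `level` times the current peak, and zero everything else
    (including negative artifacts)."""
    cut = level * image.max()
    return np.where(image > cut, image, 0.0)
```

The level might, for example, start near 0.5 of the peak and decay toward zero over the iterations (say `level = 0.5 * 0.9**i`), so that early models contain only a few well separated features.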
These features must be bright enough and separated sufficiently so that the zero-phase image, when an (arbitrary) threshold level is used, has only these limited support features in an un-ambiguous pattern. If the object has such features, then the iterative algorithm WILL recover the correct image (eventually). If these features are not present, then the iterative algorithm will NOT work. However, the iterative algorithm is limited by two factors. The threshold level cannot be reduced beyond the point where the features no longer remain separated, and the development of the image becomes very slow as it approaches the correct result. The closer it is, the less "error signal" is available for the correction of the image. The accuracy of the resulting image, therefore, depends on the details of the object and on how long the computer is allowed to perform calculations. The methods of "closure-phase", to be discussed next, improve on this performance by using the extra measured phase data to help the recovery in the "end-game". These algorithms allow a more detailed (accurate) result with a more complex (less separated) object, as they are able to provide a larger correction signal in the last stages of image development.

To extend the range of images recoverable, the standard procedure is to include the additional constraint in the data domain that the phase errors originate at each antenna and only the differences are measured by the correlation coefficients. Although the measured phase coefficients are known to be wrong, they still contain a hidden germ of useful information.
The algorithms known as Phase-Closure [20] and Self-Calibration [30] make use of this additional information. In this case the data constraint portion of the operator X involves the combination of the measured (data) and model phases in such a way that the hybrid visibilities maintain the same phase closure sums [17] as the data. The set of equations followed for the phase-closure algorithm is outlined in section B-4, figure B-4.2. The Self-Calibration equations are described in section E-4. Although the algorithms are quite similar, the Self-Cal was found to be superior. This was principally because the phase-closure algorithm requires the selection of a "reference antenna", and the best choice was found to depend on the array configuration and the object. Although there appeared to be a relation between the effectiveness of the selected reference antenna and the aperture coverage of the baseline set involving that antenna, it was not possible to find an algorithm for selecting the optimum reference antenna, other than an exhaustive search at each iteration. The Self-Calibration algorithm can be designed to work without a reference antenna, and this gave good results in general use. Figure A-2.5 shows the image recovered from the distorted data of figure A-2.3 using the Self-Calibration/CLEAN process. In this case the reconstruction is quite accurate and has more detail than the result from the image constraint algorithm. The convergence is quite fast. This example was generated in 40 iterations, and the accuracy of the image is limited by the ability of CLEAN to deconvolve the beam and restore details to the image.
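The phase part of Self-Calibration can be illustrated with a small fixed-point solver: find one phase per antenna such that the model visibilities, rotated by the antenna phase differences, best match the data. This is a common formulation of the idea, not necessarily the exact equations of section E-4, and all names below are illustrative.

```python
import numpy as np

def selfcal_phases(V, M, n_iter=100):
    """Phase-only self-calibration sketch.  V[j,k] are measured and
    M[j,k] model visibilities for the baseline between antennas j and k,
    stored as Hermitian matrices (V[k,j] = conj(V[j,k])).  We seek
    phases phi with V[j,k] ~ exp(i(phi[j]-phi[k])) * M[j,k]."""
    n = V.shape[0]
    g = np.ones(n, dtype=complex)                 # unit-amplitude gains exp(i*phi)
    for _ in range(n_iter):
        for j in range(n):
            # For consistent data, V[j,k] conj(M[j,k]) g[k] = g[j] |M[j,k]|^2,
            # so summing over k isolates the phase of antenna j.
            z = sum(V[j, k] * np.conj(M[j, k]) * g[k]
                    for k in range(n) if k != j)
            g[j] = np.exp(1j * np.angle(z))       # keep phase only
    return np.angle(g)
```

The solution is only defined up to one arbitrary global phase, which is why these algorithms can recover the image but not its absolute position.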
The standard data reduction schemes use the "CLEAN" algorithm to deconvolve the telescope beam from the hybrid image in order that the image constraints, such as positivity, can be applied. As the beam sidelobes are, in effect, created by assuming the unsampled areas of the aperture are zero, the deconvolution is really a process of "filling in" these unsampled areas. The CLEAN algorithm does this interpolation based on the assumption that the image is made up of a summation of point sources. The features of forming a simple component image and good behaviour with noisy data make CLEAN an effective choice in the process loop for LBI image recovery. The CLEAN algorithm also plays an important role in the early iterations of the phase recovery by separating the bright features from the confusion in the hybrid image. The CLEAN algorithm and an important new improvement in its operation are discussed in more detail in section B-2.

The Maximum Entropy Method (MEM) [43] is another algorithm for "solving" the missing data (or beam deconvolution) problem. In this case, the effect is to produce the smoothest positive map that fits (is consistent with) the data to within an assumed noise level. (This is almost exactly the opposite assumption to that of CLEAN.) When the MEM algorithm was simply substituted for CLEAN in the LBI reconstruction process, it was usually found that the zero-phase image would be reproduced when starting from an arbitrary initial model. Note that Skilling [42] has found that if the process is started with a model near the correct image, then good reconstruction can be achieved. Without prior knowledge to provide a good model, however, the MEM is unable to break the symmetry of the zero-phase image.
The entropy parameter (like the sharpness of section A-2.2a) has several local maxima as a function of the antenna phase corrections. The largest of these maxima is for the zero-phase image. When starting with a poor model, the algorithm, in maximizing the entropy parameter, will most often stumble on the maximum at zero phases. It was possible in many cases, however, to recover images with the MEM algorithm in combination with the Self-Cal phase algorithm, if the model image generated by the MEM algorithm was thresholded to leave only its brightest features. This forces the limited support criterion onto the hybrid image and allows the major features to develop. As the iterations proceed, and the bright features become correct, the threshold level is gradually reduced to allow the lower level details to be filled in. By this technique, use is being made of the positivity and confinement constraints enforced by the MEM algorithm, but the "entropy" feature of the reconstructed image (which requires smooth extended features) is being discarded until the final iterations, when the threshold level is near zero. It is the positivity and confinement which drive the initial stages of the phase recovery. Again, as with CLEAN, the final reconstruction accuracy is limited by the accuracy with which the MEM algorithm is able to deconvolve the beam. Figure A-2.6 shows the image recovered from the data of figure A-2.3 by a combination of Self-Cal and MEM. The reconstruction of figure A-2.6 should be compared with figure A-2.7, which shows the image reconstructed by the MEM from data with no phase errors. This image is smoother than the corresponding CLEAN reconstruction because of the parameter settings for the MEM program.
Before concluding the discussion of general LBI imaging, it is perhaps worthwhile to mention one special (easy) case. If the object consists only of a small number of separated, unresolved (point) sources, then it can often be imaged almost directly from the visibility amplitudes alone. This process is described in more detail by Baldwin & Warner [2]. In essence, if the object contains only a small number of isolated point sources E, then the zero-phase image will contain E(E-1)+1 sources. A "simple" process of inspection can then be used to select, from the E(E-1)+1 "apparent" sources, the E "real" sources that when autocorrelated will match the autocorrelation image. It is important to realize that it is the fact that the image has two dimensions that allows the zero-phase image symmetry to be broken. One-dimensional problems of this sort are ambiguous. This method can be automated by a computer. However, the presence of noise peaks in real data can confuse the process. This is a degenerate example of (very) limited support that can be solved by inspection. For more complex objects, and to allow for noise, the iterative algorithms perform the same task in a different fashion.

A-2.2d A New Algorithm For Deconvolving the Beam

During the course of the experiments with the iterative algorithms it was noted that the CLEAN algorithm plays a central role in enforcing the confinement constraint on the image as it develops. The "standard" CLEAN algorithm [23,24], however, fails to operate properly with sources that are extended. It was found that the failure of the CLEAN operation was limiting the size of the objects that could be imaged with the LBI phase recovery techniques.
The failure of CLEAN is due to the implicit assumption in the algorithm that the image is composed of a summation of separated beam patterns. When the source is extended, the beam patterns overlap, and the algorithm is not always able to sort out the features, because the beam patterns are removed one at a time. This can lead to the formation of striped patterns in the deconvolved image. To solve this problem, the standard algorithm was modified to allow the removal of components in groups. The group of components is chosen to match the estimated area of the source in the object. This is done by the relatively simple method of discarding all the image points below a threshold set at a fraction of the peak level. Because the image components are detected and removed in groups, the stripe patterns are avoided and the new algorithm works much faster than the old for large images with significantly extended features. As the CLEAN technique is widely used by both long baseline and connected interferometers for removing the effects of the beam from the image, this new algorithm will be of wide interest. A more complete description of the standard and the new CLEAN algorithms is given in section B-2.

A-2.3 Summary of Experimental Results

Two basic types of algorithm have been examined. These are signal processing and image processing techniques. The signal processing techniques, such as speckle, are judged ineffective, as there is insufficient data to apply useful statistics. The techniques cannot compete with the more powerful image constraint algorithms. Global image parameter maximizing algorithms are also judged to be ineffective. The experiments using the global image parameters, such as sharpness and MEM, showed two important features of the imaging process.
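The group-removal CLEAN of section A-2.2d can be sketched as follows: instead of subtracting one beam-times-peak component per cycle, every residual pixel above a fraction of the current peak is taken as a group, and the beam response of the whole group is removed in one FFT-based convolution. This is a sketch of the idea only; parameter names and defaults are illustrative, and section B-2 gives the actual algorithm.

```python
import numpy as np

def clean_groups(dirty, beam, gain=0.2, trim=0.75, n_iter=100, stop=1e-3):
    """Group-component CLEAN sketch.  `beam` is the dirty beam, the same
    shape as `dirty`, with its peak at the array centre.  Each cycle,
    all residual pixels above trim*peak form one group of components,
    scaled by the loop gain, and their combined beam response is
    subtracted at once (avoiding the stripe instability of one-at-a-time
    subtraction on extended sources)."""
    residual = dirty.astype(float).copy()
    model = np.zeros_like(residual)
    beam_ft = np.fft.fft2(np.fft.ifftshift(beam))     # beam moved to the origin
    peak0 = residual.max()
    for _ in range(n_iter):
        peak = residual.max()
        if peak < stop * peak0:
            break                                     # residual is down to the stop level
        group = np.where(residual > trim * peak, residual, 0.0) * gain
        model += group
        # subtract the beam response of the whole group (circular convolution)
        residual -= np.fft.ifft2(np.fft.fft2(group) * beam_ft).real
    return model, residual
```

With an ideal (delta-function) beam the subtraction is exact and the loop simply transfers flux from the residual to the model.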
In the first place, the analysis and experiments showed that no global imaging parameter will, by itself, solve the problem of imaging with large phase errors. These techniques will always be confused by the zero-phase image, and extra information must be provided to steer away from this result. Secondly, the experiments in which MEM was combined with self-calibration showed the importance of confinement of the object. The MEM deconvolution does not enforce confinement on the result. The CLEAN algorithm, which is the standard method in the LBI imaging process, does, by default, confine the result. The fact that the recovery algorithm would not work with MEM unless it was supplemented by a thresholding operation indicates the great importance of confinement for successful imaging. Although the standard algorithm may be called phase-closure or self-calibration, it is clearly the confinement and positivity constraints on the object that are the main forces in the image recovery. Closure-phase information by itself is not sufficient for imaging.

The important contribution of this study has been the development of the generic algorithm outlined in figure A-2.1 and its testing with a variety of constraints and classes of objects. Previous work on algorithms has concentrated only on the development of the procedure and has not been concerned with the roles played by the various parts of the algorithm. The significant outcome from this study is that imaging is possible with only the positivity - confinement constraints if they are applied properly in the image domain. Early experimenters did not properly deconvolve the beam before applying these constraints.
Later workers included the deconvolution with the CLEAN algorithm plus the additional closure-phase constraint on the data. These were quite successful, and the success was attributed to the additional closure-phase information. The experiments of this study have shown that it is the confinement and positivity enforced by the CLEAN algorithm that are mainly responsible for the success. The closure-phase information is used only to complete the details of the image once the major features are established. These results give a more complete picture of how the imaging algorithms operate.

The iterative algorithms (i.e. positivity/self-calibration) are judged to be useful techniques. It was apparent from the experiments, however, that while many objects could be imaged very well, some could not. It was also apparent that the more complex the object (that is, in terms of the number and size of features), the more iterations the algorithm would need for recovery. This indicates an important constraint on the object for successful imaging. The object must be confined, so that it does not overlap in the zero-phase image. This is a requirement that the area of the features in the object must be less than 1/2 the total, and that their separation must be greater than twice their widths (section E-1). Positivity and confinement are the basic conditions that the object must meet for imaging to be possible. If the object meets these conditions at some level, then with the inclusion of closure-phase information, very good images can be recovered. It may still, however, be a lot of work to find the result.

The imaging algorithms operate in three stages.
They are random searching for initial components, development of these "seeds" via the positivity and confinement constraints, and finally the completion of finer details via the closure-phase information. The accuracy of the final image is mainly determined by the beam deconvolution algorithm (CLEAN or MEM). The ability of these algorithms to work with the noise level and the sampling pattern determines the reliability of the final features of the image. Features whose components were not sampled in the aperture cannot be restored. Noise effects that cannot be modeled as independent errors at each antenna are not corrected by these algorithms. As long as the separation requirement for the object is met, the accuracy of the final image is independent of the fact that the phases are not completely known. The errors in the image will be those due to noise in the data and the effects of the standard image processing programs.

Two tests can be performed on the amplitude data alone (in the form of the zero-phase image) to determine the prospects for successful imaging. A plane cut through the zero-phase image must leave a set of isolated features surrounded by "white space". It must also be possible to CLEAN the image. To test this, the zero-phase image can be "CLEANed". The point at which the CLEAN stops working (due to noise or confusion) will indicate the level to which the correct image will be recoverable. If the CLEAN fails to operate, the recovery will fail; similarly, if the zero-phase image has no separated features, the recovery will fail. Figure A-2.8 shows an example of zero-phase images with separated and non-separated features.
[Figure A-2.1 (S101): General Image Recovery Algorithm — flow diagram: measured visibilities (data) and estimated visibilities (model) pass through the data constraints (amplitudes, closure phases) to form hybrid visibilities; these are gridded and transformed (FFT) to the image domain as the hybrid image; the beam is deconvolved (CLEAN or MEM) and the image constraints (positivity, confinement) are applied to give a new model image, which is sampled and transformed back (FFT⁻¹) to new model visibilities.]

[Figures A-2.2 - A-2.7 (GCFIN): A-2.2, the "correct" image reconstructed by CLEAN from undistorted data; A-2.3, the "distorted" image (360° antenna phase errors); A-2.4, image recovered by "Image-Constraint" (NO phase information); A-2.5, image recovered by "Self-Cal/CLEAN" (closure phase information); A-2.6, image recovered by "Self-Cal/MEM" (closure phase information); A-2.7, image reconstructed by the MEM from undistorted data.]

[Figure A-2.8: Zero-phase images showing separation of features — a. unseparated; b. separated (MCTB1).]

A-3 Conclusions

The objective of the thesis project has been to examine LBI imaging algorithms. These algorithms are iterative techniques which are able to recover images from data that is sparsely sampled and has very large phase errors. The study has focussed on the operation of the algorithms, with the aim of learning the significance of the constraints applied and defining the types of objects that can be imaged successfully.
It has been possible to answer these questions, and to improve parts of the processing. There are three major conclusions:

(1) Object Constraints

The first major conclusion is that LBI imaging will be successful only for a restricted class of object. These objects must consist of (positive) separated features which remain separated in the zero-phase image. In practice this requirement translates into the condition that the blank sky or "white-space" between the features must be greater than 1/2 the total area, and the features must be separated by a distance greater than twice their widths. Although the confinement to less than 1/2 the total area is more than sufficient in theory for recovery of the image, this fraction is necessary in practice. If the separation condition is met, then imaging is possible, but it may not be easy. The iterative algorithms, which use a model image and apply constraints in the image and data domains, have been found to be the most powerful technique for imaging (under these conditions). As the object approaches the separation limit, the imaging task becomes longer and more difficult. The confinement - positivity constraint has been found to be the most significant force for imaging, with many well separated objects being imagable with only this constraint. The closure-phase information, however, is important for giving details and accuracy to the image once the major features are established. If the closure-phase information is used, and the confinement constraint is well met, then the image can be recovered to an accuracy independent of the phase errors. The operation of the algorithms is critically dependent on the proper application of the image domain constraints, and these are largely applied by the CLEAN algorithm.
This makes the additional requirement that the noise level and the aperture sampling be such that the CLEAN algorithm operates successfully. This suggests two practical tests that can be performed to determine the potential in the data for imaging. One is to test the zero-phase image for separated features. If none are present, then the imaging will fail. The CLEAN algorithm and the beam can also be tested on the zero-phase image. If the algorithm does not operate successfully to "CLEAN" the zero-phase image, then the imaging will fail.

(2) Global Algorithms

The second conclusion concerns the use of "global" image parameter maximization procedures for imaging. These techniques, particularly in the guise of the maximum entropy method, are widely proclaimed as solutions to a wide spectrum of imaging problems. Experiments and analysis in this study, however, have shown that these methods cannot, by themselves, be used for imaging with large phase errors. These algorithms will almost always be confused by the zero-phase image, which has the maximum global parameter value. In the course of the study, it was found to be possible to combine a maximum entropy procedure with the other image constraints of confinement and closure-phase to produce a working algorithm. This combination of techniques has not been reported in the literature before, although several groups also say they have been successful. This combination gave equivalent results to the traditional self-calibration/CLEAN technique. However, the implementation was found to be a bit slower and more difficult to use.

(3) CLEAN enhancements

The third significant conclusion is that it is possible to improve the performance of the CLEAN algorithm.
This algorithm plays a central role in LBI imaging by enforcing the image constraints, and it is also very widely used with connected interferometers to remove the effects of the sampling pattern. The standard algorithm, however, does not give good results with extended features in the object; these are often reproduced as a series of stripes or corrugations. A new algorithm has been developed as part of this study which is stable for extended features and is also significantly faster in operation. This algorithm has extended the size of objects that could be processed with the LBI imaging simulations, and it will also be of general interest to astronomers processing large images with extended features from synthesis telescopes.

In addition to these major conclusions, it is possible to answer other (smaller) questions. These are outlined below:

(4) Array Design

At the outset of this project, there was some question as to whether the LBI array of antennas could in some way be optimized for use with the imaging algorithms. In particular, it was thought that perhaps an arrangement of redundant baselines, which would allow accurate phase calibration, would yield the best images. This study has shown that this would not be the best design. For the astronomical class of objects, imaging is possible without complete phase information, and one limiting factor in the accuracy of the image is the amount of data collected. Details cannot be restored to the image if they are not sampled in the aperture. Thus, it is more important to have complete aperture coverage than to have accurate phase information, and the array should be designed to give the most complete coverage possible. It is, of course, best to have as many antennas as possible in the array.
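The benefit of adding antennas can be made concrete with the standard counting formulas of interferometry (not specific to this thesis): an N-element array gives N(N-1)/2 baselines, but only (N-1)(N-2)/2 independent closure phases, so the closure information grows almost as fast as the data itself once N is moderately large.

```python
def n_baselines(n_antennas):
    # one baseline (one u-v track) per unordered antenna pair
    return n_antennas * (n_antennas - 1) // 2

def n_independent_closure_phases(n_antennas):
    # closure relations determine all but the N-1 antenna-based phases
    return (n_antennas - 1) * (n_antennas - 2) // 2

for n in (5, 8):
    print(n, n_baselines(n), n_independent_closure_phases(n))
```

For the 5 to 8 antenna arrays found effective in this study, the fraction of phase information recoverable from closure alone rises from 6/10 to 21/28.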
This increases the power of the closure-phase constraint and also improves the aperture sampling. These experiments have shown that the algorithms are effective for arrays with a minimum of between 5 and 8 antennas. It is important, however, to keep the number of pixels in the image consistent with the number of antennas in the array: there cannot be more significant picture elements than there are data points.

(5) Models

The iterative algorithms used for the image recovery must begin with an initial model or guess at the image. This is then developed by the algorithm and the constraints into the final image. There has been concern expressed that the initial model may influence the final result. This has been found not to be true (providing that sufficient independent data points are measured). The initial model may affect how long it takes to arrive at the final image, but it does not influence the end result. A model close to the correct image will quickly converge to the answer. A model "ignorant" of the answer simply takes longer to get to the end. A model deliberately contrived to try to confuse the algorithm seems to be treated as a bad guess and simply takes longer to work to the answer. It is best to avoid models that have only zero phases. Algorithms which use only the positivity-confinement constraint will not develop from a zero-phase model. The positive features of the initial distorted image are a better start in these cases. For algorithms using closure-phase, a zero-phase model can be used; it does not seem to be better or worse than other arbitrary starting points.

(6) Algorithm Design and Array Design

When this project was begun (in 1980) the only algorithm available for LBI imaging was the phase closure technique of Readhead & Wilkinson [20].
Although this method was being applied to LBI data, it was not always reliable. One of the starting points for the study was to see if the array design affected the phase closure algorithm. It has been found that the array layout does influence this algorithm through the selection of the reference antenna. Not all choices work and, for best results, the array should be laid out so that the reference antenna is involved with baselines spread over the whole range of lengths. This favours designs with the antennas for the shorter spacings near the middle of the array. Arrays for which the reference antenna baselines do not provide a good representation of the model can be impossible to use. The phase closure algorithm has since been superseded by the self-calibration technique of Cornwell & Wilkinson [30], which has been found to be insensitive to the array layout. Had this new algorithm not become available, the understanding of the sensitivity of the phase closure method to the reference antenna would have had a significant influence on new LBI array design.

(7) Extensions to the Self-Calibration Algorithm

The self-calibration algorithm, when combined with a confinement constraint such as that applied by the CLEAN algorithm, is a very powerful image recovery technique. Currently in use with connected interferometers such as the VLA or Westerbork, it is capable of producing images of remarkably wide dynamic range. As the phase errors with these instruments are small, good initial models are available to the self-calibration algorithm in the form of the uncorrected image. Unfortunately, however, the basic algorithm does not work reliably if the phase errors are large and there is no prior information about the source distribution. This is the situation for LBI, and it means that a basic initial model must be obtained by other methods before the self-calibration can be used [57].
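One root of this unreliability is that phase angles wrap at plus or minus 180 degrees, so they cannot be averaged as ordinary numbers; the usual remedy is to average the unit phasors instead. A small illustration (the function names are mine, not the thesis's):

```python
import numpy as np

def naive_mean_phase(phases):
    # ordinary arithmetic mean: misleading near the wrap point
    return np.mean(phases)

def circular_mean_phase(phases):
    # average the unit phasors, then take the angle of the resultant
    return np.angle(np.mean(np.exp(1j * np.asarray(phases))))

# two measurements of nearly the same phase, straddling the wrap at pi
phases = np.array([3.0, -3.0])   # radians; both close to +/- pi
```

The naive mean of these two values is 0, pointing in exactly the wrong direction, while the circular mean correctly returns an angle of magnitude pi.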
This problem is the result of the fundamental difficulty in "averaging" numbers that represent phase angles. Included in section E-4 is the development of an adaptive version of the algorithm which enables it to be effective even with very poor initial models. This modification to the technique will be of interest to those designing new LBI image development systems.

The general conclusion is that, subject to a number of limitations on the object, and with "reasonable" sampling and noise levels, very good images can be recovered in spite of the phase errors.

LBI Imaging Background

This thesis involves data processing for aperture synthesis radio telescopes with (very) long baselines. To set the background, it is worthwhile to discuss the concepts of operation of these instruments. While the principles of aperture synthesis are used in such diverse areas as synthetic aperture radar and medical tomography, each field has its own problems, nuances of operation, and unique jargon. For radio astronomy, the concept of sampling in the "aperture plane" is important to understanding the problems. This section will describe some of the features of the synthesis process which are important for later discussion.

The section is divided into four parts. The following part, B-1, introduces the concepts of the aperture plane, earth rotation sampling, and Fourier synthesis. These are all fundamental to the generation of LBI images. This part is not a development of the theory of LBI, but rather an outline of concepts which are helpful in understanding the LBI problems and solutions. This information is largely tutorial in nature, and is provided here partly as a convenience for readers from outside the field of LBI.
Part B-2 discusses, in some detail, the deconvolution algorithm used by radio astronomers to compensate for the effects of the telescope "beam" (or point response function). In the jargon, this is known as CLEAN, and it is an important part of the data processing for both connected and long baseline (unconnected) interferometers. This algorithm has been found to play a major role in the LBI data processing. A new, improved version is introduced in this section which not only extends the range of images that can be recovered but is also much faster in operation. Part B-3 illustrates the effects on images of phase errors in the aperture data. The main objective of the LBI imaging is to compensate for phase errors, and this section is included to illustrate, in isolation, the "spreading out" effect the errors have on the image. Finally, part B-4 outlines the closure-phase constraint which is widely used by LBI imaging systems. This section is provided separately as the constraint is a condition on the data due to the method of sampling; it is thus a constraint rather than an imaging algorithm.

B-1 Aperture Synthesis Concepts

Aperture synthesis was developed by radio astronomers to simulate the resolving power of a (very) large telescope without the need to build one. The resolution limit of a telescope or imaging system (either optical or radio) is largely determined by the size of the instrument. A rough estimate of the resolution capability of an instrument is λ/D, where λ is the wavelength of the radiation and D is a dimension, usually the diameter of the aperture or lens. In optical telescopes, because the wavelengths are very small, the resolution can be very good with practical sizes of D.
For an optical instrument with a diameter of 1 meter and a wavelength of 0.6 micro-meters, the λ/D ratio suggests a resolution capability of about 600 nano-radians! In practice, for most optical telescopes, the achievable resolution is limited to about 2500 nano-radians (0.5 arc-second) by disturbances in the atmosphere and not by the instrument itself. For radio observations, the wavelengths are 1 to 10 million times larger than the visible wavelengths, and a correspondingly larger instrument is needed to obtain the equivalent resolution. Unfortunately, while a one meter optical instrument is quite practical, a million meter radio instrument is quite impossible to build (and pay for). Aperture synthesis telescopes have therefore been developed to simulate the effect of a large instrument from the measurements made by a number of smaller ones. These systems allow radio astronomers to produce measurements of comparable resolution to those made by optical observers.

The aperture synthesis technique allows a number of small antennas and a computer to be substituted for a much larger antenna. The small antennas may be spaced at distances from a few tens of meters to several thousand kilometers. The very large maximum spacings can give very high resolution, although at the expense of a very small field of view. A long baseline instrument with a 4900 km baseline can, in principle, achieve a resolution of 0.00055 arc-seconds (at λ of 1.3 cm). The radio instrument is thus, in theory, capable of resolving two point objects separated by the diameter of a nickel at a distance of 4000 km (or almost the width of Canada!). This is a very precise measurement and, in addition to astronomy, the instrument can be used for very accurate determinations of the distances between the antenna stations.
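The numerical examples above follow directly from the λ/D estimate; a quick arithmetic check (constants only, no thesis data):

```python
import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

def resolution_arcsec(wavelength_m, aperture_m):
    # rough diffraction estimate lambda / D, converted to arc-seconds
    return wavelength_m / aperture_m * RAD_TO_ARCSEC

optical = resolution_arcsec(0.6e-6, 1.0)   # 1 m telescope at 0.6 micro-meters
lbi = resolution_arcsec(0.013, 4.9e6)      # 4900 km baseline at 1.3 cm
```

The optical case works out to roughly 0.12 arc-seconds (600 nano-radians), and the long-baseline case to the quoted 0.00055 arc-seconds.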
Long baseline interferometer (LBI) techniques are presently being used to monitor the earth's crustal movements and to study changes in the earth's rotation. In practice (of course), the LBI instruments are also limited in resolution by noise, electrical calibration errors, and atmospheric effects. These distortions conspire to reduce the achievable resolution of a practical instrument. A discussion of methods of overcoming the effects of the atmosphere and the sparse aperture sampling is the main subject of this thesis.

The chief limitation of the radio synthesis system is its low sensitivity. While a fully filled aperture will collect and process all of the energy from the whole aperture, the aperture synthesis system only intercepts a tiny bit of the signal in the aperture. The signal received is thus very small, and this limits the instruments to the study of stronger sources. As the radio sky has not yet been examined fully for even the brightest objects, astronomers are content (for the moment) to examine the brighter objects.

B-1.1 The Aperture Plane

There are many ways of visualizing the operation of an aperture synthesis instrument. One of the simplest ways to view some of the basic concepts is to make use of the ideas of Fourier optics and to draw an analogy with the familiar formation of a visible image by a lens. Figure B-1.1 shows a schematic cross section of a lens imaging system. In this case it is assumed that the object emits monochromatic radiation and that it is located effectively at infinity.
In practice, if the signal bandwidth is small, and the distance to the object is large enough that the spherical waves can be considered plane across the aperture, the Fourier optics approximation is valid. In this case the image is formed in the focal plane of the lens. The analysis of Fourier optics indicates that for a given brightness distribution in the object or sky plane, the signal at the aperture is a very good approximation to the Fourier transform of the brightness distribution [37] [38]. In forming the visible image, the lens is performing an inverse Fourier transform operation on the aperture signal. The signal at the aperture can thus be thought of as the "spatial frequency spectrum" of the object brightness distribution. The image forming action of the lens is to sample the spatial frequency spectrum and to compute the Fourier transform to form the image.

The reason for the increased resolution with a larger lens can be understood with this model. The fine detail of the image is represented by the higher spatial frequency components, and these are located furthest from the center of the aperture. The larger the aperture diameter, the more high frequency components will be included in the Fourier transform process, and the more detail will be included in the image.

It is worth pointing out two features of the signals in this process. The sky brightness distribution is a positive real quantity. This is the result of there being no "negative" brightness areas in the sky, and of the fact that only "real" signals can be observed. The Fourier transform of a "real" function is an even (or Hermitian) function, with (complex) values in one half plane being derivable from their counterparts in the other half plane.
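This Hermitian property is easy to verify numerically; a small demonstration with an arbitrary real "sky" (illustrative only, using the discrete transform's wrap-around indexing):

```python
import numpy as np

rng = np.random.default_rng(7)
sky = rng.random((16, 16))      # real, positive brightness distribution
vis = np.fft.fft2(sky)          # aperture-plane signal

# Hermitian symmetry: V(-u,-v) equals the complex conjugate of V(u,v),
# so values in one half plane determine those in the other half
mirrored = np.conj(np.roll(np.flip(vis, axis=(0, 1)), 1, axis=(0, 1)))
hermitian = np.allclose(vis, mirrored)
```

The flip-and-roll simply forms V(-u,-v) under the FFT's modular indexing; `hermitian` comes out true for any real input.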
In practice this means that only half of the total aperture need be sampled. The aperture signal, which is the Fourier transform of the (real) object, is a complex number. This has the practical effect, for synthesis instruments, that the aperture signal must be measured to include both its amplitude and its time of arrival (or phase with respect to a time reference). The phase portion of the measurement is difficult to achieve, particularly with long baseline instruments.

B-1.2 Aperture Sampling

Figure B-1.2 shows a schematic representation of image formation by the aperture synthesis technique. In this system, the signal at the aperture is sampled by an array of antennas. These sampled signals are connected to a computer and plotter where the correlation of the antenna signals, the Fourier transform, and the image reconstruction processes are performed. In this system, each antenna pair is sampling values of the spatial frequency spectrum of the object, and the computer is providing the calculations necessary to convert the spectral samples into an image. This analogy makes it clear that a wide spacing of the antennas is needed to sample the high frequencies of the spatial spectrum to obtain high resolution. Clearly, it is also necessary to sample the lower frequency components to allow reconstruction of the extended area features in the image.

The analogy with the optical imaging system can be used in one more way to give an idea of the complex amplitude-phase measurements needed in the aperture plane. Figure B-1.3 shows a schematic diagram of the familiar double slit interference experiment.
If the object consists of a point source located on the axis of symmetry of the slits, then the interference fringe pattern seen at the image plane will have a decaying sinusoidal pattern, and the peak of the envelope will lie at the center between the slits. The envelope of the fringe pattern is dependent on the brightness of the source, the degree of monochromaticity of the source, and the width of the slits. A wide bandwidth source, or wide slits, will tend to wash out the fringe pattern into an indistinct blur. If the spacing of the slits is changed, the spacing of the fringes will also change. The more widely separated the slits, the more closely spaced the fringe pattern becomes. The wider spacing is, in effect, selecting a higher spatial frequency component of the source. Figure B-1.4 shows the same situation as figure B-1.3 except that now the object is offset from the center by an angle θ. This has the effect of shifting the fringe pattern so that its peak is also offset from the centerline.

Some understanding of the meaning of the two complex measurements in the aperture plane can be gained from this model. The fringe amplitude is related to the brightness of the object at the spatial frequency selected by the slit separation, and the phase of the fringe pattern, with respect to the slit axis, is related to the position of the component in the object field. A measurement of both the amplitude and phase is thus required to reconstruct an image with the correct strength and distribution of brightness. A more complicated object, made up from a number of point sources, will of course result in a more complex pattern of interference fringes.
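The amplitude and phase behaviour described above is the Fourier shift theorem: moving the source leaves every spatial frequency amplitude unchanged and adds a phase ramp proportional to the offset. A one-dimensional numerical check (pixel units, illustrative only):

```python
import numpy as np

n = 64
on_axis = np.zeros(n); on_axis[0] = 1.0     # point source at the origin
offset = np.roll(on_axis, 5)                # the same source, shifted by 5
V0, V1 = np.fft.fft(on_axis), np.fft.fft(offset)

# amplitudes are identical; phases change by a linear ramp
same_amplitude = np.allclose(np.abs(V0), np.abs(V1))
phase_step = np.angle(V1[1] * np.conj(V0[1]))   # phase change at lowest frequency
```

The measured phase step per frequency bin is -2*pi*5/64, exactly the ramp the shift theorem predicts for an offset of 5 pixels.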
B-1.3 Earth Rotation Synthesis

In the illustrations provided to this point, the synthesis technique has been shown only in one dimension. A linear array of antennas samples a line in the aperture, and only allows reconstruction of a single line (slice) of the two dimensional object. The aperture must be sampled in two dimensions to allow two dimensional image reconstruction. To provide two dimensional sampling of the aperture, early small synthesis instruments used a real two dimensional array of antennas to literally do the two dimensional sampling. With larger (LBI) instruments a two dimensional array becomes impractical, and a technique known as "earth rotation synthesis" is used. The essence of rotation synthesis is to build a fixed pattern of antennas (often in a line) and to let the daily rotation of the earth move the array through the angles needed to sample the aperture in two dimensions.

Figure B-1.5 shows a diagram of some of the geometry of this situation. To make the picture easier to visualize, the earth has been cut in half at the equator, and the two antennas of the interferometer are assumed to be on the equatorial plane as shown. As seen from the source, the baseline will rotate through 360 degrees in 24 hours. If the source position is not at the pole, then the baseline will also change its apparent length as viewed from the source as the earth rotates. From a source located at zero declination (above the equator), the baseline cannot be seen to rotate, and the baseline only changes length as the earth rotates. A source at zero declination thus cannot be observed by a synthesis system that only has east-west baselines.
In the more general case, the antennas on the earth's surface will not all lie at a constant latitude, and the baseline will have a component out of the equatorial plane. In this case both a rotation and a foreshortening will be seen by any source visible from the locations of both antennas forming the baseline.

B-1.4 Aperture Sampling Tracks

A baseline of length D, at a declination d_b and an hour angle L_b, will trace out an elliptical track in the aperture plane as the earth rotates [39]. The equations of this ellipse are:

u = D cos(d_b) sin(L_s - L_b)    (B.1)

v = D ( sin(d_b) cos(d_s) - cos(d_b) sin(d_s) cos(L_s - L_b) )    (B.2)

where d_s and L_s are the declination and hour angle of the source being observed. Note that if the baseline is parallel to the equator (i.e. the two ends are at the same latitude) then d_b = 0 and the ellipse has its center at the origin of the aperture plane. If the baseline is not east-west, then the center of the ellipse is offset from the origin along the v axis by an amount given by:

v_0 = D sin(d_b) cos(d_s)    (B.3)

Figure B-1.6a shows the angles involved in locating the baseline on the earth's surface. To calculate the baseline parameters (true length and angles) from the antenna geographic positions (longitude and latitude), a conversion of coordinates and resolution of vector components is required. This is provided by the set of equations shown in figure B-1.6b. Because of the use of the letters "u" and "v" for coordinates in the aperture plane, astronomers often refer to this as the "u-v" plane.

Figure B-1.7 shows the elliptical tracks formed in the aperture plane by three antennas on the earth's surface for a source at a declination of 45 degrees. In this case the three stations are the DRAO at Penticton, the ARO at Lake Traverse, and the VLA in New Mexico.
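Equations (B.1) and (B.2) can be evaluated directly; a sketch that also checks the east-west special case noted above (unit baseline length, all angles in radians, names mine):

```python
import numpy as np

def uv_point(D, d_b, L_b, d_s, L_s):
    """One aperture-plane sample, equations (B.1) and (B.2), for a
    baseline of length D at declination d_b and hour angle L_b,
    observing a source at declination d_s and hour angle L_s."""
    u = D * np.cos(d_b) * np.sin(L_s - L_b)
    v = D * (np.sin(d_b) * np.cos(d_s)
             - np.cos(d_b) * np.sin(d_s) * np.cos(L_s - L_b))
    return u, v

# east-west baseline (d_b = 0): the track should be an ellipse centred
# on the origin with semi-axes D and D*sin(d_s)
d_s = np.deg2rad(45.0)
track = [uv_point(1.0, 0.0, 0.0, d_s, L)
         for L in np.linspace(0.0, 2.0 * np.pi, 25)]
```

Every point of the sampled track satisfies u**2 + (v/sin(d_s))**2 = D**2, confirming the centred ellipse described in the text.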
Note that one track in the aperture or u-v plane is formed by each pair of antennas. Thus there are N(N-1)/2 possible baselines and elliptical tracks with a set of N antennas. In figure B-1.7, note that the ARO-DRAO baseline (#3), which is almost east-west (latitudes 46.0 and 49.3), is almost centered on the u axis, and that the other baselines, which have a large north-south component, are offset from the center. If the antennas were arranged in a straight line, then a set of concentric ellipses would result, as shown in figure B-1.8.

Note that the tracks shown are not complete ellipses. This is because the source is not visible from the stations for the full 24 hour period needed for the earth to rotate through 360 degrees. The source will be below the horizon at one or both of the antenna sites for some period of the day. The length of the arc for each baseline is calculated from the declination of the source, and the location and horizon angle of each antenna site. The half-angle V_a over which the source is visible away from the zenith meridian is given by:

V_a = arccos { ( sin(H) - sin(d_s) sin(L) ) / ( cos(d_s) cos(L) ) }    (B.4)

where H is the horizon angle, d_s is the source declination, and L is the latitude of the antenna. The total hour angle for which the source is visible for a given baseline is then just the sum of the half visible angles for each station, minus the difference in longitude between the stations.

At first glance the failure of the source to be visible for 24 hours, and thus to allow a closed ellipse, would seem to prevent full sampling of the aperture plane. However, as was mentioned earlier, the sky brightness is a real function and the aperture signal is thus Hermitian.
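Equation (B.4) in numerical form (radians throughout; the clipping of the arccos argument, which makes circumpolar sources return pi and never-visible sources return 0, is my addition, not part of the thesis equation):

```python
import numpy as np

def visible_half_angle(horizon, d_s, lat):
    """Equation (B.4): the half hour-angle away from the zenith
    meridian over which a source at declination d_s stays above the
    horizon angle for an antenna at latitude lat."""
    c = (np.sin(horizon) - np.sin(d_s) * np.sin(lat)) / (np.cos(d_s) * np.cos(lat))
    return np.arccos(np.clip(c, -1.0, 1.0))
```

As a sanity check, an equatorial source seen from the equator with a zero horizon angle is visible for a half-angle of pi/2 (12 hours in total), while a source at declination 80 degrees seen from latitude 49 degrees is circumpolar.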
This means that one half of the plane can be reconstructed from the other half, and the sampling instrument need only measure the signals in half the plane. This is indeed fortunate, since few sources are visible for a full 24 hours, but most sources are visible for 12 hours if they are in the same hemisphere as the baseline. Figure B-1.8 shows the aperture sampling tracks for an east-west array observing a series of sources at declinations from 90 degrees (the pole) to -15 degrees. Note that at the celestial pole a closed circular track is formed. As the declination moves away from the pole, the track becomes an ellipse, and the track segments get shorter as the declination decreases to zero. At zero declination, the ellipses have collapsed into lines on the u axis. This occurs because the baseline lies wholly in the equatorial plane and is not seen to rotate by sources on the equator. At small negative declinations, the ellipse appears again, but the tracks are very short, in fact too short for effective sampling of the aperture. This set of illustrations gives an idea of how the aperture sampling coverage changes with the source declination.

The aperture sampling curves of figures B-1.7 and B-1.8 have small crosses marked on each track. Because each baseline has a different longitude, each baseline is at a different rotation angle as seen by the source. This means the degree of foreshortening is different, and each baseline is sampling a different area of the aperture plane. The crosses shown in the figures represent the points in the aperture plane being sampled at the instant of time when the source is on the zenith meridian of the longest baseline, or the outermost arc.
That these points are not aligned is essentially due to the curvature of the earth. The location of these points is determined by the differences in longitude between the baselines and the longest (or reference) baseline. This is also illustrated in a cross sectional view in figure B-1.9. This detail is mentioned here because it is important for the image recovery algorithms that use "phase closure". These methods must operate on signals which are all measured at the same time. The pattern of crosses shown on the sampling tracks is the set of points being sampled at one time. Note that some tracks will "end" before others because the source will go below the horizon considerably sooner at some antennas. The closure-phase constraint may thus not be available for all of the aperture samples.

B-1.5 Long Baseline Interferometer System

The process of data collection is outlined in figure B-1.10, which shows the overall hardware arrangement for a two station interferometer. For long baseline interferometry (LBI) measurements, the signal from each receiver is recorded on a video tape recorder together with a (very accurate) time signal from an atomic clock. At the end of the observing period the tapes from each station are brought together at a "play-back center". Pairs of tapes are played back and correlated for each possible pair of antennas. The tapes are aligned in time so that the two play-back outputs represent the signals recorded for the same portion of the signal wavefront. To achieve the effect of a plane aperture, the signal from one antenna must be delayed by a calculated amount to allow for the extra path length to the other antenna.
As figure B-1.10 shows, the amount of delay needed is approximately given by:

t = X sin(θ) / c    (B.5)

where X is the chord distance between the antennas, θ is the angle between the baseline bisector and the source direction, and c is the speed of light. Compensation is also necessary to allow for the rotation of the earth and the resulting Doppler shift of the signals at each antenna. The antennas will also move (changing the orientation of the baseline) as the earth rotates during the period t. Compensation for these effects may be accomplished by offsetting the local oscillators at the receivers by a calculated amount. The antenna signals are then correlated by a multiplier circuit, and the real and imaginary parts of the product are transferred to a digital computer for image reconstruction.

B-1.6 Fringes

The recordings of the data collected by the LBI instrument are known in the jargon of astronomy as "fringes". This is because a time plot of the signals from the correlator resembles the interference fringes from a double slit optical experiment, as shown in figure B-1.3. This section will illustrate how these fringes are formed in LBI. The signals sampled by the instrument in the aperture plane are complex numbers that represent the object visibilities. Figure B-1.11 shows a contour plot of the real part of the aperture signal for a simple object consisting of two Gaussian sources. In this case the signal is a set of "ridges" resembling a corduroy material. The action of the telescope is to sample this function. Figure B-1.12 shows the sampling tracks for an 8 antenna east-west array.
If this diagram is overlaid on figure B-1.11, the nature of the signals sampled can be seen. Two tracks, numbers 25 and 26-27 (26 and 27 are coincident in this array), are marked on the diagram. Figure B-1.13 shows the simulated recordings for these sampling tracks; both the real and imaginary plots are shown in the figure. Note that as the sampling path moves through the ridges, a varying signal is recorded which resembles a fringe pattern. The inner tracks, 1 and 2, will pass over only one ridge and generate only a single peak. The tracks further out will cross more ridges and have correspondingly more peaks in their records. The outermost track, 28, crosses the most ridges and hence has the greatest number of peaks in its record; it is said to have the highest "fringe frequency". The need to properly define all of these peaks in the outermost record sets the sampling rate needed for the array. Note also that at some point during the rotation of the earth, the track is moving along a ridge, rather than across it, and a flat portion of the record is formed. This is the "tangent point", and it occurs when the line between the two sources is parallel to the line between the two antennas.

B-1.7 Image Reconstruction

The basic imaging process performed by the computer is shown in figure B-1.14. The first two operations after the data are passed to the computer involve converting the samples measured along the curved sampling tracks into a rectangular grid of data representing the aperture plane, and the correction of the data for any instrumental errors. This is usually referred to as "calibration". The actual image reconstruction takes place in the two processes labeled "fast Fourier transform" (FFT) and "CLEAN".
These essentially perform the inverse Fourier transform operation required to reconstruct the image. The Fourier transform is usually accomplished with a "fast" Fourier transform (FFT) algorithm, which is speedy but requires the data to be sampled on a regular grid with a number of grid lines equal to a power of two. The FFT algorithm is used because of the size of the data sets: about 1000 samples are collected for each antenna, and with an 8 antenna array there are about 28000 correlation coefficients to transform into the image. The "CLEAN" algorithm is used to deconvolve the image from the effects of the "synthetic beam" of the instrument. The CLEAN algorithm is discussed in more detail in section B-2. The plotting algorithm converts the reconstructed image data into a visible image, either in the form of a contour map or a plot of radio intensity distribution (like a conventional photograph). The image is then ready for study by the astronomer.

This section has outlined the basic concepts of aperture synthesis imaging. It has illustrated how the process corresponds to sampling a set of complex numbers in the aperture plane that can be formed into an image with a Fourier transform operation. It was noted that the amplitude of the complex samples gives information about the brightness of the object, while the phase tells about the position of the features. The use of the Earth's rotation to provide two dimensional sampling of the aperture was explained, and it was illustrated how the signals measured along the elliptical sampling tracks yield a time varying signal known as fringes. The remaining portions of this section review in more detail techniques for deconvolving the beam, the effects of phase errors on the images, and the constraint of phase-closure.
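As a concrete illustration of the re-grid and transform steps of figure B-1.14, the following sketch places visibility samples on a power-of-two grid and inverse Fourier transforms them to a dirty image. The nearest-neighbour gridding and Hermitian bookkeeping are simplifying assumptions; a real pipeline would calibrate the data and use a convolutional gridding kernel first.

```python
import numpy as np

def dirty_image(u, v, vis, n=256):
    """Grid visibility samples onto an n-by-n grid (n a power of two,
    as the FFT requires) and inverse-transform to the dirty image.
    u, v are in units of grid cells; vis are complex visibilities."""
    grid = np.zeros((n, n), dtype=complex)
    iu = np.round(u).astype(int) % n
    iv = np.round(v).astype(int) % n
    np.add.at(grid, (iv, iu), vis)
    # Hermitian symmetry: each baseline also measures V(-u,-v) = V*(u,v)
    np.add.at(grid, ((-iv) % n, (-iu) % n), np.conj(vis))
    img = np.fft.ifft2(grid)
    return np.fft.fftshift(img.real)  # put the field centre mid-image
```

With all visibilities equal (a point source at the phase centre), the result peaks at the centre of the field, surrounded by the sidelobe pattern produced by the unsampled cells.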
[Figures B-1.1 to B-1.14 appear here in the original; only the captions and the baseline calculation of figure B-1.6b are reproduced.]

FIGURE B-1.1 Image formation by a lens
FIGURE B-1.2 Image formation by aperture synthesis
FIGURE B-1.3 Interference fringe pattern
FIGURE B-1.4 Phase of fringe pattern
FIGURE B-1.5 Baseline rotation as seen by source
FIGURE B-1.6a Location of baseline on Earth
FIGURE B-1.6b Baseline calculation from longitude and latitude of antennas [39]:
  Given the longitudes L1, L2, latitudes θ1, θ2, and heights above sea level H1, H2 of the two antennas, find the baseline length (chord) D, the baseline declination d_b (the declination of the baseline bisector), and the baseline longitude L_b. P1 and P2 are vectors from the center of the Earth to the antennas.
  1) Convert geodetic latitude to geocentric (arcsec):
     θ - θ' = 692.743 sin(2θ) + 1.163 sin(4θ) + 0.0026 sin(6θ)
  2) Calculate the radial distance to each antenna (meters):
     P = 6367489.4 + 10692.6 cos(2θ) + 22.4 cos(4θ) + 0.05 cos(6θ) + H
  3) Calculate cartesian coordinates:
     D_x = P1 cos(θ1') cos(L1) - P2 cos(θ2') cos(L2)
     D_y = P1 cos(θ1') sin(L1) - P2 cos(θ2') sin(L2)
     D_z = P1 sin(θ1') - P2 sin(θ2')
  4) Calculate polar coordinates:
     d_b = Arctan{ D_z / Sqrt[ D_x^2 + D_y^2 ] }
     L_b = Arctan{ D_y / D_x }
     D = Sqrt[ D_x^2 + D_y^2 + D_z^2 ]
  Useful conversion factors: 206264.806 arcsec = 1 radian; 57.2957 degrees = 1 radian.
FIGURE B-1.7 Baseline sampling tracks in aperture plane
FIGURE B-1.8 Baseline tracks for various source declinations
FIGURE B-1.9 Effect of curvature of Earth on baseline longitude
FIGURE B-1.10 Long-baseline interferometer system
FIGURE B-1.11 Real part of aperture signal for two Gaussian sources
FIGURE B-1.12 Sampling tracks for 8 antenna East-West array
FIGURE B-1.13 Simulated "fringe" records for two source object
FIGURE B-1.14 Aperture synthesis image processing

B-2 Deconvolving the Beam

The images obtained as the "principal solution" by directly Fourier transforming the samples of the aperture signal collected by a synthesis array can be quite poor.
The sparseness of the sampling of the aperture leads to a complicated "synthetic beam" pattern, and the principal solution image is the convolution of the object brightness with this beam pattern. The beam must somehow be removed from the image before it can be studied. This problem is common to both connected and long baseline interferometers. The proper compensation for the effects of the beam has also proven to be of critical importance to the success of the LBI imaging algorithms. This section will review the algorithms used by radio astronomers to correct for the effects of the synthesis beam sidelobes.

There are several important features to be outlined. The effects of the beam in the image result from the unsampled areas of the aperture, and thus the implicit objective of the deconvolution is to fill in or extrapolate these unknown areas (with reasonable numbers). This can only be done by using information about the object in addition to the data. The successful CLEAN algorithm uses the reasonable assumption that the object is confined to compact areas of the field of view. This leads to trouble, however, if the object does contain areas of extended emission. The standard algorithm does not recover these areas properly, and this is both a nuisance for users of connected interferometers and a limit for LBI users. The significant result of this section is the development of a new CLEAN algorithm which not only gives the proper result for extended features but is also significantly faster in operation. This new algorithm will be of considerable practical interest to users of both connected and long baseline interferometers.

B-2.1 Reducing the Effects of the Beam

In most synthesis instruments the sidelobes are suppressed somewhat by apodizing or "tapering" the measured aperture data.
Often a Gaussian weighting function is used, so that the data at the edge of the measured area are only 20% of their true values. The Gaussian weight has the advantage that it transforms to another Gaussian in the image plane. (Actually, as only a truncated sampled Gaussian is used, the Fourier transform is the convolution of a Gaussian with a sinc function.) This weighting is not an entirely satisfactory solution, however, as resolution is being sacrificed by de-emphasizing the high spatial frequency measurements. In addition, for many observations, the gaps in the sampling occur in the middle areas of the aperture plane, and these are not helped by a (simple) tapering function.

Burns & Yao [22] describe some early attempts to fill in these holes by averaging adjacent points in the aperture domain. This approach is not of general use. For high resolution maps, the data points are independent, and they cannot be recreated by averaging nearby points. Burns & Yao present example results showing confined symmetric objects. These objects have a simple aperture phase signal. The confinement makes the aperture data redundant (over-sampled) and the symmetry makes the phase signal smooth. Both of these factors contribute to the success of their averaging in restoring the image. The aperture data consist of complex numbers, and it is possible (with a simple object) to fill in a small hole by averaging the numbers that completely surround the region. For a larger area, or one that is not completely surrounded by data, the averaging process will not work, as the phases cannot be properly approximated in the unsampled area.
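The Gaussian taper described at the start of this subsection can be sketched as follows; the width is chosen so that the weight falls to the 20% edge value quoted in the text (the function form and argument conventions are assumptions):

```python
import numpy as np

def gaussian_taper(u, v, r_max, edge=0.2):
    """Gaussian apodizing weights for visibility samples at (u, v).
    The width is set so the weight falls to `edge` (20% here, as in
    the text) at the edge radius r_max of the measured aperture."""
    r2 = u**2 + v**2
    sigma2 = -r_max**2 / (2.0 * np.log(edge))  # log(edge) < 0
    return np.exp(-r2 / (2.0 * sigma2))
```

Multiplying the gridded visibilities by these weights before the transform trades resolution for lower sidelobes, exactly the compromise noted above.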
It is important to notice that the deconvolution of the beam is, in effect, an interpolation-extrapolation process, because it involves filling in the unmeasured areas of the aperture with numbers other than zero. The practical solutions to this problem thus require the use of additional information about the object to guide the creation of these numbers. In the case of radio astronomy, the additional constraints are that the object is confined and positive. The practical deconvolution algorithm must therefore work largely in the image domain to include these constraints.

The point response function (beam pattern) of a real synthesis telescope could be measured by imaging a point source; however, the usual method is to Fourier transform a set of numbers with one in each measured sample location and zero in the unsampled spots. The sidelobes are effectively created by the unmeasured areas of the aperture, and the assumption during the Fourier transform that these areas are zero. Although zero is known to be a poor guess for these missing numbers, it is the only possible one if a fast Fourier transform (FFT) is used.

Aperture synthesis radio astronomy made a significant step forward with the introduction in 1974 of the CLEAN algorithm by J.A. Hogbom [23]. This algorithm is a solution to the problem of deconvolving the synthesis telescope's point response function, or beam, from the image. Similar algorithms had been in use for a number of years by other observers [56] for solving beam deconvolution problems. Before CLEAN, the formation of images with interferometers was practical only with (nearly) fully sampled apertures, in order to ensure that the beam sidelobes were well outside the main image area.
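The beam computation just described, a transform of ones at the measured sample locations and zeros elsewhere, can be sketched as (grid size and index conventions are assumptions):

```python
import numpy as np

def dirty_beam(sampled, n=128):
    """Point response ("dirty beam"): the Fourier transform of a mask
    with one in each measured (u, v) cell and zero in the unsampled
    cells.  `sampled` is a list of (iu, iv) grid indices."""
    mask = np.zeros((n, n))
    for iu, iv in sampled:
        mask[iv % n, iu % n] = 1.0
        mask[(-iv) % n, (-iu) % n] = 1.0  # Hermitian counterpart
    beam = np.fft.fftshift(np.fft.ifft2(mask).real)
    return beam / beam.max()  # normalize the main lobe to one
```

The fewer cells set to one, the larger and more extensive the sidelobes of the resulting beam, which is the sparse-sampling problem the deconvolution must undo.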
With apertures that are not fully sampled, the sidelobes may lie inside the image area of interest, and the beam must be deconvolved before the image can be studied. The CLEAN algorithm is a practical solution to this problem, and it is now in use with almost every major synthesis instrument. The algorithm has proven to be straightforward to implement, to work effectively, and to be robust in operation with noisy data. Schwarz [26] has shown that, in spite of its simplicity, the algorithm is an optimum least squares fit of sine functions to the visibility data.

At the time of CLEAN's development, aperture synthesis instruments were of limited sensitivity, and most of the images were easily represented by an empty field with a few compact bright sources. With the development of better instruments and bigger computers, the algorithm is now being applied to quite different images which include areas of extended brightness. These images require a great many (10,000) iterations for deconvolution, and they may be subject to two types of failure. Often the extended areas of brightness are reconstructed as a series of ridges (corrugations or stripes), and there are sometimes numerical failures due to the very large number of subtraction operations being performed with limited precision. While Schwarz [48] has shown that the stripes correspond to visibilities outside the range of measurement, and thus the striped image is consistent with the data, the stripes are nonetheless inconsistent with the astronomer's (current) view of the universe. The fabric of the celestial sphere was not cut from striped cloth! As the CLEAN algorithm is able to perform the deconvolution successfully for narrow sources, it seems reasonable to expect that it also be consistent with the constraint that extended sources are not striped.
Segalovitz & Frieden [53], Palagi [47], Cornwell [45], Hogbom [46], and Schwarz [48] have recently discussed the problem and some possible remedies. In the remainder of this section, the CLEAN algorithm and the failure mechanism will be reviewed, and the methods of correcting the problem will be discussed. A new, very simple, remedy will be presented which gives superior results. An example will be shown which illustrates the effectiveness of the modified algorithm.

B-2.2 The Standard CLEAN Algorithm

The basic CLEAN algorithm is an iterative solution to the (classic) deconvolution problem:

D = B_d * O  ( + noise )     (B.6)

where O is the object brightness distribution (on the celestial sphere), B_d is the instrument point response function or "dirty beam", D is the "dirty image" which is obtained as the Fourier transform of the observed visibility data, and * denotes the convolution operation. In practice the three functions are all sampled and are of limited extent. An important practical feature of the beam function, B_d, is that it consists of a prominent main lobe surrounded by a pattern of smaller sidelobes. While the sidelobes are not insignificant in size, and they extend over a wide area, the main lobe is still the dominant feature. This has the consequence that the position of bright features in the object will be preserved in the dirty image. The objective is to recover the object brightness, O, given the dirty image and the dirty beam. The problem is not straightforward because the beam function B_d does not have a convolutional inverse, and the data are noisy. The problem is mathematically "ill-posed", but it is solvable in many cases by the inclusion of additional constraints on the object brightness.
The major constraints are that the sky is largely empty (below the noise level) and that any features are compact and few in number. The desired solution is a set of components, or a collection of points of brightness, referred to as the density file C. The requirement is that this density file, when convolved with the dirty beam, fits the dirty image within the noise level:

D ≈ B_d * C     (B.7)

The process by which the density file, C, is created by the standard CLEAN algorithm [23] is outlined in figure B-2.1. This involves the repeated subtraction of the dirty beam from the location of the brightest element in the "residual map" R_i. The residual map is initially set equal to the dirty map, and it collects the remainder after each subtraction. A "loop gain" parameter, g, may be used to scale the beam before each subtraction. If, for example, a gain of 0.1 is used, the beam is scaled down to 10% before the subtraction. The density file components are modified delta-functions which are placed in the corresponding location where each beam pattern is subtracted from the residual map. The amplitude of each delta-function denotes the amplitude of the beam removed, and thereby records the location of each component hidden in the object. The subtraction process continues until the residual map is smooth and contains no more major peaks, or until the noise level is reached, or until the computing budget is depleted.
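The subtraction loop of figure B-2.1 might be sketched as follows. This is a minimal sketch: the beam array is assumed to be twice the map size with its unit peak at the centre, and the stopping threshold is an added convenience.

```python
import numpy as np

def hogbom_clean(dirty, beam, gain=0.1, niter=500, threshold=0.0):
    """Standard CLEAN: repeatedly find the peak of the residual map,
    subtract the shifted, scaled dirty beam there, and record a
    delta-function component in the density file."""
    residual = dirty.copy()
    components = np.zeros_like(dirty)
    n = dirty.shape[0]                        # assume a square map
    cy, cx = beam.shape[0] // 2, beam.shape[1] // 2
    for _ in range(niter):
        y, x = np.unravel_index(np.argmax(residual), residual.shape)
        peak = residual[y, x]
        if peak <= threshold:
            break
        components[y, x] += gain * peak       # density file entry
        # subtract the beam pattern centred on (y, x)
        residual -= gain * peak * beam[cy - y:cy - y + n,
                                       cx - x:cx - x + n]
    return components, residual
```

The density file returned here is what equations (B.8) and (B.9) below then convolve with the clean beam to form the restored image.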
The accumulated density file is then convolved with an arbitrary restoring beam B_r (the "clean beam", which has no sidelobes), to yield a restored image, I_r, free from the confusing effects of the beam sidelobes and representing a reasonable estimate of the object given the resolving power of the telescope:

I_r = C * B_r     (B.8)

The final residual map is usually added to this restored image to restore low level features that may not have been "CLEANed" and to keep the noise level realistic in the final image, I_f:

I_f = C * B_r + R_f     (B.9)

The final residual, R_f, will be scaled slightly to account for differences in area between the dirty and the clean beams. The restoring operations of convolution with the CLEAN beam and addition of the residuals are not shown in figure B-2.1.

B-2.3 Forming Stripes with CLEAN

When confronted with a dirty image which includes broad (extended) features, the algorithm does not perform well and often turns the smooth areas into a series of ridges or stripes. The mechanism for the formation of these stripes is shown in figure B-2.2. When the beam pattern is subtracted from the peak of a broad feature in the dirty map, the effect is to superimpose the pattern of the sidelobes onto the previously smooth feature. Thus, when looking for the next component in the residual map, the impressed ripples are found as the peak locations, and the result is a density file with components spaced apart at the interval of the ripples. It is these separated components that give the ridges in the restored map.
As these components have a (spatial) frequency close to the dirty beam sidelobe spacing, as Schwarz [48] has shown, they represent features outside the measured visibility area. The problem is especially serious if the beam is heavily oversampled, so that the "sidelobes" (in this case the points adjacent to the beam center) are large and make a significant impression on the residual map. If the image were noise-free, and the CLEANing operations could proceed for a great many iterations without any loss of accuracy due to arithmetic precision, then the algorithm would eventually fill in the holes to yield a smooth reconstruction. However, the data are always noisy, the arithmetic is done to limited precision, and the computing budget is of finite size, and so the process is usually terminated before the holes can be repaired.

There are a number of ways to address this problem. One approach notes that it is the beam sidelobes which lead the algorithm astray, and a solution lies in making the sidelobes less prominent. This can be done by using a small loop gain or by altering the dirty beam. The beam can be altered by adding an artificial "spike" or "delta-function" to the center of the main lobe. This has the effect of suppressing the sidelobes relative to the center, and CLEANing with such a "spiked" beam does indeed reduce the effect of ripples in the image. Cornwell [45] provides a relation for determining the spike size and illustrates maps successfully deconvolved with the procedure. CLEANing with a modified beam was also tried during this study, and it was found to be successful when the spike was about 10% - 15% of the beam peak. There are two difficulties with this solution.
Because the spike has the effect of reducing the loop gain, more subtraction operations will be required than with an unmodified beam. This will increase the running time of the program, and may lead to numerical failures as outlined in the next section. Also, as the sidelobes are scaled improperly, there will be a higher residual sidelobe level in the final map, and the restored image may need to be scaled slightly. If the spiked beam is used only to locate components in the "minor" iteration cycles of Clark's algorithm [24], but is not used for the subtraction in the "major" cycle, then these difficulties are alleviated. Cornwell [45] and Palagi [47] also discuss other methods of preventing the stripes by incorporating algorithms which directly suppress them. This may take the form of alternating the CLEAN iterations with a least squares model fit or a solution by the maximum entropy method, MEM (Gull: [43]). These approaches can be effective, but they spoil the attractive simplicity of the basic CLEAN technique.

B-2.4 Arithmetic Errors

The standard CLEAN algorithm repeatedly subtracts the dirty beam from the dirty map. Subtraction, when performed with limited precision, can lead to significant errors if it is repeated a large number of times. These numerical failures result from the truncation which occurs due to the differences in the magnitudes of the numbers and the limited precision of the arithmetic. For example, given a sidelobe level of 1% of the peak, and arithmetic done to five significant figures with a gain of 0.1, the least significant figure of the beam will be lost in each subtraction.
As the truncation is always in the same direction, after approximately 10^3 operations the accumulated error will be as large as the sidelobe. This is the danger in CLEAN of choosing a small loop gain, which increases both the total number of subtraction operations and the number done at each component location. These errors effectively add a "noise" to the residual map in direct proportion to the number of subtractions performed, and will have the greatest effect on a weak source in the vicinity of a strong one. Although these errors are cancelled somewhat when positive and negative sidelobes are combined in one spot, the overall effect is to add distortion to the residual map, which will cause the algorithm to terminate earlier than it should with low noise data, and to slightly bias the final map when the residuals are added.

B-2.5 The Modified Algorithm

The standard assumption, implicit in the CLEAN algorithm, is that the peak element in the residual map represents the location of a single component hidden in the image. For a map consisting mainly of isolated point sources this is a reasonable assumption, and the algorithm is effective. For a map with extended sources it is not reasonable, and the algorithm can be led astray. The "obvious" solution, therefore, is to estimate the area of the source surrounding the peak in the residual map and remove, not a single component, but a feature of corresponding area. Since most of the components in an extended source will be removed at once, the formation of stripes will be prevented. The hurdle to be overcome in order to implement this modified approach is to estimate the group of components surrounding the peak.
Figure B-2.3 shows a diagram of the algorithm that has been developed to perform the modified CLEAN operation. The extended component is chosen by simply including all those points in the residual map that rise above a contour set at some fraction of the peak level. This is called the "trim contour", T_c. Note that if the residual map includes a number of separated peaks of comparable brightness, then the trim contour will select several isolated "islands" of components. The components, selected from the dirty image in this way, must be corrected for the presence of the dirty beam before they can be added to the density file as components of the object. The dirty beam, due to its width, acts non-uniformly on the components of the object. A single isolated component will have its value multiplied by the peak of the beam (which for convenience can be normalized to one). A component that is part of a group, however, will have its value multiplied by the volume of the beam (which is greater than one, and is typically 4 - 5 for a normalized beam). The group of components selected by the trim contour must be scaled so that the value of the peak after convolution with the dirty beam is the value of the corresponding point in the residual map. This ensures that no component will exceed the values in the residual map during the subtraction. It is necessary to do the convolution of the components with the beam in order to determine the scale factor, and thus an additional Fourier transform pair must be performed.
These scaled points are then further scaled by the CLEAN loop gain and added to the components already accumulated in the density file (figure B-2.3). This density file is then convolved with the dirty beam to obtain the pattern to be subtracted from the dirty map. Note that this convolution will restore the scale factor of the beam volume removed in forming the density file components. The convolutions with the dirty beam are performed in the Fourier transform domain. As the transforms of the density file and the beam have complex results, a complex-multiply operation is necessary in the Fourier domain. The product is then inverse Fourier transformed and subtracted from the dirty map to yield the (new) residual map. For the next iteration, this residual is searched for the group of components surrounding the peak, and the estimation and subtraction operations are repeated. The numerical problems of repeated subtraction are diminished by always subtracting the complete current best estimate of the components from the original dirty map. This change is practical because the time required to perform the FFT operation is independent of the number of components in the image, and thus no penalty is incurred by transforming the complete set of components. For convenience in the computer programming, the subtraction has been done in the image domain from the original dirty map. It could equally well be done in the Fourier domain from the gridded visibility data.

The trim contour method to estimate the group of components is effective for the simple reason that, given an extended region and a narrow beam, the shape of the convolution of the two resembles the region more than the beam.
The beam is smeared by the extended area, and thus the central area of the actual extended feature in the dirty map is a reasonable (first) estimate for the shape of the group of components:

B * E ≈ E'  in the region of E'     (B.10)

where B is the narrow beam, E is the extended object, and E' is the central area of E. Any initial errors in the component amplitudes are removed in later iterations, in the same way that the standard CLEAN process corrects for incorrect component amplitudes. Note also that if the peak in the residual map is not in an extended region, then the trim contour will enclose only a small area, and the effect, after the convolution with the beam, will be to subtract only the dirty beam from the dirty map. Thus, point sources are still treated correctly as in the standard CLEAN. As the beam sidelobes remain correctly scaled, the final residual map will not contain any extra sidelobe components.

Some difficulty arises due to the scaling (by the volume of the beam) of the components selected with the trim contour. Because they are convolved with the beam, narrow sources receive a proportionately smaller gain than extended ones. This causes narrow sources to be removed quite slowly. The action of the program is thus to remove first the brightest extended areas, leaving only the point sources and the lower extended areas. To compensate for this loss of gain, the program tests the brightest point in the residual map for the presence of a narrow source, and adjusts the gain at that point if one is found. The test is done by simply checking if the peak in the residual map is coincident with the peak of the convolution of the components with the beam.
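The modified procedure of figure B-2.3 might be sketched as follows. This is a minimal sketch: the narrow-source gain adjustment just described is omitted, and `beam_uv`, the transfer function (Fourier transform of the dirty beam), is assumed to be supplied on the same grid as the map.

```python
import numpy as np

def trim_clean(dirty, beam_uv, gain=0.3, trim=0.3, niter=50, threshold=0.0):
    """Modified (trim-contour) CLEAN: take all residual points above
    trim * peak as one extended component, scale it so its peak after
    convolution with the dirty beam matches the residual peak, scale
    by the loop gain, accumulate in the density file C, and subtract
    the full current estimate from the original dirty map."""
    C = np.zeros_like(dirty)

    def conv_beam(x):
        # convolution with the dirty beam, done in the Fourier domain
        return np.fft.ifft2(np.fft.fft2(x) * beam_uv).real

    residual = dirty.copy()
    for _ in range(niter):
        peak = residual.max()
        if peak <= threshold:
            break
        patch = np.where(residual >= trim * peak, residual, 0.0)
        smeared = conv_beam(patch)
        patch *= peak / smeared.max()   # match the residual peak
        C += gain * patch               # add to the density file
        residual = dirty - conv_beam(C)
    return C, residual
```

Because a whole island of components is removed per iteration, an extended source converges in a number of iterations set by the gain rather than by the source area, which is the speed advantage quantified below.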
This modified algorithm has some resemblance to the Clark [24] version of the CLEAN algorithm. The Clark algorithm improves the speed of the standard algorithm by using only a portion of the beam to find the component locations. The components in the Clark algorithm are selected by searching the residual for the peak, and subtracting away only the central area of the beam to yield an approximate residual map, which is then searched for the next component. The small section of the beam used to form the approximate residual map is called the "beam patch". Using this small patch of the beam makes the subtraction operation much faster than translating and subtracting the complete beam pattern. This is especially significant if the beam and map files are too large to be completely fitted into the computer memory at one time. These approximate subtractions ("minor cycles") are repeated a fixed number of times, and then the selected group of components is convolved (via an FFT) with the complete dirty beam and subtracted exactly from the previous exact residual in a "major cycle", to give the new exact residual for the next set of minor cycles. However, as the "beam patch" includes the major sidelobes, the algorithm will still be led astray when working on extended features. As the beam is still being removed incrementally, numerical problems can still occur, especially if a very large number of iterations is performed. The modified algorithm has the equivalent effect of working with a beam patch one element in size. The trim contour is a more efficient process to select the components.
Note also that the new algorithm has fewer "control" parameters to be set by the user, as it is not necessary to set either the beam patch size or the ratio of major to minor cycles. Because many components are removed in each iteration of the new algorithm, the number of operations is reduced significantly when compared with the standard algorithm. For the standard CLEAN, the number of operations (subtractions) N is approximately:

N_s ≈ 2 Q_s M     (B.11)

where Q_s is the number of iterations and M is the number of pixels in the image (each iteration requires a search pass and a subtraction pass over the map). For the modified algorithm, the number of operations is approximately:

N_m ≈ Q_m M ( 4 log(M)/2 + 4 )     (B.12)

where Q_m is the number of iterations. (There are 4 M log(M)/2 operations for the 4 FFTs, a figure that depends on details of the FFT implementation, plus 4M operations for the addition to the density file, the complex multiply, the trimming, and the subtraction.) For the Clark algorithm, the number of operations is approximately:

N_c ≈ Q_c ( 2 M log(M)/2 + 2M + n(1+a) )     (B.13)

where Q_c is the number of major cycles, n is the number of minor cycles, and a is the area of the beam patch (in pixels). To be able to relate the numbers of operations N, the iteration counts Q_s, Q_c, and Q_m must be determined in terms of the image, the clean gain, and the trim contour.

In the basic CLEAN subtraction process (for an individual component), the residual peak, P_{n+1}, is:

P_{n+1} = ( 1 - g ) P_n     (B.14)

and after h iterations the fraction, f, of the original peak remaining is:

f = ( 1 - g )^h     (B.15)

The number h is the number of subtractions needed to reduce one peak in the dirty map to the residual limit f.
If the dirty map has sources with area B expressed in units of beams, then h must be summed over all the sources in the map, taking into account the different peak amplitude of each. For each peak:

    h_i = log( f P_max / P_i ) / log( 1 - g )                   (B.16)

where P_max is the initial maximum peak value and P_i is the initial peak of the i-th component. Thus for Q_s:

    Q_s ~ SUM_{i=1..B} h_i
        = [ B log( f P_max ) - SUM_{i=1..B} log( P_i ) ] / log( 1 - g )   (B.17)

where B is the area of the source expressed in units of beam areas. However, the geometric mean, G, is defined as:

    log( G ) = (1/B) log( PROD_{i=1..B} P_i ) = (1/B) SUM_{i=1..B} log( P_i )   (B.18)

and hence Q_s can be defined in terms of the source area B and the geometric mean of the peaks to be CLEANed:

    Q_s ~ B log( f P_max / G ) / log( 1 - g ) .                 (B.19)

This relation is valid under the condition f < G / P_max, which is the situation that the residual limit f is below the geometric mean of the peaks to be CLEANed. (This is the usual situation.) For the Clark algorithm, Q_c is given by Q_s / n.

For the modified algorithm, to reach a fraction f of the initial peak, h iterations are needed as per equation (B.15), where g is reduced by the effect of the beam volume to become g_v. As the trim contour encloses as much of the source as is needed to reach the trim level, the number of iterations to reach a fraction of the initial peak is independent of the source area, and thus for Q_m:

    Q_m ~ log( f ) / log( 1 - g_v ) .                           (B.20)

This is true for reasonable values of the trim contour T_c and gain g, and is subject to the condition g < 1 - T_c. If the gain is increased above this limit, then the peak will be depressed below the trim contour level during the subtraction, the residual level will be controlled by T_c, and Q_m will be approximately:

    Q_m ~ log( f ) / log( T_c ) .                               (B.21)

This, however, is not a good region for operation, as it can force negative areas in the residual and lead the algorithm astray. Thus the approximate ratio of the iteration counts is:

    Q_m / Q_c ~ n log( f ) / [ B log( f P_max / G ) ] ,         (B.22)

which indicates that the new algorithm will be significantly faster for sources covering a large fraction of the image. Hence the ratio of the iteration counts is mainly related to the area of the source to be CLEANed: the new algorithm will be faster than the old in direct proportion to the area of the source, and will be equivalent for a single point source if B/n = 1.

The new algorithm requires the trim contour level to be set as a fraction of the beam peak (between 0 and 1). Like the loop gain in the CLEAN algorithm, the results are not particularly sensitive to the parameter setting. It is clearly desirable to choose a trim level as low as possible, as this will increase the number of points in the selected region and allow for large extended areas. Clearly also, a very high trim contour will give a very small group of components; the new algorithm will then be equivalent to the standard CLEAN, and no advantage will be gained. The trouble with choosing too low a trim contour is that sidelobe features may be incorrectly included as components and the deconvolution will be led astray.
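One way to choose the trim level automatically from the dirty beam is to take the largest off-peak value of the normalized beam plus a small margin. A minimal sketch; the function name and the 0.05 margin are illustrative.

```python
import numpy as np

def trim_contour(dirty_beam, margin=0.05):
    """Trim level slightly above the largest off-peak value of the
    dirty beam; `margin` plays the role of the small offset."""
    b = dirty_beam / dirty_beam.max()               # normalise the peak to 1
    flat = b.copy()
    flat[np.unravel_index(np.argmax(flat), flat.shape)] = -np.inf
    sidelobe = flat.max()   # NB: if the beam is oversampled, this value
                            # may still lie on the main lobe
    return float(min(max(sidelobe + margin, 0.0), 1.0))
```

For the test beam used later in this appendix, whose largest off-peak value is 56% of the peak, this rule would give a trim contour near the 0.55 actually used.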
As in the standard CLEAN, sidelobe locations incorrectly interpreted as true components are almost always irrecoverable; the CLEAN is usually terminated at the noise level before such errors can be corrected. Very small errors, however, can still be corrected by the standard technique of allowing the use of negative components. In the new algorithm, any negative peaks that develop in the residual map are treated in the same manner as the positive peaks. That is, the group of components surrounding the negative peak (selected by the trim contour level) is added to the density file. (This really means they are subtracted, as they are negative numbers.)

The trim contour, T_c, should be set at a level slightly (w) above the largest sidelobe in the dirty beam:

    T_c = S_l / P_b + w                             (B.23)

where S_l is the beam sidelobe level, P_b is the beam peak level, and 0 < T_c < 1. Note that this "sidelobe" is the largest value of the beam other than the central peak, and if the beam is oversampled, this point may still be on the main lobe. This setting level will allow narrow sources to be deconvolved correctly, and will also allow extended areas to be handled without stripes. The value of the trim contour can be determined automatically from the dirty beam, and thus the setting is "invisible" to the user. Note also that for extended sources the gain can be related to the trim contour so that g < 1 - T_c, and it too could be determined automatically.

B-2.6 Example Images

Although a number of images were tested with the new algorithm, only two examples are shown here. Figures B-2.4 and B-2.5 show the results of processing by several CLEAN techniques. The dirty beam is shown in figure B-2.4.
This corresponds to a roughly circular sampling area with a diameter about two-thirds the extent of the aperture plane. A sector 40 degrees in extent has been zeroed to simulate an unsampled period of observation. The aperture sampling pattern is also shown in figure B-2.4. The central, or DC, term is included, and the aperture has not been tapered. The figure also shows cross-sections of the beam through the center. The largest positive point outside the peak is at 56%, and the largest negative sidelobe is 36% of the peak.

Figure B-2.5A shows the test object distribution, O. This image is 64x64 pixels in extent, the lowest contour level is about 1.0% of the highest contour, and all of the object points are positive. The dirty map is shown in figure B-2.5B. Notice that, while there is considerable confusion due to the beam sidelobes, the extended areas retain much of their general outline, and hence the trim contour is effective in estimating their size. Figure B-2.5E shows the density file resulting from the standard CLEAN; the separated components show up quite clearly in the extended regions. As this is the density file, these would normally be smoothed somewhat in the formation of the restored image by the convolution with the clean beam. Figure B-2.5C shows the density file from the modified CLEAN algorithm, and no separated components are visible. This image was produced with a trim contour of 0.55 and 17 iterations. For reference, figure B-2.5D illustrates the density file from a CLEAN using a "spiked" beam; this image was produced with a "spike" of 15% and 2000 iterations.
Neither result shows stripes, and the images are comparable, although the modified algorithm result has a smaller RMS error than the spiked result when compared with the original. This image has no (added) noise.

Figure B-2.6 shows the results of testing with both a larger image and added noise. Figure B-2.6A shows the object distribution, which includes both point sources and an extended area. This object is 256x256 pixels in size; however, only the central 64x64 region containing object components is illustrated. Also shown is the dirty map formed when the object was convolved with a beam (consisting of a sinc function). Random noise with a standard deviation of 1% of the peak was added to this dirty map. As the peak value was 1500, this is 15; however, individual noise components ranged in value from -70 to +80. The lowest contour shown is 3% of peak. Figure B-2.6 shows the density file results from both the standard and the new CLEAN algorithms. Although the standard algorithm has correctly reproduced the point sources, the extended area is very poorly reproduced.

B-2.7 Conclusions

The revised CLEAN algorithm has proved effective at preventing the formation of stripes and reducing numerical errors during the deconvolution of extended images. The new algorithm also gives good performance with narrow sources. The process is based on the concept of removing components in groups chosen by a trim contour technique. Because the algorithm is stable for extended sources, and components are identified and removed in groups, there is a significant improvement in speed. The new algorithm also has fewer "control" parameters to be set by the user.
The modified algorithm will prove especially useful to those who process large images with considerable areas of extended brightness.

It is important to realize that a deconvolution process such as CLEAN is actually an interpolation in the Fourier domain. In order to remove the effects of the beam sidelobes, it is necessary to "fill in" the unsampled areas of the aperture. This can only be done if there is additional a-priori information available about the object. In the case of CLEAN, this is the constraint that the object consists of a limited number of compact areas of brightness. The standard CLEAN algorithm contains the implicit assumption that each peak in the residual is the location of a single component in the object. This is equivalent to a directive to the algorithm to produce the narrowest possible features in the image; the result is the generation of stripes. The modified algorithm allows for variable-area features at each peak in the residual, and thus the algorithm is not constrained to produce only narrow features and the stripes are avoided. The modified algorithm is able to "fill in" the unsampled areas of the aperture with a bit more attention paid to the data than to the assumptions buried in the algorithm.
[Figure B-2.1: Diagram showing the "standard" CLEAN algorithm — search the RESIDUAL file for the peak; test for exit; translate the BEAM center to the location of the peak; subtract GAIN x BEAM from the RESIDUAL; enter the component location and amplitude into the DENSITY file.]

[Figure B-2.2: Impression of ripples in an extended source.]

[Figure B-2.3: Diagram showing the "modified" CLEAN algorithm — Fourier transform the BEAM; search the RESIDUAL file for the peak; test for exit; "component selection": set to zero the points in the RESIDUAL less than peak x trim contour, scale by beam volume and gain, add to the DENSITY file; "convolution": FFT the DENSITY file, multiply by the BEAM transform, inverse FFT to give the PRODUCT; "subtraction": subtract the PRODUCT from the DIRTY map to give the new RESIDUAL file.]

[Figure B-2.4: Test beam pattern (A), beam sections N-S (B) and E-W (C), and aperture sampling (D).]

[Figure B-2.5: Test density file results.]

[Figure B-2.6: Test density file results with noise — object distribution (A), dirty map with 1% RMS noise added (B), modified CLEAN (C), and standard CLEAN (D).]

B-3 Images and Phase Errors

The main objective of the LBI imaging algorithms is to compensate for the effects of phase errors in the complex data collected in the aperture plane. This section is provided, therefore, to illustrate the effects of phase errors on images. The errors act to spread the compact features of the object into a confused jumble. Small phase errors result in moderate spreading, and large errors completely destroy the image. It is also shown that the size (area) of the smallest feature in the object controls the size of the "speckles" in the distorted image.
B-3.1 Examples of Images (Almost) without Phase

To generate these illustrations, a test "sky" was assembled and then Fourier transformed to obtain the equivalent aperture or uv plane data. This aperture plane data is (of course) a set of complex numbers which may be expressed in real-imaginary (Cartesian) or magnitude-phase (polar) form. Phase errors were added to the aperture plane data in polar form, and the corrupted data was inverse Fourier transformed to obtain the disturbed image. Amplitudes of the aperture signal were not disturbed for these illustrations.

The aperture sampling was only quasi-simulated in these examples, as the aperture plane was sampled according to an artificial set of baseline locations. A set of "linear arcs" was used, in which the sampling tracks were straight and adjusted to correspond to grid lines in the aperture plane. The effects of arc-grid conversion are avoided by this procedure, although the aperture sampling is somewhat artificial. These linear tracks are similar to those that would be obtained from a practical array with wide latitude coverage observing a source at 0 degrees declination. Such linear sampling tracks generate a beam with a linear pattern of sidelobes, and this accounts for the linear features seen in the illustrations used in this study.

Figure B-3.1 shows an example. The sources in this case consist of 5 Gaussian shapes: three large ones, one medium, and one quite small. The smallest is 1% of the amplitude of the largest, and the medium one is 10%. The smallest source is the "bump" on the edge of the upper left peak. Figure B-3.2 shows the undistorted image. This is what would be recovered if there were no phase errors in the data.
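The simulation procedure just described — transform a test sky, perturb the aperture phases with per-antenna errors, transform back — can be sketched as follows. The pixel-to-baseline assignment here is a toy stand-in for the linear-arc sampling of the thesis; all names and the seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def distort_image(sky, n_ant=8, max_err_deg=36.0):
    """Corrupt an image with antenna-based phase errors.

    Each uv cell is assigned (artificially) to a baseline (i, j) and
    picks up the phase error err[i] - err[j]; amplitudes are untouched.
    """
    vis = np.fft.fft2(sky)
    err = rng.uniform(0.0, np.deg2rad(max_err_deg), n_ant)  # one error per antenna
    rows = np.arange(vis.shape[0]) % n_ant                  # toy baseline assignment
    cols = np.arange(vis.shape[1]) % n_ant
    phase = err[rows][:, None] - err[cols][None, :]
    # .real keeps the toy simple: the corrupted spectrum is no longer
    # exactly Hermitian, so a small imaginary part is discarded.
    return np.fft.ifft2(vis * np.exp(1j * phase)).real
```

Note that the DC term carries no error (err[0] - err[0] = 0), so the total flux is preserved even though the structure is scrambled.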
In effect this is the true sky convolved with the telescope "beam". This image is often referred to as the "dirty map". In this case, as the sampling pattern is linear tracks in the uv plane, the beam has a linear set of sidelobes. For this example, the "holes" in the aperture are tracks 0, 26, 31, and 32, where 0 is the track through the center of the uv plane, and there are 32 grid lines (tracks) in the half-plane. This is a "linear" version of an 8-antenna, 28-baseline array. A practical synthesis telescope has elliptical sampling tracks and generates elliptical beam sidelobes.

B-3.2 Phase Distortion - Spreading Out -

Figures B-3.3 - B-3.6 show the effects of increasing the sizes of the phase errors added to the true values. The phase errors were obtained by calculating a set of uniformly distributed random numbers (between 0 and 1). These numbers were then scaled to give an error set whose maximum amplitude was a fraction of 360 degrees. These phase errors were then added to the aperture plane data in a manner that corresponded to errors at each individual antenna.

Figure B-3.3 shows the effect of a constant phase error at each antenna. In this example, a random phase error of maximum amplitude 36 degrees was added to all the data for each antenna. The distortion effect is to break up the true source pattern into a number of smaller sources in new locations. The apparent linear pattern of the distortion is due to the linear sampling of the aperture, which makes the phase errors effectively constant in one dimension. Figure B-3.4 shows the effect of 36 degree phase errors. In this example the random phase errors were independent for each antenna sampling point. Note that the largest three sources are detectable, but the smaller ones are indistinguishable from the errors.
Figures B-3.5 and B-3.6 show the effects of 180 degree and 360 degree phase errors. Typical LBI data has phase errors of 360 degrees or larger, as the atmosphere introduces phase delays of many wavelengths. In these images, the sources are completely lost in the scramble.

As a practical illustration, figures B-3.9 - B-3.14 show an astronomical map of the radio source CTB1 (Landecker, Roger & Dewdney [12]). Figure B-3.9 is the true object. Figure B-3.10 is the dirty map, and figures B-3.12, 13, and 14 show the effects of 18, 36, and 72 degree phase errors. The more complicated image very quickly becomes lost in the tangle.

The effect of the phase errors is to "spread out" the image. The true image has most of the signal concentrated in the sources, while the phase-disturbed image has signal (noise?) spread into bumps all over the image. This suggests that one possible method of image recovery could be to "correct" the phases until an image with the signal most concentrated in sources is obtained. Muller & Buffington [15], Brown [3], O'Meara [14], and section C-2 of this thesis report some experiments to test this possibility.

B-3.3 Phase Distortion - Zero-Phase Map -

The image from data where the phase in the aperture is constant is shown in figure B-3.7. This generates a map that is (often) referred to as the zero-phase image. A similar image can also be obtained in the map domain by correlating the image with itself. This "auto-correlation" image is similar in form to the zero-phase image; however, it is equivalent to the Fourier transform of the amplitudes squared with zero phases. These images are symmetric about the origin.
The pattern of peaks is mainly made up of the true image repeated with each major component centered at the origin. There are 4(3)+1 = N(N-1)+1 = 13 peaks in this example, which has N = 4 distinct sources in the true image. The true image is shown again in the adjacent figure B-3.8 for reference. Note that the smallest (1%) source has been "lost" in the zero-phase image. In this case of separated (distinct) sources, it is possible to conceive of an algorithm for searching through the set of apparent sources to find the sub-set which is the true image. Baldwin & Warner [2] describe a possible procedure for this. While it is possible to do by "hand", it is difficult to automate with a computer due to various practical problems of distinguishing sources and peaks. Figures B-3.10 and B-3.11 show a more typical object (again CTB1) and its zero-phase image. In this case the sources are not distinct and the map is not easily untangled.

Note that the amplitudes in the zero-phase map are distorted from the correct values, with the center point being extra large and the more distant points being extra small. The zero-phase image, by virtue of its symmetry, also covers an area which is approximately twice that of the true map. In this sense the zero-phase image is also more "spread out". It is of interest to look at the zero-phase image because it can be generated from the amplitude data alone, and any image recovered must be consistent with it. Of course, if the object does consist of a small number of separated point sources, then the zero-phase image can be used to recover the correct image following Baldwin & Warner's procedure.

In the zero-phase map of the preceding section, the phases are arbitrarily set to zero.
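The zero-phase map can be generated directly from the amplitude data alone, as the text notes. A minimal sketch; the function name is illustrative.

```python
import numpy as np

def zero_phase_image(sky):
    """Map obtained when all aperture phases are set to zero.  Using the
    amplitudes squared instead would give the autocorrelation map."""
    amplitudes = np.abs(np.fft.fft2(sky))
    return np.fft.ifft2(amplitudes).real
```

Whatever the input sky, the result is symmetric about the origin, because the amplitude spectrum of a real image is itself symmetric.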
The question that immediately arises is: what happens if constant phases other than zero are used? There are two answers to this question. If the image is constrained to be always REAL, then the constant phase added in the UV plane must be anti-symmetric about the origin; that is, the phases in one half plane are the negative of those in the other half plane. If such a phase constant is used, then there is NO CHANGE in the image. Thus the zero-phase image can be created by any one of an infinite number of phase constants. If the image is not constrained to be real, and the same phase constant is used in both half planes, then the effect is to "rotate" the image into or between the real and imaginary planes.

B-3.4 Notes on Phase Constants Other Than ZERO

A constant of   0 gives a Real image,
               90 gives an Imaginary image,
              180 gives a negative Real image,
              270 gives a negative Imaginary image, and
              360 gives a Real image.

Intermediate values of phase give mixed real-imaginary images. These comments also apply to phase constants added to image phase data in the aperture plane. A constant added anti-symmetrically to the aperture does not affect the image. A constant added equally over both half-planes will rotate the "image" into the real-imaginary domain. Note also that adding a phase which is a sloped plane will have the effect of moving the map in the map domain; the distance moved depends on the slope of the plane. This is equivalent to the "shift theorem" for Fourier transforms. If the surface is not flat, then the effect will be a translation which is different for different components of the image, and a scrambled image will result. This is the effect created by the atmosphere.

B-3.5 Speckle Patterns

Figures B-3.15 - B-3.18 show the so-called "speckle effect". In this case the true sky and the distorted image are shown for two possible objects.
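The rotation and shift effects described above are easy to verify numerically. A sketch with an illustrative two-source sky:

```python
import numpy as np

# Illustrative two-source sky.
sky = np.zeros((16, 16)); sky[3, 5] = 2.0; sky[9, 2] = 1.0
F = np.fft.fft2(sky)

# The same 90-degree constant over the whole uv plane rotates the image
# into the imaginary plane: the real part vanishes, the imaginary part
# is the original sky.
rotated = np.fft.ifft2(F * np.exp(1j * np.pi / 2))
assert np.allclose(rotated.real, 0.0, atol=1e-12)
assert np.allclose(rotated.imag, sky)

# A sloped phase plane moves the map (the Fourier shift theorem):
# a slope matched to 3 pixels shifts the sky by 3 rows.
ky = np.fft.fftfreq(16)[:, None]
shifted = np.fft.ifft2(F * np.exp(-2j * np.pi * ky * 3)).real
assert np.allclose(shifted, np.roll(sky, 3, axis=0))
```

A non-planar phase surface would shift each image component by a different amount, which is exactly the atmospheric scrambling described in the text.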
Both distorted images, which have 30% phase noise, have broken up into a set of spots spread all over the map. This is called a "speckle pattern" by analogy with the spotted image observed under LASER illumination in optics. Note, however, that the mechanism for forming the spotted pattern in the radio case is not quite the same as in the LASER case. These figures show another interesting point. The two objects have the same pattern, but the Gaussian sources are of quite different widths. The result is a speckle pattern with smaller spots for the image of the smaller sources. For unresolved sources, the ultimate size of the spots is controlled by the size of the aperture. For resolved sources, however, the speckles tend to be the size of the objects. Cady & Bates [5] have reported a procedure for recovering optical images based on this phenomenon.

Summary of Effects of Phase Distortion

1 Amplitudes determine the strength of the image components. Phase determines where the components are in the image.
2 Phase errors have the effect of "spreading" the signal all over the map.
3 Zero or other constant phase leads to a symmetric map called the zero-phase image, which is similar to the autocorrelation map.
4 Constant phase added to UV plane data has no effect on the map. Only relative phases determine the image pattern.

[Figure B-3.1: Sky object, 64x64 pixels, 5 Gaussian sources.]
[Figure B-3.2: Undistorted image (sky convolved with beam) - the "dirty map".]
[Figure B-3.3: Distorted image with a constant phase error at each antenna.]
[Figure B-3.4: Distorted image with 36° phase errors.]
[Figure B-3.5: Distorted image with 180° phase errors.]
[Figure B-3.6: Distorted image with 360° phase errors.]
[Figure B-3.7: Zero-phase or autocorrelation map.]
[Figure B-3.8: Undistorted image (same as B-3.2, shown for reference to B-3.7).]
[Figure B-3.9: Sky object, source CTB1, 64x64 pixels.]
[Figures B-3.10 and B-3.11: CTB1 zero-phase (autocorrelation) map and undistorted image (CTB1 "dirty map").]
[Figure B-3.12: CTB1 distorted image with 18° phase errors.]
[Figures B-3.15 - B-3.18: Sky with small-width sources and its speckle image with 108° phase errors (small speckles); sky with wide sources and its speckle image with 108° phase errors (larger speckles).]

B-4 Closure Phase Constraint

Throughout the literature on astronomical LBI imaging, the term "closure phase" frequently appears. This is a phenomenon of the data collection process which puts a useful constraint on the imaging process. This section will review closure phase, show the constraint equations, and mention some of the implications for LBI phase recovery. The constraint relates the phases on baselines between three or more antennas, and although it does not provide any actual phase values, it does provide a germ of useful information. Although the constraint equations are remarkably simple, as the rest of the study shows, their use as part of imaging techniques is quite complicated.

Although closure phase is now widely used by LBI radio astronomers, it was first introduced by Jennison [17] as a method of correcting for receiver errors on short baseline interferometers. Rogstad [19] later adapted the process for optical interferometers, and Rogers et al.
[18] reported some of the first results of radio observations using the technique. Since then the method has become more widespread, with the method of Readhead and Wilkinson [20] being among the most popular.

Figure B-4.1 illustrates the basic closure phase principle for a small array. The interferometer includes three antennas (A, B, C), each of which receives celestial signals which are corrupted by atmospheric phase errors E_w and receiver phase errors E_r and E_m. The corrupted signals are combined in three correlators (multipliers) M_1, M_2, and M_3 to yield the three observed phases P_ac, P_ab, and P_cb. Lines 1, 2, and 3 of the figure indicate that these three phases reduce to the desired true visibility phase W plus the difference in phase errors, E, attributable to the two antennas of the baseline. Line four shows that by forming the sum of the three observed phases, the error terms cancel out and the sum is exactly equal to the sum of the true visibility phases. This is a consequence of the errors entering the system physically at each antenna, and of the additive nature of the phases in the correlator. The significance is that the sum of the measured values is error-free, and thus any "correct" image must correspond to data which agrees with this sum.

If there were just three antennas in the array, this relationship would not be especially useful, as it would not provide very much additional information. One closure sum is obtained for each possible trio of antennas in an array, however, and as the number of antennas increases, the set of possible trios rapidly increases. For an 8-antenna array, for example, there are 21 = (N-1)(N-2)/2 independent closure sums.
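The cancellation in figure B-4.1 can be checked numerically. A toy sketch, with illustrative visibility phases and random antenna errors:

```python
import numpy as np

rng = np.random.default_rng(1)

# True visibility phases on the three baselines of figure B-4.1
# (illustrative values), plus one unknown error per antenna.
W = {('a', 'b'): 0.7, ('a', 'c'): -1.1, ('c', 'b'): 0.4}
E = dict(zip('abc', rng.uniform(-np.pi, np.pi, 3)))

# Each observed phase is the true phase plus the antenna error difference.
P = {(i, j): W[(i, j)] + E[i] - E[j] for (i, j) in W}

# Closure sum (line 4 of the figure): the antenna errors cancel exactly,
# leaving only the true visibility phases.
closure = P[('a', 'b')] - P[('a', 'c')] - P[('c', 'b')]
```

However large the errors E are made, the closure sum equals W_ab - W_ac - W_cb, which is the constraint exploited by the recovery algorithms.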
There are (only) 28 = N(N-1)/2 unknown visibility phases (for 28 possible baselines) in total. There is thus a total of 21 (correct) equations with 28 unknowns, and things are 21/28 = 75% of the way to a complete solution. This is another way of saying that the number of independent errors in the problem is N-1 and not N(N-1)/2.

There are three significant points to note. The first concerns the possibility of redundant baselines. If the antennas are arranged such that some of the baselines are of equivalent length, then the total number of unknowns will be reduced, and an exact solution may be possible. For example, if an 8-antenna system were designed with only 21 distinct baselines, then an exact solution would be possible in spite of atmospheric or receiver phase errors. This is not often done in practice because the phase can be recovered by the algorithms discussed in this thesis, and it seems better to measure the largest number of independent data points in order to obtain the most information about the object. The University of Tokyo Nobeyama 17 GHz solar radio observatory [36] does use an array designed with redundant spacings to effect a self-calibration. In this case, part of the design objective is to achieve a high-speed (real-time) imaging system. Note in passing that the closure phase sums are only valid for signals received at the same time at each antenna, and care must be taken in the data processing to observe this restriction. "Closure phase" techniques are thus not applicable to instruments which synthesize the aperture from many sequential observing periods.

The second point to note is that the closure sums are correct, and any recovery algorithm must constrain its results to agree with these sums.
Some methods, such as that of Readhead and Wilkinson [20], do this explicitly, while others, such as the sharpness method, have it built into the processing.

The third point to note is that while the number of phase errors increases with the number of antennas as N-1, the number of correlations increases as N(N-1)/2. As the number of antennas increases, the number of errors very rapidly becomes a small fraction of the total number of correlations. The closure phase rules can thus be expected to become more useful for arrays with many antennas.

Readhead et al. [21] report the additional "closure amplitude" relations for cases where visibility amplitudes may be poorly calibrated. In this case, with an array of four or more antennas, the gain errors in the antennas and the receivers can be eliminated by taking the ratio of amplitudes from groups of four antennas. Figure B-4.2 illustrates the closure amplitude and the closure phase relations for an array with five antennas.

[Figure B-4.1: Phase closure relation for a three-antenna array.

Each correlator phase:
    P_ab = W_ab + E_wa + E_ra + E_ma - E_wb - E_rb - E_mb = W_ab + E_a - E_b   (1)
    P_ac = W_ac + E_wa + E_ra + E_ma - E_wc - E_rc - E_mc = W_ac + E_a - E_c   (2)
    P_cb = W_cb + E_wc + E_rc + E_mc - E_wb - E_rb - E_mb = W_cb + E_c - E_b   (3)

Sum the three measurements:
    P_ab - P_ac - P_cb = W_ab + E_a - E_b - W_ac - E_a + E_c - W_cb - E_c + E_b
    P_ab - P_ac - P_cb = W_ab - W_ac - W_cb                                     (4) ]

[Figure B-4.2: Phase and amplitude closure for a non-redundant array of 5 antennas A, B, C, D, E.

The following 10 equations give the measured signal E as a function of the correlator gain K, antenna gain G, phase error phi, and desired signal V:

    E_ij = K_ij G_i G_j* EXP{ 2 pi i ( phi_i - phi_j ) } V_ij ,
    ij = ab, ac, ad, ae, bc, bd, be, cd, ce, de.

If these signals are combined in groups of four, the unknown gains divide out. Amplitude closure (for equal correlator gains):

    A_abcd = |E_ab E_cd| / |E_ac E_bd| = |V_ab V_cd| / |V_ac V_bd|

and similarly for A_abce, A_abde, A_acde, and A_bcde. The phase measurements may be written for this array as

    P_ij = W_ij + phi_i - phi_j

and combined in groups of three to cancel the error terms. Closure phases:

    X_1 = P_bc + P_ab - P_ac = W_bc + W_ab - W_ac
    X_2 = P_bd + P_ab - P_ad = W_bd + W_ab - W_ad
    X_3 = P_be + P_ab - P_ae = W_be + W_ab - W_ae
    ...
    X_6 = P_de + P_ad - P_ae = W_de + W_ad - W_ae ]

C Speckle Processing

Recently, advances in photography and in signal processing with digital computers have combined to allow optical observers to overcome atmospheric distortion, and in some cases to obtain instrument-resolution-limited images. The processing techniques are collectively referred to as "speckle interferometry".
As the optical and radio distortion problems are quite similar in nature (except for scale!), it is worthwhile to look at the speckle processing technique and see if it can be used for radio long baseline interferometry (LBI) image recovery. The basic speckle process is to compensate for errors in the data by "averaging" (in a special way) a number of distorted images. This is not directly applicable to LBI imaging, as there is only a single distorted radio image. However, the radio data is oversampled in some areas, and it seemed plausible that this would be sufficient for the speckle averaging. This section will first review, in tutorial form, the speckle method used in optical observations. This is provided as background for readers not working in the field. The last section will describe the experiments in which the averaging techniques were adapted and tried (unsuccessfully) on simulated LBI data. The contribution contained in this section is the modification of a technique from an outside field, the testing for the LBI imaging problem, and the conclusion that the technique is not sufficiently powerful.

C-1 Atmospheric Distortion

The atmosphere disturbs the imaging process in two ways. The effects are primarily in the form of "phase error" and they vary randomly with time. If the light signal from the object at the aperture plane is thought of as a stream of complex numbers in magnitude-phase or "polar" form, then the atmosphere disturbs the phase term and leaves the amplitude term unaffected. The process of image forming by a lens or a mirror is a weighted summation of these complex numbers. This summation is equivalent to a Fourier transform operation. With an undisturbed signal and a good lens, a proper image can be formed.
This is illustrated in figure C-1a. If the aperture phase is disturbed, however, the signals are combined by the lens incorrectly, and a distorted image is formed (see figure C-1b). The effect of the atmosphere on the aperture data can be visualized as a group of random "cells" in front of the instrument. These cells vary with time and act as a random grating to distort the image into an unintelligible pattern. The resulting "image" is a confusing assortment of light and dark patches referred to as a "speckle image". For optical work, the atmospheric cells are typically of the order of a few centimeters in size, they create phase delays of many wavelengths, and they change their location and phase delay in a time period of a few tenths of a second. As the cells change their parameters, the interference pattern and the speckle image alter and move. If the speckle image is observed over a period of time (such as by integrating it for a few seconds' exposure of a film), then the fine structure of the speckle pattern blends away to give a blurred, atmosphere-resolution limited image. In such a long exposure photograph, which extends over several periods of atmospheric fluctuation, the large random phase errors combine to destroy the detailed phase information of the object. If a short exposure photograph is made in less time than it takes for the atmospheric cells to change, then the detailed phase information from the object is not lost, and it is captured in a confused form in the speckle image. The signal processing terminology is that the object distribution has been convolved with the atmospheric distribution and the instrument response to give the speckle image.
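The convolution picture can be illustrated with a small one-dimensional sketch (Python/numpy; the cell width, exposure count, and point-source object are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64

# Toy one-dimensional "sky": a single unresolved point source.
obj = np.zeros(n)
obj[n // 2] = 1.0
aperture = np.fft.fft(obj)  # undisturbed complex signal at the aperture plane

def short_exposure():
    """One speckle image: the atmosphere adds a random phase screen made of
    cells (constant phase within each 8-sample cell) and leaves the
    amplitude term untouched."""
    cells = rng.uniform(-np.pi, np.pi, size=n // 8)
    screen = np.repeat(cells, 8)
    return np.abs(np.fft.ifft(aperture * np.exp(1j * screen))) ** 2

clean = np.abs(np.fft.ifft(aperture)) ** 2   # instrument-limited image
speckle = short_exposure()                   # one frozen speckle pattern
long_exp = np.mean([short_exposure() for _ in range(200)], axis=0)

# The screen redistributes light but absorbs none of it, so every short
# exposure carries the full energy; averaging many exposures (a long
# photograph) smears the point into a low, atmosphere-limited blur.
```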
All of the object information is in a short exposure speckle image, and it is "simply" a question of unraveling it from the atmospheric effects. Korff, Dryden, & Millar [10] provide a heuristic analysis to show that the speckle image does contain useful information to the resolution limit of the instrument. The basic speckle technique requires capturing short exposure images. In practical terms, obtaining these images requires an exposure time of less than one tenth of a second, and this seriously limits the range of astronomical objects that can be imaged: the objects must be bright enough to give an image with less than one tenth of a second exposure time. Faster films and electronic image intensifiers have now extended the brightness range to the area of interesting objects. Since the object remains constant while the atmosphere changes, the speckle processing technique involves capturing a number of speckle images. Each of these has the same object phase or "information signal" but a different disturbing phase or "error signal", and thus an "averaging" technique can be used to separate the image from the atmosphere. Clearly, simple averaging of the speckle patterns in the image domain will not give any better result than a long time exposure photograph.

C-2 Autocorrelation Imaging - Averaging Magnitudes

Among the earliest experimenters to work with the speckle image was Labeyrie [11]. He introduced the concept of imaging by signal processing in the "Fourier domain". The Fourier transform of an image is a set of complex numbers which are equivalent to the signal in the "aperture plane". This is referred to as the "u-v plane" by radio astronomers. Labeyrie's procedure was to average the squared magnitudes of the u-v plane signals for a series of speckle images.
The averaged magnitudes were then retransformed back to the image plane (assuming a zero-phase component) to obtain the image. This does not give a true picture of the sky, but instead the (symmetric) autocorrelation of the true image. For an image of a single star, this symmetry is not a problem. However, a double star will become a triple pattern, and in general, N sources will become N(N-1)+1 sources. The image produced in this way does, however, contain information to the resolution limit of the instrument, and it is useful for determining stellar diameters and separations, if not for detailed imaging. A diagram of the Labeyrie process is shown in figure C-2. In the complete procedure, note that the instrument response function (or "beam" of the telescope) is removed by dividing in the Fourier domain by the averaged magnitudes of a series of speckle images of an unresolved star (point source). Labeyrie's technique was carried out in a coherent optical signal processor using LASERs and lenses for the transform operations and photographic film for averaging the amplitudes.

C-3 Obtaining the Phases - Averaging Differences

To obtain the true image from the autocorrelation image requires phase information in the u-v plane, and Aitken [29] and Knox & Thompson [8][9] have provided an algorithm which allows this phase information to be recovered from corrupted data. In their process the u-v plane data for each speckle image is sampled on a grid by a photo-detector array. The sampled and digitized data (the u-v plane grid) is then entered into a computer for further processing (magic). Knox and Thompson noted that the difference in phase between two points on the measured u-v plane grid is composed of two terms.
One of these terms is the true image phase difference between the points, and the second is a phase error from the atmosphere. If the grid size is smaller than the size of the disturbing air cells, then the error term will be quite small. As the phase error is (nearly) constant within a cell, the error can be removed by taking the difference in phase between adjacent sample points. This is a practical consequence of the a-priori information that the disturbing cells are larger than the sampling interval. By forming a grid of the phase differences from the data it is possible to obtain a set of numbers which largely represent the true image phase differences, with only small deviations due to the atmospheric errors. It is possible to imagine this "difference grid" as consisting of a number of cells where the phase differences are correct, separated by ridges of errors which represent the transitions between the atmospheric cells. In effect, the differencing operation has outlined the edges of the cells of the atmospheric disturbing medium. If a number of aperture signals are available and the difference grids calculated for each, then an average of these grids will reduce the effect of the errors and yield a better approximation to the true phase differences. A summation made from this "averaged" grid will then give an approximation to the equivalent image phases. Knox and Thompson utilized two paths in the u-v plane when summing the averaged difference grids to obtain the estimated true phases. The average over the two paths provides a further reduction in phase errors.
The first path started parallel to the u axis and, when it reached the u coordinate of the end point, traveled parallel to the v axis. The second path started parallel to the v axis and proceeded until it reached the v coordinate, and then traveled parallel to the u axis. The two paths thus formed a rectangle with the origin at one corner and the end point at the diagonally opposite corner. These paths are illustrated in figure C-3. The use of the two directions of summation introduces one slight additional complexity: two phase difference "grids" must be calculated. One represents differences in the u direction, and the other represents differences in the v direction. It may be helpful to consider these two grids as representing the differentials du and dv. When performing the summations along a path, it is necessary to use the difference grid which is appropriate to the direction of travel. The reconstructed phases, together with amplitudes obtained by the Labeyrie procedure, were then Fourier transformed to produce the image. The diagram of figure C-4 outlines the Knox-Thompson procedure. Quite good results were obtained in their simulated observations of a moon of Jupiter. No practical astronomy observations using this technique have been reported in the literature. In summary, the optical speckle processing technique is potentially able to recover high resolution images in the presence of phase errors by "averaging", in a special way, the Fourier domain data from a series of images which are known to have the same signal but different errors.
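A one-dimensional sketch of this difference-average-sum idea (Python/numpy; the grid size, cell width, frame count, and "true" phases are all made up, and the real algorithm keeps separate du and dv grids summed along the two paths):

```python
import numpy as np

rng = np.random.default_rng(2)
n, cell, frames = 64, 8, 1000

# Made-up "true" visibility on a 1-D u-v grid: unit amplitude, smooth phase.
u = np.arange(n)
true_phase = 0.8 * np.sin(2 * np.pi * u / n) + 0.4 * np.cos(6 * np.pi * u / n)
V = np.exp(1j * true_phase)

def frame():
    """One exposure: V multiplied by a piecewise-constant atmospheric phase
    screen whose cells (8 samples wide) are larger than the grid spacing
    and whose positions shift from exposure to exposure."""
    cells = rng.uniform(-np.pi, np.pi, size=n // cell + 1)
    shift = rng.integers(cell)
    screen = np.repeat(cells, cell)[shift:shift + n]
    return V * np.exp(1j * screen)

data = np.array([frame() for _ in range(frames)])

# Step 1: adjacent-point differences in every frame (as complex phasors).
# Inside a cell the screen cancels exactly; the rarer cell-edge terms carry
# random phases and average toward zero over the frames.
avg_diff = np.angle((data[:, 1:] * np.conj(data[:, :-1])).mean(axis=0))

# Step 2: sum (integrate) the averaged differences to recover the image
# phases, up to the usual unknown constant offset.
est = true_phase[0] + np.concatenate([[0.0], np.cumsum(avg_diff)])
err = np.abs(est - true_phase).max()
```

Because the screen is constant inside a cell, most adjacent differences are error-free, and the recovered phases follow from a single running sum once the cell-edge errors have been averaged down.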
The facts that the distortion cells are finite in size, that the cells vary with time, and that the image is two dimensional all combine to allow the processing to work in the Fourier transform domain.

C-4 Speckle and LBI

It now remains to be seen how LBI resembles optical imaging. The differences are mainly those of technique rather than theory. For the radio observer, the wavelengths are much longer and the atmospheric cells are kilometers in size. The phase delays vary on time scales of tens of minutes, but the errors are still large and may be several wavelengths in extent. The LBI observer measures the data directly in the Fourier domain (u-v plane) rather than the image plane, and the LBI observations are collected over a long period of time (usually 12 hours). The long "exposure time", however, does not correspond to the long exposure optical photographs, as the radio instrument only "sees" a small part of the u-v plane at any one time, and a continuous record is made of the amplitude and phase at each sample point. The result is that the radio image is equivalent to a single speckle image, where the frozen speckle pattern is the effect of the "virtual" state of the atmosphere as it was at the times the various aperture points were sampled. Simple image recovery by the Fourier transform yields only a speckle image, and radio astronomers have long been struggling to perform true imaging with their LBI data. Early techniques ignored the phase problem and generated autocorrelation maps (images). The immediate problem with the application of speckle techniques to LBI is that after 12 hours of observation, the radio astronomer has the data equivalent to ONE speckle image.
For the speckle techniques to operate properly, a number of speckle patterns must be processed (between 40 and 150 are reported to be used [54][55]). A suitable set of speckle data could be obtained with many nights of observations, and the speckle processing procedure provides a way to combine multiple observations. The use of 50 nights to observe one object is not considered practical, however! The LBI data is collected in the u-v plane not on a grid, but along a series of arc-shaped tracks. A diagram of the u-v sampling tracks for an 8 antenna east-west LBI array is shown in figure C-5. These arc sampling tracks are created by the separation of the antennas on the earth's surface and the rotation of the earth about its axis (see section B-1.3). Samples are usually taken at a constant rate, often every minute, which is equivalent to 1/4 degree of rotation of the earth. The constant angular sampling rate means that for the small radius inner arcs, the linear sampling rate is quite high, and considerable oversampling occurs. It seemed possible that use could be made of this oversampling to obtain a set of speckle images. It should perhaps be observed at this point that while the LBI data is oversampled ALONG the arc tracks, it is usually quite undersampled in the radial direction across the arcs. The number of arc sampling tracks is limited by the number of antennas in the synthesis array, and for LBI this is usually limited to a single digit integer. The oversampling can perhaps be used on the inner arcs to create multiple exposures. If, for example, the oversampling were a factor of ten, then ten difference vectors could be obtained from the difference of every tenth data point along the arcs.
These ten difference vectors could then be averaged and summed to yield an estimate of the phase along the arc according to the Knox-Thompson method. These phases, combined with the amplitudes measured along the inner arcs, would be Fourier transformed to give an initial model image which might be useful for further development with other recovery algorithms. This procedure for LBI data reduction has several advantages. One of the most important is that it is not an iterative algorithm, and the image is obtained without the need for "convergence" or a "good starting model". The process also does not assume a "positive sky", as do other LBI imaging algorithms. This speckle process does imply that the object is confined: the oversampling requirement for the separation of the virtual speckle images forces a confined source requirement. Other LBI algorithms make use of "closure phase" (see section B-4). This is a consequence of the way the phase errors are incorporated into the data, and makes the requirement that the sums of certain phases in the Fourier transform of the final image must equal the equivalent sums of the aperture data. The proposed speckle LBI algorithm does maintain the closure-phase information in the sense that sums of the averaged phases will equal the average of the equivalent closure sums. Like all algorithms, the speckle LBI processing has some problems. Perhaps the major one is the selection of every 10th data point. This severely limits the number of data points available for the images, and may limit the process to LBI with strong sources and to array instruments with many antennas. Experimental tests on simulated data were tried to see if the technique was of any practical value.
C-5 Speckle Simulation Experiments

Two experiments (computer simulations) were conducted to see if the redundancy in LBI data could be used for phase recovery. The outline of the basic method is shown in figure C-6. A properly sampled object was used to obtain simulated aperture data. This was then sampled along the grid lines to obtain simulated LBI data records. These records were made redundant by repeating each data point a number of times. Typically, an expansion factor of 6 was used, which made the records six times longer than the originals. The expansion factor determined the size of the redundancy cells in the data; within the boundaries of these cells the image phase is known to be constant. Antenna phase errors were then added to the expanded records. These errors simulated random phase errors at each antenna, and the number and size of the errors could be controlled. By selecting one phase error, only a single constant error was applied for each antenna. For a case with 5 errors, the LBI record was divided into 5 segments and 5 independent errors were added, one for each segment. These segments defined the error cells in the records. The disturbed, expanded data was then averaged using one of two algorithms. In the first case, which simulated the Knox-Thompson method, differences were taken between the phases in adjacent redundancy cells. These difference sets were then averaged to obtain a contracted estimated difference record. The averaged record is shorter by the inverse of the expansion factor (1/6). This difference record was then summed to give the estimated true phase for the aperture. The averaged phases were combined with the true amplitudes and inverse Fourier transformed to obtain the estimated image.
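A toy reconstruction of this first simulation (Python/numpy; the record length, expansion factor, and error counts are illustrative, and as in the text the first point serves as an undisturbed phase reference):

```python
import numpy as np

rng = np.random.default_rng(3)
Q, N = 32, 6   # record length and expansion factor, as in the simulations

true_phase = 0.7 * np.sin(2 * np.pi * np.arange(Q) / Q)   # made-up record

def recover(n_errors):
    """Expand the record by N, add n_errors independent antenna phase-error
    segments, average the differences between adjacent redundancy cells,
    and sum them to re-estimate the phases."""
    expanded = np.repeat(true_phase, N)                   # length Q * N
    segments = np.array_split(np.arange(Q * N), n_errors)
    error = np.concatenate(
        [np.full(len(s), rng.uniform(-np.pi, np.pi)) for s in segments])
    data = np.exp(1j * (expanded + error))
    # Difference cell i+1 against cell i, averaged over the N redundant
    # samples of each pair, then contracted back to length Q - 1.
    d = (data[N:] * np.conj(data[:-N])).reshape(Q - 1, N).mean(axis=1)
    est = true_phase[0] + np.concatenate([[0.0], np.cumsum(np.angle(d))])
    return np.abs(est - true_phase).max()

few = recover(1)     # a single constant error per record: recovered exactly
many = recover(20)   # many error segments: the averaging cannot remove them
```

This toy reproduces the reported pattern: a single constant error per record cancels exactly in the cell differences, while many independent error segments leave boundary errors that the limited redundancy cannot average away.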
Because the data records have been expanded and contracted by equivalent amounts, the new records will generate an image of the same size as the original. Some representative results are shown in figures C-7 & C-8. Each figure shows the (true) undistorted image, the distorted uncorrected image, and the "corrected" image obtained by phase averaging. The maximum size of the antenna phase errors is expressed as a percentage of 360 degrees, and the number of errors per antenna indicates the number of independent phase-error segments applied to the signals from each antenna. It was apparent from these tests that while the method would usually "improve" an image, it did not give results that could really be considered useful. While somewhat good reconstruction (10% errors) was obtained when the number of errors was small (<3) or the error sizes were small (<18 degrees), no real improvement was noted for cases of many errors or for errors of large size. Note that during these tests, the first data point of each record was left undisturbed. This was done to provide a way to maintain the correct relative phase between records. This condition would not be true in real data. Nevertheless, it was introduced into the simulation, as the method was not expected to work without it. With real data, because of the undersampling in the radial (across-arc) direction, there is no redundancy available to use to obtain a relative phase between records. Note, however, that from these tests it was apparent that if the number and size of phase errors is small, then image recovery is possible if the relative phase can be established. A slightly different method of phase averaging was also tried. This attempted to take advantage of the knowledge of the redundancy cell locations.
In this case, rather than differencing between redundancy cells, the differences were taken among the points within the redundancy cell. These differences should always be zero, unless a distortion-cell boundary passes through the redundancy cell. The differences thus represent the edges of all the distortion cells that do not coincide with the edges of the redundancy cells. These differences were then used to correct the phases. If no cell boundaries coincided, then quite good recovery should be possible. Representative images processed in this manner are shown in figures C-9 & C-10. Again, reasonable recovery was obtained for few errors, but no improvement was observed for many errors (>5). It can be concluded from these tests that the techniques of averaging to compensate for the LBI phase errors are not successful. Aside from the difficulties of establishing relative phase between baselines, there are not sufficient independent data sets to allow the averaging to be useful. In contrast to the Knox-Thompson optical process, where separate speckle images represent different distortion patterns, the LBI data has only one error-cell pattern, and it is not possible to average it away. Note, however, that these averaging techniques do provide a means to combine a number of separate observations of the same source.
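The edge-detection idea behind this second algorithm is easy to see in a toy record (Python/numpy; the lengths, error count, and phase values are made up):

```python
import numpy as np

rng = np.random.default_rng(4)
Q, N = 32, 6                       # record length and redundancy factor

# A redundant record: each of the Q phases repeated N times, so the phase
# is known a priori to be constant inside every redundancy cell.
expanded = np.repeat(np.linspace(0.0, 1.5, Q), N)

# Five independent antenna phase-error segments, as in the simulations.
segments = np.array_split(np.arange(Q * N), 5)
error = np.concatenate(
    [np.full(len(s), rng.uniform(-np.pi, np.pi)) for s in segments])
data = expanded + error

# Differences taken WITHIN each redundancy cell are exactly zero unless a
# distortion-cell boundary passes through that cell; nonzero differences
# therefore outline the edges of the disturbing cells.
within = np.abs(np.diff(data.reshape(Q, N), axis=1)).max(axis=1)
flagged = np.nonzero(within > 1e-9)[0]
```

Only the distortion-cell boundaries that fall inside a redundancy cell are flagged; a boundary that happens to coincide with a redundancy-cell edge escapes detection, which is the coincidence failure mode noted in the text.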
FIGURE C-1a  Imaging with Clear Air
FIGURE C-1b  Imaging with Atmosphere (Speckle Image)
FIGURE C-2  Labeyrie Algorithm for obtaining Autocorrelation Image (series of speckle images; Fourier transform each; average the squared magnitudes of each point; remove "beam"; transform the averaged magnitudes with zero phases to obtain the autocorrelation image)
FIGURE C-3  Multiple Paths in u-v Difference Plane
FIGURE C-4  Knox-Thompson Algorithm for Image Recovery (difference grids for the du & dv directions are averaged and summed to recover the estimated true phases; the averaged squared magnitudes supply the amplitudes; transform to obtain the estimated image)
FIGURE C-5  U-V Sampling Tracks for East-West Synthesis Array (8 antennas, source declination 45.0)
FIGURE C-6  Simulated LBI Speckle Imaging (sampled sky; u-v grid; simulated LBI records of length Q; expand phase records by factor N; difference Nth data points; average to a record of length Q; recover image phase by summing differences; transform averaged phases with true amplitudes)
FIGURE C-7  Knox-Thompson Algorithm Simulated: A. undistorted, 32x32 pixels; B. distorted image, 30%, 1 error/antenna; C. averaged image from B; D. distorted image, 30%, 10 errors; E. averaged image from D
FIGURE C-8  Knox-Thompson Algorithm Simulated (many errors): B. distorted image, 30%, 3 errors/antenna; C. averaged image from B
FIGURE C-9  Second Phase Averaging Algorithm: A. undistorted, 32x32 pixels; B. distorted image, 50%, 3 errors/antenna; C. averaged image from B; D. distorted image, 50%, 5 errors; E. averaged image from D
FIGURE C-10  Second Phase Averaging Algorithm (continued)

D Global Image Parameter Maximization

It was noted in the earlier section B-3 that most radio astronomy images have their signals confined to a relatively small area of the image. An image recovery algorithm that corrects phases in order to obtain minimum image spreading seems a worthwhile possibility, and in fact is the basis of another optical imaging technique called "adaptive optics". This section will first review the "sharpening" or adaptive methods as used for correcting optical telescopes. The suitability of these methods for LBI will be investigated, and some experiments with simulated LBI data will be illustrated which show that the method is not powerful enough for use with the large phase errors present in LBI data. It is also shown that no method which, by itself, selects an image phase correction by searching only for a maximum in a global image parameter will be successful. Global image parameters, such as sharpness or entropy, have several local maxima, and the largest is for the zero-phase image; any parameter maximization routine will always work toward the zero-phase result. This reports on new work which has not appeared before in the literature in this application. The contribution of this section is the modification of a technique used in optics for the LBI imaging problem, the testing of the modified technique, and the conclusion that the "global" techniques are not solutions to imaging problems with large phase errors.
D-1 Adaptive Optics

Muller & Buffington (M&B) [15] describe a method for the processing of optical images aimed at real-time correction of atmospheric effects for astronomy. Their equipment is described in [4], and McCall, Brown, and Passner (MB&P) [13] describe a similar apparatus. An adaptive optics system is illustrated in figure D-1. The system consists of two significant elements: a "phase corrector" and a "sharpness sensor". The phase corrector is placed in the optical path and allows the introduction of a phase shift to compensate for the atmospheric effects. The phase corrector may also take the form of a "rubber mirror" in a slightly different optical arrangement. In practice, the mirror is a faceted surface with piezo-electric (or magnetic) actuators to move the segments. The sharpness calculator is a sensor placed in the image plane which provides a global parameter of the image. The basic idea is to adjust the rubber mirror in order to improve the image as indicated by the sharpness calculator. Muller & Buffington used the following simple summation as the "sharpness" parameter (S) for their tests:

  S = Σn In²   (D.1)

where In are the image intensities. They also found a number of other parameters to be useful, such as:

  S = Σn In Log(In)   (D.2)
or
  S = Σn Mn In²   (D.3)

(where Mn is an arbitrary mask function). M&B credit Babcock [1] with first suggesting the use of a compensating plate controlled by tests on the image. The assumption, which is in some measure supported by theory, is that the sharpness parameter will reach a maximum when the atmospheric distortion has been removed by the compensating mirror.
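The assumption can be illustrated directly from definition (D.1) (Python/numpy; a one-dimensional aperture of illustrative size):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64
aperture = np.ones(n)              # uniformly illuminated 1-D aperture

def sharpness(phase):
    """Eq. (D.1): S = sum of I_n^2, with image intensities I_n = |field|^2
    for an aperture carrying the given phase distribution."""
    I = np.abs(np.fft.ifft(aperture * np.exp(1j * phase))) ** 2
    return float(np.sum(I ** 2))

flat = sharpness(np.zeros(n))                          # corrected aperture
blurred = sharpness(rng.uniform(-np.pi, np.pi, n))     # atmospheric screen

# A flat (fully corrected) aperture phase concentrates the light into the
# diffraction-limited peak and maximizes S; residual random phase spreads
# the image and lowers S, which is the gradient the feedback loop climbs.
```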
In operation, the machine measures the sharpness parameter and continually adjusts the mirror to increase the sharpness. The process may operate in real time, and is reported by MB&P to give a stable image. Similar equipment and principles are used to assist the propagation of LASER beams through the atmosphere (see for example O'Meara [14]). In these cases the mirror is adjusted to give the maximum 'glint' return signal from the target. Hamaker, O'Sullivan and Noordam (HS&N) [7] present a heuristic indication of why the sharpness parameter Σ In² should be a maximum for the corrected image. In essence, for an incoherent imaging system, the sharpness parameter can be related to the summation of the product of the visibility function (VF) V(u,v) magnitude squared multiplied by the optical transfer function (OTF) H(u,v) magnitude squared (where (u,v) are the coordinates in the aperture plane):

  S = Σn { In² } = Σu,v { |V(u,v)|² |H(u,v)|² }   (D.4)

This is effectively a statement of Parseval's theorem for Fourier transforms. The OTF is a combination of the aperture, the atmospheric distortions, and the rubber mirror corrections. HS&N then argue that as the visibilities, V, are set by the object, the sharpness will have a maximum when the OTF is a maximum. The OTF, being the magnitude of a complex function, will be a maximum when its phase is zero (or constant or linearly sloped). This zero-phase OTF however represents an undistorting aperture, and the sharpness is thus a maximum when the undistorted object is imaged. Although this argument is helpful for the sharpness parameter Σ In², it cannot be directly applied to the other parameters that M&B found useful. M&B present a similar argument that makes use of the Fresnel-Kirchhoff equation of diffraction in place of Parseval's theorem. M&B, Brown [3], and MB&P all report success with this procedure, either in the form of simulations or preliminary practical experiments. The system is intended to operate in real time, and this imposes two limitations. The feedback system must respond and adjust the mirror in a time less than the atmospheric variations, and the object must be bright enough to give a sufficient number of photons during each adjustment period for the sharpness detector. Dyson [6] has studied algorithm design for the feedback control system. This suggests that the system will be stable for a proper balance of object strength and number of mirror segments. There are also practical considerations in the construction and control of the rubber mirror. For best image resolution, the number of segments must be large; however, this reduces the time available for each segment adjustment. In some cases there can also be mechanical difficulties associated with interaction of nearby segments; see for example Pearson & Hansen [16]. It should be noted that the correction by use of a rubber mirror assumes the distortion is caused by atmospheric effects that are relatively near the aperture. This is referred to as the "isoplanatic patch" requirement, and in practice it limits the correctable field of view of the telescope to a few minutes of arc (for optical images). While in principle it would be possible to recover several patches by separately correcting each region, this is difficult in practice due to different phase offsets for each region.
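The Parseval identity behind eq. (D.4) can be checked numerically in one dimension (Python/numpy; the object and phase screen are made up, and numpy's unnormalized transform convention puts a factor 1/N on the Fourier-domain sum):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 64

obj = rng.uniform(0.0, 1.0, n)                      # made-up object intensities
pupil = np.exp(1j * rng.uniform(-np.pi, np.pi, n))  # disturbed aperture field
psf = np.abs(np.fft.ifft(pupil)) ** 2               # incoherent point response

# Incoherent imaging: the image is the object convolved with the PSF, so its
# spectrum is V(u) * H(u), the visibility times the optical transfer function.
V = np.fft.fft(obj)
H = np.fft.fft(psf)
img = np.real(np.fft.ifft(V * H))

S_image = np.sum(img ** 2)                          # sharpness, image plane
S_fourier = np.sum(np.abs(V) ** 2 * np.abs(H) ** 2) / n   # eq. (D.4) side
```

The two sums agree to machine precision, which is all that Parseval's theorem asserts; the argument about where the maximum lies rests on the further claim that |H| is largest for a flat OTF phase.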
The rubber mirror corrector also assumes that the distortion shifts the location of components of the image and does not attenuate the signal. This is an assumption that the atmospheric distortion cells are "flat" phase plates and not absorbers. One of the limiting difficulties with the system is that the resolution of the output image is proportional to the number of facets in the correcting mirror, or the number of possible phase corrections. More cells will allow finer correction, and thus a better corrected image. However, as the correcting cells become smaller, the contribution of each to the image becomes smaller. Roughly, a phase change in a cell of an aperture with a total of N cells will make a 1/N change in the sharpness parameter of the image. For work with instruments and computers, this places a practical limit on the correction capability. While M&B propose a real-time correction system, Brown [3] proposes the method for post-detection processing. In this case, the detected speckle image (photograph) is given to a computer which calculates the necessary phase corrections "off-line". While either method can be used for optical astronomy, the post-detection processing has the advantage of not needing the mechanical rubber mirror. Resolution, as controlled by the number of phase correction cells, can be as fine as practical given sufficient computing time. Note that the post-detection processing is restricted to non-redundant apertures. Post-detection processing is also the only method available for LBI. Real-time LBI reduction is not possible for two reasons.
In the first place, LBI signal correlation is not done in real time; and secondly, the LBI arrays do not cover a sufficient portion of the aperture at any instant of time to allow imaging.

D-2 Global Parameters and LBI Imaging

The possibility of adopting the adaptive optics process for LBI imaging can be thought of in terms of figure A-1.4. This shows the radio LBI image as a combination of signals received from a number of antennas. The image is destroyed (spread out) by the random phase errors introduced by the atmosphere at each antenna. The proposed imaging technique is to introduce into the processing a compensating phase correction factor for each antenna. These factors are adjusted to minimize the spreading of the image (or equivalently to maximize the global image parameter). The voltage signals received from each antenna, T_i exp(i t_i), are correlated in pairs to form the measured visibilities:

D_ij(u,v) = T_i T_j exp( i [t_i - t_j] )   (D.5)

where (u,v) are coordinates in the measurement (aperture) plane determined by the separation of the antennas and the time of the measurement. These data, D(u,v), are effectively composed of three factors: the true object visibilities X(u,v), the array sampling pattern β(u,v), and the atmospheric distortion α(u,v). In general, these are complex Hermitian functions, and the Fourier transform of their product is the distorted image I_d:

I_d(x,y) = FT { D(u,v) } = FT { X(u,v) β(u,v) α(u,v) } = FT { T_i T_j exp( i [t_i - t_j] ) }   (D.6)

where (x,y) are coordinates in the image domain, and FT denotes the Fourier transform operation. For reference, the "correct" image, I_c, is defined as the Fourier transform of the true object visibilities:

I_c(x,y) = FT { X(u,v) } .
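Equation (D.5) can be sketched directly: each baseline visibility is the product of one antenna voltage with the conjugate of another. The amplitudes and phases below are illustrative values, not real data:

```python
import numpy as np

# Sketch of equation (D.5): visibilities formed by correlating the
# antenna voltage signals T_i exp(i t_i) in pairs.
rng = np.random.default_rng(1)
n_ant = 8
T = rng.uniform(0.5, 1.5, n_ant)          # antenna signal amplitudes
t = rng.uniform(0, 2 * np.pi, n_ant)      # antenna phases (object + atmosphere)
signal = T * np.exp(1j * t)

# D_ij = T_i T_j exp(i [t_i - t_j]) for every baseline i < j
baselines = [(i, j) for i in range(n_ant) for j in range(i + 1, n_ant)]
D = {(i, j): signal[i] * np.conj(signal[j]) for (i, j) in baselines}
assert len(D) == n_ant * (n_ant - 1) // 2          # 28 records for 8 antennas

# Any per-antenna phase errors cancel around a closed triangle, which is
# the "phase-closure sum" property used later in the chapter.
closure = np.angle(D[(0, 1)] * D[(1, 2)] * np.conj(D[(0, 2)]))
assert abs(closure) < 1e-9
```

The closure-sum check at the end anticipates the phase-closure constraint discussed in section D-5.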
(D.7)

This image represents the object brightness distribution or intensity, and is in units of W m⁻² Hz⁻¹ sr⁻¹. Note that while the correct image, I_c, must be positive, the distorted image, I_d, may have both positive and negative components due to the presence in equation (D.6) of the antenna sampling pattern term, β, and the distortion term, α. The product in the Fourier transform domain represents convolution in the image domain, so that the distorted image, I_d, is effectively the convolution of the object brightness with the beam pattern and the distorting effect of the atmosphere. The complex functions of equation (D.6) can be expressed in polar form with magnitude and phase components:

I_d(x,y) = FT{ G(u,v)exp(ig(u,v)) B(u,v)exp(ib(u,v)) A(u,v)exp(ia(u,v)) } .   (D.8)

The object function is in general complex:

X(u,v) = G(u,v)exp(ig(u,v)) .

The array function β is effectively a sampling pattern such that:

β(u,v) = B(u,v)exp(ib(u,v)) where b(u,v) = 0 and B(u,v) = {0 or 1} .   (D.9)

The distorting function is primarily a phase function such that:

α(u,v) = A(u,v)exp(ia(u,v)) where a(u,v) is a real function and A(u,v) = 1 ;   (D.10)

thus from equation (D.8):

I_d(x,y) = FT { G(u,v) B(u,v) exp( i [g(u,v)+a(u,v)] ) } .   (D.11)

The data thus consist of three parts: the measured correlation coefficient amplitudes, G, the known array sampling pattern, B, and the measured composite phase g + a. The phase error term, a, can be considered to be composed of uncorrelated errors, e_i, accountable to each individual antenna. The phase t_i at each antenna is thus:

t_i = g_i' + e_i   (D.12)

where g_i' is the antenna signal phase due to the object distribution, and e_i is the phase error at the receiver output due to the atmosphere.
The measured composite phase, g + a, on the baseline between antennas i and j is thus composed of two components, (g_i' - g_j') and (e_i - e_j). If these phase terms are substituted into equation (D.11), then:

I_d(x,y) = FT{ G(u,v)B(u,v) exp( i [g_i' - g_j' + e_i - e_j] ) } .   (D.13)

Note that if the composite phase data g + a are ignored, then a symmetric image is produced which is referred to as the "zero-phase" image:

I_0(x,y) = FT { G(u,v) exp(0) } = I_c(x,y) * FT { exp(-ig(u,v)) }   (D.14)

where * denotes the convolution operation. This image is equivalent to the correct image convolved with an image having the conjugate of the correct phase. The correct image could be obtained from the zero-phase image if the phase function g(u,v) could be determined. There are, of course, an arbitrarily large number of functions which meet this requirement, and additional constraints must be introduced to select a unique image. The LBI imaging problem is to find a suitable phase function g(u,v) for the deconvolution, or equivalently to determine the antenna phase errors, e_i.

D-3 Global Image Parameters

A global image parameter can be defined as the summation of some arbitrary function of the image intensities:

S = Σ f(I_n)   (D.15)

where f denotes the function, and the summation is taken over all image elements, I_n. For example, the configurational entropy is defined as:

S_e = - Σ { (I_n / Σ I_i) log( I_n / Σ I_i ) }   (D.16)

where the normalizing factors Σ I_i are necessary if the entropies of two different images are to be compared. Similarly, the sharpness is defined as:

S_s = Σ I_n^q   (D.17)

where q is an integer.
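The parameters of equations (D.15)-(D.17) are simple pixel-wise sums, and so are unchanged by any re-arrangement of the pixels. A minimal numpy sketch (the eps guard in the entropy and the example images are illustrative assumptions, not part of the definitions):

```python
import numpy as np

# Minimal sketches of the global image parameters of equations
# (D.15)-(D.17): sharpness and configurational entropy.
def sharpness(img, q=3):
    """S_s = sum I_n**q  (equation D.17)."""
    return np.sum(img ** q)

def entropy(img, eps=1e-12):
    """Configurational entropy (equation D.16), for positive images."""
    p = img / np.sum(img)            # normalize so entropies can be compared
    return -np.sum(p * np.log(p + eps))

img = np.zeros((8, 8))
img[3, 3] = 1.0                      # a compact, "sharp" image
flat = np.full((8, 8), 1.0 / 64)     # a smooth image with the same total flux

# Sharpness favours the compact image, entropy the smooth one, and
# both are invariant under any re-arrangement of the pixels.
assert sharpness(img) > sharpness(flat)
assert entropy(flat) > entropy(img)
assert np.isclose(sharpness(img), sharpness(np.rot90(img)))
```

The last assertion illustrates the "shape-blindness" of these parameters, which is discussed next.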
The entropy parameter is often thought of as a measure of the "smoothness" of an image, while the sharpness is, as its name implies, a measure of the "peakedness" of an image. Note, however, that neither parameter takes any account of the "shape" of the image. Given a set of pixel values, the same global parameter value will be calculated for any re-arrangement of the elements. An extremum of the global image parameter is thus one of a set of images encompassing all re-arrangements of the elements. To use the global parameter as a means of choosing a corrected image, some additional information must be provided to distinguish a particular image of the set. Figure D-2 illustrates an example image and the calculated "sharpness" values for various conditions of phase errors. Note that the sharpness is a maximum for the zero-phase image, and that it also peaks at the correct image, with a reduction as the phase errors are made larger.

D-4 Correction Algorithm

Given the distorted image represented by equation (D.13), one technique of correction is to introduce compensation terms c_i for each antenna to cancel out the effects of the phase errors e_i. The compensated image, I_m, is then given by multiplying the data in equation (D.13) by the correction term exp(-i[c_i - c_j]):

I_m = FT{ G(u,v)B(u,v) exp( i [g_i' - g_j' + e_i - e_j] ) exp( -i [c_i - c_j] ) } .   (D.18)

The problem is thus reduced to finding a set of corrections, c_i, for each antenna which correctly compensate for the errors e_i. One possible way to select these correction terms is to pick the set that maximizes a global parameter of the compensated image. Thus we select c_i to maximize S:

S = Σ f(I_m) ,   (D.19)

S = Σ f( FT{ G(u,v)B(u,v) exp( i [g_i' - g_j' + e_i - e_j] ) exp( -i [c_i - c_j] ) } ) .
(D.20)

Given that the amplitudes, G and B, and the object phases, g', are fixed, the extrema of this function will depend on the sum of the phase corrections, c, and the measured composite phase, (g+e). In general this function may have a number of extrema. If the function, f, is monotonic, such as a power or log of the intensities, then some of these extrema will occur when the phases g, e, and c are all very nearly equal. One extremum of S will occur when

c_i - c_j = g_i' - g_j' + e_i - e_j   (D.21)

which gives all zero phases and forms the zero-phase image. By virtue of the triangle inequality for complex numbers, this extremum will be the global maximum. (If the complex numbers are thought of as vectors, then they can be seen to sum to a greater value if they are all aligned in the same direction than if they have a random orientation.) Another local extremum of S will occur when

c_i - c_j = e_i - e_j   (D.22)

which gives correct compensation and forms the correct image. Another local extremum will occur when

c_i - c_j = g_i' - g_j'   (D.23)

and this will form a distorted image which is, in effect, the convolution of the zero-phase image with the atmospheric distortion pattern. This extremum will not be very pronounced if the errors, e_i, are large and widely distributed. Other minor local extrema may also occur depending on the details of the terms g, e and c. This analysis shows that a global image parameter will have several local extrema, one of which will correspond to the correct image. A correction process may be expected to be effective for small phase errors, when the distorted image is not too far away from the local extremum at the correct image.
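The character of the zero-phase extremum of equation (D.21) can be illustrated directly: an image formed from the visibility amplitudes alone, with all phases cancelled, is point-symmetric regardless of what the antenna phases were. A minimal numpy sketch with an arbitrary test object:

```python
import numpy as np

# Sketch of the "zero-phase" image of equation (D.14): transforming the
# visibility amplitudes alone, G = |D|, yields a symmetric image that
# carries no memory of any antenna phase errors.
obj = np.zeros((64, 64))
obj[20:25, 30:40] = 1.0                  # a small positive test object
D = np.fft.fft2(obj)                     # simulated visibilities

I0 = np.fft.ifft2(np.abs(D)).real        # zero-phase image

# Point symmetry on the periodic grid: I0(x, y) == I0(-x, -y)
flipped = np.roll(np.flip(I0), 1, axis=(0, 1))
assert np.allclose(I0, flipped, atol=1e-9)

# Total flux is preserved: the image sums to |D(0,0)| = sum(obj) = 50
assert np.isclose(I0.sum(), obj.sum())
```

Whatever the true object, the amplitude-only transform lands on this symmetric solution, which is why an unconstrained maximization is drawn towards it.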
For larger phase errors, confusion among the many local extrema is to be expected. Note that this situation, with a coherent imaging process and visibility phase errors, is distinct from the incoherent optical imaging process discussed by Hamaker, O'Sullivan and Noordam [7]. It is concluded that, while global image parameters may have some use for the correction of images with visibility phase errors, difficulty will be experienced in distinguishing the many local extrema and in choosing the correct image.

D-5 Experiments

Based on this formulation for a correction algorithm, some experiments were conducted to test the effectiveness of the process. Although the analysis suggested the process would only be of limited use, it seemed worthwhile to test it because of the presence of the B term in equation (D.20). This factor represents the array sampling pattern, and equation (D.20) indicates that an extremum of the global parameter can be calculated without the need to deconvolve the beam pattern from the image. Other methods of LBI image correction, such as the widely used self-calibration method [30], must deconvolve the beam for each iteration, and this represents a significant computing burden. An algorithm which functions without the need for the deconvolution at each iteration would be a worthwhile development. The deconvolution of the beam would then be necessary only once, on the phase-corrected image. The simulation experiments consisted of selecting a test object, sampling it to obtain the simulated visibilities, adding pseudo-random antenna phase errors, and then attempting to correct the image by maximizing the image sharpness. The general process is outlined in the flow diagram of figure D-3.
Equation (D.17) with a value for q of 3 was used as the sharpness function. Note that a value for q of 2 cannot be used for the correction of phase errors: by Parseval's theorem, the square of the Fourier transform is not affected by the phases. The cube function, however, is affected by the phases, and it can be used as the basis for a correction algorithm. Similarly, note that fractional values of q, as discussed by Nityananda and Narayan [52], are also not suitable because of the negative elements in the image at the stage of equation (D.20). The phase corrections were estimated by a successive-approximation algorithm. A starting phase correction estimate (c) was selected by the operator. The program then tried this correction on the data by calculating the sharpness, S_s, of the trial images with values of antenna phase shifted by +c and -c. These trial values of sharpness were compared with the initial value, and the correction (+c, -c, or 0) which gave the greatest increase in the sharpness was selected as the approximate correction and applied to the data. The zero correction was selected if both trial corrections decreased the image sharpness from the initial value. The value of c was then divided by two and the trial process repeated. Typically, six iterations were used, so that the final value of the trial correction was 2⁻⁶ of the starting value. The diagram of figure D-4 illustrates this algorithm. This process is similar to the successive-approximation technique used in some forms of analog-to-digital voltage converters. During the development of this process, a plot was made of the sharpness function, S_s, as a function of antenna phase corrections from 0 to 360 degrees.
This is shown in figure D-5, and it has a smooth curve with several peaks. The successive-approximation process was selected as a simple algorithm for determining the phase corrections within a constrained range. Note that if the sharpness had exhibited a single peak, then the correction could be determined directly from two trial phase shifts of π/2 (Muller & Buffington [15]). When more than one peak is present, a less direct method, such as successive approximation, is required. Figure D-5 shows several plots of sharpness vs phase. In this case the sharpness is normalized and plotted as S/S₀, and the phase varies from 0° to 360°. The aperture points are near the center of the U-V grid. Note that some antenna phases have much more effect on the image sharpness than others, and that the profiles can be complicated. The difference in sensitivity can be accounted for by the different aperture amplitudes sampled by the short and the long baselines. While the phase component ranges between 0° and 360° in all areas of the aperture, the amplitudes are generally sharply peaked at the origin. The phases sampled by the short baselines (near the origin) thus have a greater effect on the image due to the larger amplitudes in that region. Note in figure D-5 the differences in the size of the parameter variation with antenna number. With this array, antennas 4 and 5 form short baselines, and antennas 1 and 2 involve the long baselines. These phase-correction estimates were applied for each antenna in turn, and then for all aperture sampling periods. One antenna phase was held constant as a reference to keep the image fixed in position. The whole process was repeated several times until all the corrections calculated were very small.
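The successive-approximation search described above can be sketched as follows. The data layout (a gridded visibility array with a mask marking where one antenna's phase enters) is an assumption for illustration; the halving loop and the (+c, -c, 0) trial selection follow the description in the text:

```python
import numpy as np

# Sketch of the successive-approximation phase search of section D-5.
# `vis` is a gridded visibility array; `mask` is 1 on the (u,v) cells
# assumed to involve one antenna, 0 elsewhere (an illustrative layout).
def sharpness(vis):
    """Image sharpness with q = 3, as in equation (D.17)."""
    return np.sum(np.fft.ifft2(vis).real ** 3)

def refine_antenna_phase(vis, mask, c0=np.pi / 4, n_iter=6):
    """Halving search: try +c and -c, keep the trial (or 0) giving the
    largest sharpness, then divide c by two.  After six iterations the
    final trial step is 2**-6 of the starting value."""
    total, c = 0.0, c0
    for _ in range(n_iter):
        best = max((0.0, +c, -c),
                   key=lambda d: sharpness(vis * np.exp(1j * d * mask)))
        vis = vis * np.exp(1j * best * mask)
        total += best
        c /= 2.0
    return total, vis

# Demo with a made-up object and a made-up antenna mask.
obj = np.zeros((16, 16))
obj[4, 5], obj[9, 9] = 1.0, 0.5
mask = np.zeros((16, 16))
mask[1, :] = 1.0                     # cells assumed to involve the antenna
distorted = np.fft.fft2(obj) * np.exp(1j * 0.4 * mask)
corr, recovered = refine_antenna_phase(distorted, mask)
assert sharpness(recovered) >= sharpness(distorted)
```

Because the zero step is always among the trials, the sharpness can never decrease; as the text notes, however, the search can still settle on the wrong local peak.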
By selecting the size of the initial starting value of c, it is possible to restrict the maximum phase correction to about 2c, and thus to limit the range of phase adjustment over which the program would search for the peak in the sharpness function. By applying the phase corrections to the signals from each antenna in this manner, the error mechanism of the instrument is being modeled, and the "phase-closure sums" (Jennison [17]), which are known to be correct, are preserved in the corrected image. Three of the objects tested are illustrated here. These illustrations are 64 by 64 pixels in size. Figure D-6a shows an image of the source CTB1 made with the D.R.A.O. aperture-synthesis telescope [12]. This is, by definition, the correct image I_c. Figure D-7a shows a second test image which was prepared from the observed image by removing the strong point sources near the top portion of the image. These sources represent about 20% of the integrated brightness of figure D-6a. Figure D-7a is considered to be a "difficult" object to image because it has a few small areas of compact brightness together with a large extended area. These two objects were tested to illustrate the effect on the imaging process of the presence of strong features in the object. Figure D-9a shows a preliminary radio image of a galaxy made with the Very Large Array telescope. This object combines a strong point source with a connected, extended area of brightness. The images were Fourier transformed to obtain the simulated aperture signals. The aperture was then sampled into 28 records representing the 28 correlations from an eight-antenna array. In processing practical data, a coordinate conversion is necessary to go from the curved aperture sampling tracks formed by the earth's rotation to a grid suitable for a Fast Fourier Transform (FFT) process.
In order to simplify the simulation experiments, an artificial set of linear baseline sampling tracks was used which coincided directly with grid lines. This sampling pattern is similar to what would be obtained in practice from an array which includes wide North-South antenna spacings and which is observing a source at 0 degrees declination. Random antenna phase errors which were uniformly distributed over a limited range were then added to the visibility records, and the inverse FFT of this disturbed data gave a distorted image, as typified by figure D-6b.

D-6 Results

The results of the experiments using the sharpening process were both tantalizing and discouraging. Many experiments indicated a significant improvement in the image. However, in no case was a fully satisfactory image recovered. The difficulties essentially relate to the multiple extrema in the sharpness parameter and the sensitivity of the parameter to the sidelobes in the dirty image. Figure D-7b shows the object of figure D-7a with phase errors limited to π/5. With the errors limited to this range, the phases remain in the correct quadrant and the object is recognizable in the distorted image. This condition in the data is equivalent to having the sign of the phases correct. Figure D-7c shows the image recovered by sharpening with limited phase corrections. A significant improvement has been made over the disturbed image, although it is by no means completely correct. As a final processing step, this image would normally be deconvolved from the beam to remove the artifacts due to the sidelobes. This would most often be done by the use of the CLEAN algorithm [Section B-2]. Figure D-8d shows the sharpened result from data with substantially larger phase errors applied to the object of figure D-8a.
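The simulation pipeline just described (object, FFT, sampled baseline records, antenna phase errors, inverse FFT) can be sketched roughly as below. The grid-aligned tracks, the error range, and taking the real part at the end (Hermitian symmetry of the sampled data is not enforced) are all simplifying assumptions:

```python
import numpy as np

# Rough sketch of the simulation pipeline of figure D-3: a test object
# is transformed, sampled into 28 baseline records on grid-aligned
# tracks, per-antenna phase errors are added, and the inverse FFT gives
# a distorted image.
rng = np.random.default_rng(2)
N, n_ant = 64, 8

obj = np.zeros((N, N))
obj[28:34, 30:38] = 1.0                     # arbitrary extended test object
vis = np.fft.fft2(obj)                      # simulated aperture signals

err = rng.uniform(-np.pi / 5, np.pi / 5, n_ant)   # antenna phase errors
sampling = np.zeros((N, N))
phase = np.zeros((N, N))
row = 1
for i in range(n_ant):
    for j in range(i + 1, n_ant):           # 28 baselines, one grid row each
        sampling[row, :] = 1.0
        phase[row, :] = err[i] - err[j]     # baseline error e_i - e_j
        row += 1

distorted = np.fft.ifft2(vis * sampling * np.exp(1j * phase)).real
```

The `distorted` array plays the role of the dirty, phase-disturbed image that the sharpening algorithm then attempts to correct.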
These errors were in a substantially larger range and more closely simulate the typical LBI data condition. In this case the application of the sharpening algorithm has constructed an image resembling the zero-phase image. The phases, in fact, slope linearly across the aperture, and this positions the peak away from the center of the field. It was not possible for the algorithm to locate the correct local peak in the sharpness parameter when starting from data which have large phase errors. As the zero-phase image sharpness is the global maximum, it is difficult to avoid finding it in the search. A similar result was also produced when the "entropy" parameter S_e was tried as the image parameter. Several other parameters were also tried. These included a distance-weighted sharpness S_r (D.24), an entropy form S_e (D.25), and

S_e' = Σ I_n log(I_n) for I_n > 0 .   (D.26)

It was expected that the choice of the sharpness function would have some effect on the final image. The parameter Σ I_n³ emphasizes the brightest pixels in the image, and thus favours sharp, compact images. Note also that Σ I_n³ preserves the sign, and this discriminates against negative beam sidelobes in the image. Other sharpness functions were tried to see if they could influence the image in the correct direction. The parameter of equation (D.24), for example, weights the pixel magnitude by the distance (R) from the image center. As the zero-phase image is sharply peaked at the origin, this sharpness function could be used to discriminate against its production. This was indeed found to be the case; however, the images produced, while no longer of the zero-phase type, were nowhere near correct. The parameters (D.25) and (D.26) were tried by analogy with the "maximum entropy" (MEM) methods. The entropy parameter favours a smooth image.
The difficulty with the entropy calculation is that log(I) is not defined for negative or zero image pixels. The image in this case can be expected to have (real) negative elements due to the beam sidelobes. These parameters were tried without success. They both generated smooth, almost featureless maps. Figure D-6c shows the result of processing the object of figure D-6a, which included a strong point source. Phase errors in the range π/2 were applied to the data for this object. Again, although the recovered image is not completely correct, it is a substantial improvement over the initial distorted image. A similar experiment to recover the object without the bright feature produced an image resembling figure D-8c. Note that the presence of a strong component in the object allows the correction of larger phase errors. Figure D-9c shows the sharpened image recovered from the effects of π/2 phase errors in the data for the object of figure D-9a. Again in this case, a substantial improvement has been made over the initial distorted image. Note, however, that the single-sided "jet" has developed a twin counterpart, indicating a bias towards the zero-phase image. This extra feature, however, is of quite low amplitude, and overall the image compares reasonably with the object. From the astronomical point of view, this extra feature is a significant failure in the image. The size of the phase errors that can be corrected with this algorithm without confusion among the multiple extrema depends a great deal on the object being imaged. For the objects with compact features illustrated here, and others that have been tested, π/4 seems a practical limit.

D-7 Conclusions and Comments

In all of these examples it should be noted that the extended areas of brightness have been reproduced by the algorithm, and they are not compacted into a point as might naively be expected from a sharpening process.
This indicates the actual presence of a local maximum in the sharpness parameter for the correct image, although it may be small and difficult to find. Part of the attractiveness of the algorithm is based on the premise that the sharpness function can be calculated directly from the image, as indicated by equation (D.20). The parameter Σ I_n³, however, preserves the sign of the pixels, with negative values reducing the summation. This has the effect of discriminating against negative regions in the corrected image and is, in fact, imposing a form of positivity constraint on the image. However, this constraint cannot strictly be applied to the image at this stage, owing to the presence of the negative beam sidelobes. This is one reason why the example images in figures D-6c and D-9c are not an exact recovery: the algorithm has attempted to correct for the sidelobes. An attempt was made to avoid this problem by using the parameter Σ |I_n|³. However, this approach introduces the further problem that this parameter is insensitive to sign and tends to amplify negative peaks arising from additive noise. The presence of the beam artifacts confuses the image and makes the search for the sharpness peak more difficult. It is interesting to compare the results of this sharpening algorithm with others using the global image parameter of "entropy". The maximum entropy method (MEM), suitable for radio-astronomy imaging (Gull and Daniell [43]), maximizes the image parameter -Σ I_n log(I_n). The process includes an implied deconvolution of the beam pattern, as the entropy can only be calculated for a wholly positive image.
Skilling's MEM results [42] are similar to these sharpening results in that several local maxima in the entropy parameter are observed, and the algorithm is unable to choose the correct one without further information about the object. Unconstrained recovery leads to the symmetric, zero-phase image. It is concluded that it is not possible to construct satisfactory images from LBI observations by maximizing a simple global image parameter such as sharpness unless the phase errors are small. These parameters have several local maxima when the visibility phase errors are large, and it is not possible to distinguish the correct one, as the zero-phase condition has the global maximum. In theory a local extremum of the sharpness exists for the correct image, and this has been demonstrated by experiment. It is necessary, however, to incorporate further a priori information about the object into a correction algorithm to obtain the "correct" image. This is of particular interest as it shows that the widely proclaimed maximum entropy method, as well as the sharpness technique, are not, by themselves, solutions to the LBI imaging problem with large phase errors. Clearly, the sharpness method is not generally reliable for phase recovery. The results are perhaps useful for cases where the errors are small; however, many other methods also work well in these cases. The tendency to produce the zero-phase image, or an image that is more peaked than the original, is not helpful. It is significant that the method can generate reasonable images which are not correct and yet are consistent with the phase-closure constraint. Probably the most significant practical thing to note at this point is that the sharpening algorithm does not get any better results than other methods for an equivalent amount of computation.
As is described in the next section, the LBI imaging techniques of "closure-phase", "self-calibration" and "CLEAN" have also been tested for this study. These methods make use of additional constraints on the image to avoid the zero-phase solution. The principal constraints are that the object is positive and confined to a small portion of the field of view. These conditions, by themselves, were found sufficient to solve the imaging problem, and the introduction of an additional global image constraint such as sharpness or entropy is not necessary. In further experiments, it was possible to combine a version of the maximum entropy method [43] with the confinement and closure-phase constraints to form an imaging system. However, there did not seem to be any advantage to this technique over methods that did not incorporate the entropy constraint.

FIGURE D-1 Adaptive Optic System

FIGURE D-2a Sharpness Parameters for Images (A. CTB1 Sky; B. Undistorted Image (with beam))
C. Zero Phase - Autocorrelation Image

FIGURE D-2c Sharpness Parameters for Images
[Figure D-3: Sharpness Data Processing]

[Figure D-4: Sharpness program operation]

[Figure D-5: Variation in Sharpness Parameter with Antenna Phase]

[Figure D-6: Sharpened Image]

[Figure D-7: Sharpened Image (XCTB1) — Undistorted Image; Distorted Image with 10% (36°) Phase Errors; Sharpened Image (22°)]

[Figure D-8: Sharpened Image — Zero Phase Image; Sharpened Image]

[Figure D-9: Sharpened Image (S101) — Undistorted Image; 100% Phase Errors; Sharpened Image]

E Iterative Reconstruction Methods

At this point we move to a discussion of the "standard" LBI image reconstruction techniques.
These methods have been used and have evolved over the years of LBI development, and are now, in varying implementations, in regular use by LBI synthesis observers. While these methods might be considered standard, in practice each observer uses his (or her) own implementation. One of the objectives of this study has been to learn how the various (reported) methods are related and to see which features of each are important. This section will outline the experiments done to test the three forms of the LBI algorithms: image constraint, phase-closure, and self-calibration. These are, in fact, all of the same class of algorithm with differing constraint conditions applied to the image. Because these are the basic methods of LBI imaging, some general introduction and development of these topics appears in section A-2.2b. This section is, therefore, limited to a more detailed discussion of the experiments and the results.

The first part of this section (E-1) establishes the constraint condition on the object that is required for successful imaging. It shows that if the object is confined to somewhat less than 1/2 the area of the field, then there is sufficient information for a solution. It is necessary to make the additional constraint that the image brightness be confined in order to have an algorithm for finding the result. The second part (E-2) then goes on to show that an iterative algorithm which properly applies the confinement constraint can be successful for LBI imaging. The important point to note is that the confinement constraint must be properly applied, and the CLEAN algorithm is a powerful technique for this application.
Earlier experimenters who tried this approach were not successful because of failure to properly apply the image confinement constraint. The simple image constraint algorithm becomes less effective as the object becomes more extended, and the peculiar LBI data constraint of closure-phase must be included for imaging these objects. The Phase-Closure and the Self-Calibration algorithms include this additional constraint. The phase-closure method is outlined in section E-3, and the significant result is that this method was found to be sensitive to the array design. Certain layouts of the antennas in the array do not allow the selection of a suitable reference antenna required by the phase-closure method, and imaging becomes impossible for these arrays. Section E-4 outlines the self-calibration method, which incorporates the closure-phase constraint but in a way that does not require the choice of a reference antenna. This method was found to be insensitive to the array design. This section also includes a slightly modified development of the algorithm which allows results to be obtained even when the initial model is completely "ignorant" of the object. It is concluded that the self-calibration method is really a form of the confinement constraint reflected into the aperture domain, and that the method will yield very accurate results very quickly for suitably confined objects. The last section (E-5) reports on experiments in which the maximum entropy method of deconvolving the beam was substituted for the usual CLEAN technique.
This new combination shows that the alternative method can be used if the confinement constraint is also applied; however, no advantage was found in using this alternative technique. The contribution of this section has been the development of a "generic" LBI imaging algorithm, its testing with a variety of images and constraints, and the resulting conclusion that the confinement constraint is fundamental to the recovery process.

In contrast to the "sharpness" and "speckle" techniques discussed in previous sections, which process the data without the assistance of a-priori information about the object, these standard methods make a great deal of use of constraints placed on the acceptable image. In essence, these methods attempt to compensate for the unknown antenna phase errors by placing constraints on the reconstructed image. The difficulty with the process is that the constraints are in the "image" domain, whereas the data is collected in the "aperture" domain, and a Fourier Transform is required to go from one domain to the other. The problem is also complicated by the sparse sampling of the aperture, which produces a complicated beam that must be deconvolved from the image before the image constraints can be (properly) applied. The whole effort is to find a practical method to recover (acceptable) images from the measured data and the a-priori "information".

There are essentially two methods used by radio-astronomers for LBI reduction. These can be called "phase-closure", developed by Readhead & Wilkinson [20] and Readhead et al. [21], and the more recently developed "self-calibration" of Cornwell & Wilkinson [30] (CW).
These algorithms are quite similar in many respects, but the self-calibration technique gives superior results in addition to being easier to use. Both of these algorithms are evolutionary developments of early work on modeling LBI data and algorithms by Fort & Yee [31] (F&Y). Outside the LBI sphere of study, an image constraint algorithm described by Fienup [32] has become widely used. This is almost identical to the earlier F&Y method. As all the algorithms are quite similar in overall structure, this section will begin by outlining the "generic" method of solving these problems, and then proceed to outline the details of each technique. This material is also discussed in some detail in section A-2.2.

E-1 Iterative Imaging Algorithms

One way to view the process of imaging without phase information is to consider an object with M picture elements P_m. This image is to be formed by the Fourier transform of the M/2 measured complex data values C_i. Under these conditions of sampled data and image, the Fourier transform is simply a weighted sum of the complex numbers. The m-th element in the image is thus given by

    P_m = Σ_{i=1}^{M/2} W_mi C_i        (E.1)

where the W_mi are the complex Fourier weights (which are a function of the position of the m-th element in the image). For the complete image, there are M of these equations in the M/2 complex variables. If the phases of the data points are unknown, then 1/2 of the complex numbers are unknown and the image cannot be determined. If it is possible to say, however, that 1/2 of the image elements are zero, then this is just enough information (split between the zero pixels and the Fourier transform magnitudes) to allow a formal solution for the image.
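The weighted sum of equation (E.1) is just a discrete Fourier transform written out pixel by pixel. A minimal one-dimensional sketch (not the thesis code; the weight convention W_mi = exp(2πj u_i x_m) and the array sizes are illustrative assumptions):

```python
import numpy as np

def image_from_visibilities(C, u, x):
    """Form image pixels P_m from complex visibility samples C_i as the
    weighted sum of eq. (E.1): P_m = sum_i W_mi C_i, with assumed
    Fourier weights W_mi = exp(2j*pi*u_i*x_m)."""
    W = np.exp(2j * np.pi * np.outer(x, u))   # M x (M/2) weight matrix
    return (W @ C).real                       # image brightness is real

# Toy case: a unit point source at x0 has visibilities exp(-2j*pi*u*x0),
# so the weighted sum should peak at the pixel nearest x0.
M = 64
u = np.arange(M // 2)                 # sampled aperture points
x = np.arange(M) / M                  # image pixel positions
x0 = 10 / M
C = np.exp(-2j * np.pi * u * x0)
P = image_from_visibilities(C, u, x)
print(int(np.argmax(P)))              # brightest pixel is index 10
```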
It is possible, in principle, to solve the M equations to determine the M/2 remaining pixel values (and incidentally the Fourier phases). As M may be of the order of 10^3 or 10^4, this solution may require considerable effort. However, the basic condition is established that, if the object is confined to 1/2 the area of the field, then an image can be formed without phase information. The problem, of course, is that as the detailed knowledge of which pixels are zero is not available before the image is known, it is not possible to find the image by simply solving the M equations. Clearly, many solution images are possible depending on the pixels chosen to be zero. It is necessary to introduce a further constraint to select the zero pixels, and to use an iterative algorithm, to have a practical method for determining the image. The constraint which is suitable for astronomical imaging is that, in addition to occupying less than 1/2 the area, the object should be confined to compact features which are separated by more than twice their widths, so that there is no overlap in the autocorrelation image. This "confinement constraint" provides a mechanism for the choice of zero pixels, ensures uniqueness of the image [50], and allows a simple technique, discarding all pixels not belonging to a group, to be used to force a model image to meet the constraint during an iterative process. It is important to note that this is a suitable constraint for celestial images of stars on a dark sky, but it is not applicable to common terrestrial images of people and places. Two additional features of the data also influence the 1/2 limit of the object confinement.
The LBI array does not sample the full aperture, and thus there are fewer than M/2 complex data samples available. The number of zero pixels in the image must, therefore, be increased to account for the fraction of the aperture not sampled by the array. As the data samples will be noisy, this fraction should be altered to include only those points above the noise level. Many LBI imaging algorithms also make use of closure-phase information. This provides an additional set of constraint equations relating the phases of the complex samples C. For an array with N antennas, the closure-phase information provides (M/2)(N-2)/N additional constraint equations (see section B-4). In principle, it is possible for the number of non-zero pixels in the image to be increased by this number. For an image with M pixels, and an array of N antennas sampling the fraction d of the aperture, the maximum number of non-zero pixels in the image is therefore:

    M_nz = { M/2 + (M/2)(N-2)/N } d  =  d M (N-1) / N        (E.2)
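Equation (E.2) is easy to mis-read, so a small arithmetic check may help; the numbers below (image size, antenna count, sampling fraction) are purely illustrative:

```python
def max_nonzero_pixels(M, N, d):
    """Maximum number of non-zero image pixels from eq. (E.2): M/2
    amplitude samples plus (M/2)(N-2)/N closure-phase constraints,
    scaled by the fraction d of the aperture actually sampled."""
    return (M / 2 + (M / 2) * (N - 2) / N) * d   # equals d*M*(N-1)/N

# Example: a 64x64 image (M = 4096 pixels), N = 8 antennas, d = 0.3
print(round(max_nonzero_pixels(64 * 64, 8, 0.3)))   # 1075
```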
The beauty of the i t e r a t i v e LBI imaging methods i s that they are able to apply the confinement constraint without knowing i n d e t a i l beforehand how the object d i s t r i b u t i o n i s to be confined. I t can be seen that the underlying objective of the LBI imaging algorithms i s to s e l e c t which p i x e l s i n the image to set to zero and thus the confinement con s t r a i n t i s of fundamental importance. Note that i f closure-phase information i s included with well sampled data for a confined object, then the image w i l l be "overconstrained" and the accuracy of the f i n a l image can be independent of the presence o f phase errors i n the data. The i t e r a t i v e algorithms developed for LBI imaging are a l l members of a general c l a s s of deconvolution algorithms. These are reviewed i n some d e t a i l by Schafer, Mersereau & Richards [44]. Their development reaches s i m i l a r conclusions to those described above. They also describe c r i t e r i a for convergence of the algorithms and the basic condition i s to assure that the model becomes closer to the correct image with each i t e r a t i o n . I t i s not always possible ,to f i n d a way to do t h i s , however i t i s shown i n [443 that f o r a confined p o s i t i v e object, confinement and p o s i t i v i t y constraints on the image w i l l be s u f f i c i e n t for convergence. The LBI algorithms have i n p r a c t i c e found that the CLEAN algorithm i s an e f f e c t i v e way to apply the constraints to the model. The diagram of figure A-2.1 o u t l i n e s the generic i t e r a t i v e imaging algorithm as used for LBI, The algorithm begins with the measured data and a E I t e r a t i v e Reconstruction Methods Page 154 set of model v i s i b i l i t i e s representing a f i r s t guess about the image. These are then combined, in a way which depends on the d e t a i l s of the technique, to y i e l d a set of "hybrid v i s i b i l i t i e s " . 
These are in the coordinates of the data collection process, and they must be resampled to a regular rectangular grid to allow further processing. The gridded data can then be (easily) Fourier transformed to the image domain. In the image domain, two operations are performed. The first is to deconvolve the beam, as well as is possible, and the second is to enforce the image domain constraints. This last means the confinement and positivity constraints, and is accomplished (usually) by throwing away any features of the hybrid image that violate the constraint conditions. This constrained model image is then Fourier transformed and resampled to yield a new set of model visibilities (in the data coordinates) for the next iteration. The iterations are ended for a number of reasons. Perhaps the most satisfactory conclusion is when the size of the features being added or discarded, when the image constraints are applied, falls below a threshold that has been set from other estimates of the noise in the data. For the experiments of this study, however, the endpoint could be easily determined by comparing the model with the original object. This generic algorithm was implemented as a series of computer programs, and the process was then tested using various combinations of constraints and classes of object. The tests on the confined and un-confined basic classes of object, and the three algorithms of image constraint, phase-closure, and self-calibration, will be discussed in the next sections.

E-2 Image Constraint Algorithms

The simplest process for image reconstruction with unknown phases is that discussed by Fienup [32], based on earlier work by Gerchberg & Saxton [33]. This is a similar procedure to that used by Fort & Yee [31], although it appears to have been derived independently for use in electron microscopy.
This application was developed for cases where only the amplitudes can be measured and the phases are complete unknowns. (LBI has the "advantage" that phases can be measured, but the numbers are wrong.) For this algorithm, the operation of combining the model with the data simply requires that the hybrid visibilities be formed by using the amplitudes from the data and the phases from the model (see figure A-2.1). These hybrid visibilities are Fourier transformed, and the resulting "hybrid image" is then corrected by forcing the brightness to be positive and confined to a small portion of the field of view. This constrained image is then used as the model image for the next iteration. The iterations are halted when the hybrid image meets the image constraints within a prescribed tolerance and the change in size of the features between iterations is below a threshold. This is described in more detail in section A-2.2b.

There are two basic practical problems with this algorithm. The first involves selecting the areas of the field to force to zero, and the second involves the selection of the initial (starting) model. It was found from the experiments that, if the object was suitably confined, then the selection of the "box" to confine the image brightness was not critical. An initial arbitrary box could be defined for the first few iterations, and then the box could be expanded to fill the whole field once the first few features of the image had been recovered. The CLEAN algorithm effectively uses the image itself to define the confining regions. The effect of the initial box is simply to define the location of the brightest feature in the image.
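The loop just described (data amplitudes, model phases, then positivity and confinement in the image domain) can be sketched as a toy one-dimensional error-reduction iteration. This is not the thesis implementation: there is no gridding, no beam, and no CLEAN, and the confining "box" is a fixed mask chosen in advance:

```python
import numpy as np

def image_constraint_recovery(amp, box, n_iter=200, seed=0):
    """Gerchberg-Saxton / Fienup-style error-reduction loop: keep the
    measured Fourier amplitudes, take phases from the current model,
    then force the hybrid image to be positive and confined."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, amp.shape)  # random starting phases
    vis = amp * np.exp(1j * phase)
    for _ in range(n_iter):
        img = np.fft.ifft(vis).real                # hybrid image
        img[~box] = 0.0                            # confinement constraint
        img[img < 0] = 0.0                         # positivity constraint
        model_vis = np.fft.fft(img)                # model for next pass
        vis = amp * np.exp(1j * np.angle(model_vis))  # hybrid visibilities
    return img

# Toy object: two point sources confined to the left quarter of the field;
# the measured phases are discarded entirely, as in the text.
M = 64
obj = np.zeros(M)
obj[5], obj[11] = 1.0, 0.5
amp = np.abs(np.fft.fft(obj))
box = np.zeros(M, dtype=bool)
box[:M // 4] = True
rec = image_constraint_recovery(amp, box)
print((rec >= 0).all() and rec[M // 4:].max() == 0.0)   # True
```

Note that, as with the experiments described above, the recovered image may appear shifted or reflected (the 180 degree ambiguity), since the amplitude data alone cannot distinguish these cases.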
The initial model is a somewhat more difficult problem. Choosing a simple model, for example a Gaussian shape at the center of the field, is not a good choice, as this model has zero (or constant) visibility phases, and will simply give the zero-phase image as the hybrid map. The symmetry of this image cannot be broken unless the confining box can be placed off-center. It was found in practical experiments that the symmetry could be broken by starting with a model image with random visibility phases. (The measured phases are as good a choice as any for these.) This effectively breaks the symmetry from the beginning, but it has the additional difficulty that the peak of the reconstructed image will form at the brightest point (component) of the initial random image. It may be necessary to recenter this peak to lie inside the confining box as the iterations proceed. It was often found that the recentering was necessary only for the first one or two hybrid images when the process was proceeding correctly. Fort & Yee established the size of their confining box by trimming all those hybrid-image points that are not "attached" to points which are above an arbitrary threshold. In the experiments for this study, the Fort & Yee technique was improved by using the CLEAN algorithm to separate the image components. This proved to be a significant improvement since, while Fort & Yee report little success with their technique, the modified approach was able to recover images well in the simulations. The reason for the success with the CLEAN algorithm is that it selects image components with proper regard to the confusing effects of the beam sidelobes in the hybrid image.
Figures E-1 and E-2 show images recovered from data with large phase errors by the image constraint algorithm. These examples started with data with 110% phase errors (that is, the antenna phase errors were random numbers uniformly distributed in the range 2.2π). The initial model was formed from data with a random set of phases. Figure E-1 shows a simple four point image. The true image is shown together with an outline of the box confining the sources. This is a "dirty" image in that the beam has not been deconvolved. In this experiment, the image constraints were applied by simply zeroing all negative points and all points outside the box in the hybrid image. (The beam was not deconvolved before the constraints were applied.) Note that the outline of the box shows up quite quickly as the iterations proceed, but the component distribution is random inside the box. In this case the convergence occurred quite quickly between iterations 30 - 35 and was allowed to continue to 40 to be sure it would not diverge. The recovered image is very good considering that the phase information in the data is entirely ignored. Note that the image has been recovered up-side-down, indicating an ambiguity of 180 degrees in the recovered phases.

Figure E-2 shows a version of the CTB1 image which was recovered by the simple image constraint method. This figure also shows the true image and the confining box, the initial distorted image, and the final recovered image. These images have had the beam deconvolved by the CLEAN algorithm. In this case the 200th trial image is quite a good representation of the true image. Note that in this case the confining box cuts quite close to several of the features, and the reconstructed image has "moved" itself to minimize the edge distortion.
The reconstruction of the image tended to be slow, and was slower for more complicated images. Convergence to the final image is asymptotic: the closer it gets, the fewer the errors that can be corrected by the positivity constraint. Fienup [32] and Schafer, Mersereau & Richards [44] discuss "accelerator" techniques for the basic algorithm which alleviate the problem by over-correcting based on previous corrections. Although a simple form of these enhancements was tried with the LBI imaging simulations, it appeared to give no advantage (especially when compared with the addition of closure-phase information).

The recovery process proceeds in three phases. The first is a more or less random search for a starting point. Once a reasonable starting point is achieved, movement is quite quick to a crude image; this is then followed by a much slower movement to a detailed correct image. Part of the reason for the slowness of final convergence is the accuracy with which the CLEAN algorithm is able to correctly deconvolve the telescope beam. Noise in the data limits the success of the CLEAN algorithm (i.e. how deeply it can clean sidelobes) and this will limit the accuracy of reconstruction. The recovery process will only "converge" as long as the CLEANer is able to correctly add new components to the model image on each iteration. The theory [44] behind the convergence of the algorithm requires the CLEAN to reduce the difference between the model and the true image on each iteration.
It is most important to note from this experiment that the correct image (with a 180 degree rotational ambiguity) can be recovered from visibility data with large phase errors by simply requiring the image to be positive and confined. This provides some perspective for examining the "phase closure" and "self-calibration" algorithms, which make use of the (peculiar) LBI data constraint of closure-phase. The addition of this information, while useful, is not necessarily the driving force behind the reconstruction process, although it can speed up the recovery.

In closing this section on simple recovery algorithms, it is worth noting that this simple method does not place any requirements on the data collection process. Because it entirely ignores the measured visibility phases, the method can be used on aperture data that is not all collected at the same time. The D.R.A.O. synthesis telescope, for example, collects a full aperture of data in a series of 35 nights (12 hour observations). The image constraint algorithm could thus be used to "improve" images from this instrument. This is not necessary in practice, however, as the telescope does not suffer from large antenna phase "errors". Note, however, that the following more complicated algorithms, which involve the closure-phase constraint on the data, can only be used to correct data where all the correlations are collected at the same time. This is because of the antenna-based nature of their corrections, which applies only to simultaneous measurements. This image constraint algorithm, because it does not use the measured phase information, can work with only the gridded version of the data.
The gridding and the resampling operations depicted in the generic diagram can thus be null operations for this technique. As the gridding, in particular, can be a very time consuming operation, there is a significant saving in operating time by skipping this step. For many images, it would be optimum to use the gridded image constraint algorithm alone at first to establish the major features of the image. Once these were established, the algorithm would be changed to include the closure-phase information using the original data and the necessary gridding conversions.

E-3 Phase Closure Method

The next step in the complexity of the image recovery process is to introduce the closure-phase constraint on the data. The details of this constraint are explained in section B-4; this section will outline its use in imaging algorithms. The processing method has evolved over the years. It was first used only for phase-recovery by Readhead & Wilkinson [20], and later for both amplitude and phase recovery by Readhead, Walker, Pearson, & Cohen [21]. This development was based on early work by Jennison [17]. Essentially, the closure-phase constraint recognizes that the phase errors enter the data at each antenna, and that therefore the errors cancel in sums of the visibilities for a set of baselines defined by three antennas. This is a property of the data collection process in the aperture domain, and the trick is to combine this property with the constraints placed on the image in the image domain. Note that for an array with N antennas, there are N(N-1)/2 (measured) visibilities but there are only N(N-1)/2 - (N-1) = (N-1)(N-2)/2 closure sums. Thus, there is not enough information in the closure sums for a complete solution.
Additional a-priori information about the object must be used to make up for the unknowns. Readhead and Wilkinson [20] chose to include the closure-phase information during the combination of the model and data phases in the following (arbitrary) way. For an array with N antennas, the "hybrid" phases would be selected from the model for N-1 baselines, and the remaining (N-1)(N-2)/2 baseline phases would then be determined by these model phases and the closure-phase sums of the data. Typically the initial N-1 model phases are selected by choosing a reference antenna (which defines N-1 baselines). These selected phases from the model are then combined with the data closure sums to solve for the remaining (N-1)(N-2)/2 hybrid visibility phases. The closure-phase sums are defined as:

    S_ijk = d_ij + d_jk + d_ki        (E.4)

in terms of the data phases d, where the indices signify the antenna numbers for the correlation. Note that the order of the indices is important, as d_ij = -d_ji. If the model phases for reference antenna r are denoted by M_rk, then the hybrid phases are given by:

    Q_ij = S_ijr + M_rj - M_ri        (E.5)

for i, j not equal to r, and by:

    Q_ij = M_rj        for i = r.        (E.6)

The phases for the visibilities can thus be calculated for all baselines, given the closure-phase sums and the model phases M_rk. These define a suitable set of phases which can be combined with the data amplitudes to form the hybrid image, where the image domain constraints can be applied. Although the phase closure algorithm is often successful, it may not converge without some guidance or supervision to steer it in the right direction. The essential problem is that an arbitrary choice is involved in the selection of the model visibilities using a reference antenna.
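Equations (E.4)-(E.6) can be verified with a small sketch. The check below (the array size and the toy "true" phases are my own choices) confirms the key property: antenna-based errors cancel in the closure sums, so with a perfect model on the reference baselines the hybrid phases reproduce the true phases exactly:

```python
import numpy as np

def hybrid_phases(d, Mr, r):
    """Hybrid phases from eqs. (E.4)-(E.6): Q_ij = S_ijr + M_rj - M_ri
    for i, j != r, where S_ijr = d_ij + d_jr + d_ri is a closure sum,
    and Q_rj = M_rj on the reference baselines themselves."""
    N = d.shape[0]
    Q = np.zeros_like(d)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            if i == r:
                Q[i, j] = Mr[j]
            elif j == r:
                Q[i, j] = -Mr[i]                  # d_ij = -d_ji
            else:
                S_ijr = d[i, j] + d[j, r] + d[r, i]   # error-free closure sum
                Q[i, j] = S_ijr + Mr[j] - Mr[i]
    return Q

# Antenna-based errors e_i corrupt the data as d_ij = t_ij + e_i - e_j, yet
# with a perfect model (Mr = true reference-baseline phases) the hybrid
# phases recover the true phases despite the large errors.
rng = np.random.default_rng(1)
N, r = 5, 0
x = rng.uniform(-1, 1, N)             # toy antenna "positions" -> true phases
t = x[:, None] - x[None, :]           # antisymmetric true phase matrix
e = rng.uniform(-np.pi, np.pi, N)     # large antenna phase errors
d = t + e[:, None] - e[None, :]
Q = hybrid_phases(d, t[r], r)
print(np.allclose(Q, t))              # True
```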
Not all choices are equivalent, and a bad choice can prevent convergence. To illustrate the problem of the choice of reference antenna, figure E-3 shows a series of hybrid maps made from data with the same errors and initial model, but with different reference antennas. In this case the "model" was a Gaussian shape coinciding with the bright peak in the CTB1 image [12]. The eight antenna choices are shown and, in addition, the choice where the model baselines are selected as those between adjacent antennas. The "true" image is also shown for reference. Note that these are "dirty" images, in that the telescope beam has not been removed. As can be seen from the figure, the resulting hybrid image varies from recognizable to incomprehensible depending on the choice of reference antenna. Clearly some reference antenna choices are better than others. These illustrations show that even the best hybrid image contains a substantial number of incorrect points which somehow must be removed in the formation of the model for the next iteration. In terms of the measured (RMS) error between the hybrid map and the true image, they all have a larger error than the initial model. A visual examination, however, indicates that several of the hybrid images show more of the structure of the source than of the model. The RMS error parameter is not, in this case, a useful number for judging the error in the hybrid image. An attempt was made to understand the effect of the reference antenna and to see if it was possible to develop a general criterion for its selection. It was noted at the outset that the adjacent antenna choice never seemed to be a good selection.
To study the effect of the reference antenna, the "difference" between the hybrid maps and the true map was calculated. This was a simple parameter obtained by subtracting the two maps point-by-point and dividing the sum of the absolute values of these differences by the sum of the correct image. This yields a normalized (dimensionless) error parameter sufficient for these tests. This parameter, called the "hybrid map error", was then calculated for a number of reference selections to see if any pattern existed. Based on initial tests, it seemed that the "best" choice of reference antenna (the one that gave the smallest error parameter) was the antenna for which the average length of all the baselines involving that antenna was the smallest. Further tests suggested that a better choice was the antenna with not only the shortest average baseline length but also the smallest spread of baseline lengths; that is, the antenna whose baselines cover the smallest area of the aperture. Plots of these parameters for several images and a number of arrays are shown in figures E-4 and E-5. These figures indicate a random spread of points, from which it seems that no simple general relation exists between hybrid map error and baseline lengths. There is too much variation between arrays and images for the relation to be meaningful, and an antenna with a narrow baseline coverage would not always be the best choice. A bit of reflection on the success of the image constraint method, which gives good results when all phases are taken from the model, suggests that the best choice of reference antenna is the one which selects the set of visibilities that best represents the model, in the sense that the selected visibilities contain most of the structure in the model.
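The "hybrid map error" just defined is simple enough to state in code. The sketch below is a modern rendering with a hypothetical function name, not the thesis program:

```python
import numpy as np

def hybrid_map_error(hybrid_map, true_map):
    """Normalized 'hybrid map error': the sum of the absolute
    point-by-point differences between the two maps, divided by the
    sum of the true image.  Dimensionless."""
    return np.abs(hybrid_map - true_map).sum() / true_map.sum()
```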
The best such choice could be expected to differ for each object (or state of the image in the reconstruction) and for each LBI array. As each object has different, unpredictable, important points in the aperture, this is not an easy choice. It would perhaps be possible to embellish the basic phase closure algorithm to test all possible reference choices and select the one which best reproduces the model at each iteration. The approach adopted for this study, which was usually successful, was to change the reference antenna for each iteration: the reference antenna number was simply incremented for each pass. While this clearly will not converge as quickly as when the best reference is always used, it is much better than using only the worst reference. An "optimum" method would be to observe the first several iterations carefully as the reference changes and note those antennas that provide a poor hybrid image. These antennas would then be skipped in the cycle of reference selections for the remaining iterations until convergence. Another approach might be to alter the algorithm so that a choice of reference is avoided altogether. One possible way to do this is to fit the model to the closure phases by another method. Fort and Yee [31] suggest one such method, and a similar procedure was tried for this study. In this case, instead of choosing a reference set of phases from the model, the difference between the model and the data closure sums was calculated, and one-third of this difference was applied as a correction to each of the model phases in the closure set.
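A single pass of this closure-forcing scheme might look as follows. This is an illustrative sketch under the conventions assumed earlier ((N, N) antisymmetric phase arrays); the function name is hypothetical and the sketch is not the thesis implementation:

```python
from itertools import combinations
import numpy as np

def distribute_closure_corrections(model_phase, data_phase):
    """One pass of the closure-forcing scheme described in the text:
    for every antenna triple, one-third of the difference between the
    data and model closure sums is applied as a correction to each of
    the three model phases in that closure set."""
    N = model_phase.shape[0]
    corr = np.zeros_like(model_phase)
    for (i, j, k) in combinations(range(N), 3):
        s_data = data_phase[i, j] + data_phase[j, k] + data_phase[k, i]
        s_model = model_phase[i, j] + model_phase[j, k] + model_phase[k, i]
        delta = (s_data - s_model) / 3.0
        for (a, b) in ((i, j), (j, k), (k, i)):
            corr[a, b] += delta
            corr[b, a] -= delta       # keep the array antisymmetric
    return model_phase + corr
```

For a three-antenna array (one triangle) a single pass forces the model closure sum exactly; with more antennas each baseline accumulates corrections from every triangle containing it, so one pass only moves the closure sums toward the data, which is consistent with the slow iteration reported below.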
Each baseline receives a correction from several closure sets, and the total application of the corrections yields a hybrid image with (almost) the same closure sums as the data. Figure E-6 shows a hybrid image made with this procedure. Note that this is a much better result than any of the images made with reference antennas shown in figure E-3.

This was the only test image used with this method. The major practical difficulty with the approach was that the closure-forcing process was very slow to calculate, requiring about 15 minutes for each "fix". This could not compete with the self-calibration algorithm. Although several test images were recovered using the phase closure algorithm, the results were not entirely satisfactory. In several cases it was found that changing the array layout made a previously recoverable image unrecoverable. This difficulty led to an early abandonment of the process, and emphasis was placed instead on the self-calibration method. The phase closure algorithm is judged to be unsuitable for general LBI imaging.

E-4 Self-Calibration

Although the phase closure algorithm has seen wide use by astronomers, and may be credited with making LBI practical, the arbitrary mechanism for including the model visibilities with the phase closure sums, and the consequent sensitivity to the choice of the reference antenna, have led to a search for a better technique. Cotton [34], Rogers [35] and Cornwell & Wilkinson [30] have provided the development of an algorithm that has become known as "self-calibration".
This method is now quite extensively used with connected interferometers such as the VLA, MERLIN, and Westerbork, where it is successful in recovering very high dynamic range images in the presence of the small antenna errors characteristic of those instruments. Legg [41] also reports good results from the use of the algorithm on simulated LBI data. This section will outline (in some detail) how the method operates. While the algorithm as outlined by Cornwell & Wilkinson [30] is generalized for both amplitude and phase error corrections, and allows different ranges of errors for each antenna, in the beginning stages the process treats the amplitude and phase corrections separately. As the LBI data is principally deficient in visibility phases, this discussion will be limited to the use of self-calibration for imaging under conditions where the phase errors are large and uniformly distributed. This simplification greatly eased the stuffing of the simulation programs into the (small) DRAO computer. If a practical LBI data recovery system were being built, it would of course include both amplitude and phase "self-calibrations", but the phase corrections would still dominate the ability to reconstruct the correct image.

The self-calibration method is another way of including the closure-phase constraint when combining the model phases with the data. The process resembles the calculations regularly done to calibrate connected interferometers using unresolved calibrator sources; however, as it works instead with the image itself, it has been given the name "self-calibration".
The method begins by ignoring the phase closure sums and recognizing that the phase errors originate at each antenna. The problem is thus reduced to finding a correction for each antenna rather than a correction for each baseline. This change in point of view is equivalent to the phase closure approach, but it simplifies the problem to finding N (actually N-1) antenna corrections instead of N(N-1)/2 baseline corrections. The self-calibration method thus derives N antenna corrections and applies these to the data without disturbing the closure sums. The following is a simple derivation of the basic self-calibration process for phase errors. The process begins with statements about the data and the problem. For an array of N antennas, there are N(N-1)/2 visibility measurements (the data) and N(N-1)/2 simulated visibilities from an initial guess at the image (the model). Let the data points be D_ij and the model be M_ij, where the indices i and j run from 1 to N with i not equal to j. These subscripts identify the antennas involved in each correlation. An approach for a solution is to choose a set of antenna corrections X_j (j = 1 to N) in such a way that, when they are applied to the data, they minimize an "error parameter" between the model and data visibilities. This is a "reasonable" thing to do as it assumes that the model contains the best available knowledge of what the image looks like. This may include the possible areas of blank sky and an estimated source distribution. It is thus a good idea to try to find a set of antenna corrections which will best re-cast the data to conform to this additional information.
It is perhaps possible to think of the antenna corrections as a set of adjustment "knobs", one on each antenna, that are twiddled until the best match between the data and the model is achieved without disturbing the closure-phase sums. It is important to note that, while these corrections may be thought of as a data domain constraint, they are in fact an image domain confinement constraint reflected into the data domain. The model image, at least initially, will be confined to a few bright features, and thus the choice of the antenna corrections is being made to confine the image as much as possible to match the model. The corrected data Q_ij is:

    Q_ij = D_ij + X_i - X_j .    (E.7)

The squared difference between the model and the data is then:

    E_ij = ( Q_ij - M_ij )^2 = ( D_ij + X_i - X_j - M_ij )^2 .    (E.8)

To find the "optimum" value of the correction for the j-th antenna, X_j, we can take the partial derivative of the summed error with respect to X_j and set it to zero (and hope for a minimum!), giving:

    0 = Σ_{i=1, i≠j}^{N} ( D_ij + X_i - X_j - M_ij ) .    (E.9)

There are N-1 of these equations, one for each antenna (minus one), each of which is a sum over i = 1 to N (i not equal to j). Rearranging:

    (N-1) X_j = Σ_{i=1, i≠j}^{N} ( D_ij - M_ij ) + Σ_{i=1, i≠j}^{N} ( X_i ) .    (E.10)

However, we only have (N-1) of these equations to solve for the N values of X_j, and some form of additional constraint is required. This will use up the remaining arbitrary degree of freedom, which sets the positioning of the image in the field of view (the origin of the coordinate system). For convenience in this derivation let us assume:

    Σ_{i=1}^{N} ( X_i ) = 0 .    (E.11)

This is equivalent to saying that the average antenna phase error is zero, and is a different assumption from that made by Cornwell & Wilkinson [30] at this point in their derivation. They assume X_r = 0 for a selected antenna, which introduces the (unnecessary) choice of a reference antenna. With this assumption we can proceed. From the assumption:

    Σ_{i=1, i≠j}^{N} ( X_i ) = -X_j .    (E.12)

If this is introduced into equation (E.10), the solution for the optimum correction X_j is:

    X_j = (1/N) Σ_{i=1, i≠j}^{N} ( D_ij - M_ij ) .    (E.13)

This says, in effect, that the (best?) estimate of the antenna phase correction (in a least squares sense) is (simply) the average of the difference between the model and the data phases for each baseline involving the antenna. On the surface, this is a simple and pleasing result.

This is a similar result to that derived by Cornwell & Wilkinson [30] for the phase corrections when the errors are equally large at all antennas. There is a difference resulting from the different assumptions made in solving for N unknowns with only N-1 equations. In these experiments the above relations were used because they were easier to calculate, but it also seemed that, when compared with Cornwell & Wilkinson's result, the above formulation gave better performance in the sense that the error between model and data, after correction, was (slightly) smaller. It also avoids the need for an implicit choice of a reference antenna in the calculations. Note that the above derivation is an approximation. The true comparison of the visibilities (rather than just the phases) includes a sine function which makes the equations unsolvable except in the approximation where the corrections are small, so that sin(x) ≈ x, or:

    sin( D_ij + X_i - X_j - M_ij ) ≈ D_ij + X_i - X_j - M_ij .

This formulation gives what appears to be a simple calculation for the estimates of the antenna phase errors. It is not so simple in practice, however. The numbers to be "averaged" to estimate the corrections are in fact angles, as they are the phases of complex visibilities, and angles are slippery things to average. In the case of short baseline (connected) interferometry, where self-calibration has been extensively applied, the phase errors are small and the initial starting model is quite good. Under these conditions the angles are all small, the sine approximation is good, and the averaging is easy to perform. For the case of LBI data, however, the phase errors are large (as large as ±180°) and the initial guess may be quite poor. The corrections are thus large and the averaging of angles is tricky to perform. As an example, consider a case where the differences between the model and data phases yield a set of numbers all around 180 degrees (e.g. +150, +170, -175, -168). A simple averaging of these numbers according to the derived formula will yield a number of about zero (-5.75), which is clearly not a good "average" angle for this set, as it points in an entirely different direction. Cornwell & Wilkinson [30] do not make clear in their paper, nor does Legg [41], whether they encountered or solved this problem. Note that their examples applied to cases with small errors and good models, so they may not have encountered the difficulty. In the regular use of self-calibration at the VLA, corrections are always small, so the difficulty would not arise. For these simulation experiments, the difficulty in calculating the angles was resolved with the following adaptive method. The computer calculated the correction angle in two ways.
The first used equation (E.13) directly, making use of the relation:

    sin((A+B)/N) = sin(A/N)cos(B/N) + cos(A/N)sin(B/N) .    (E.14)

The second calculation added the components of the angles and calculated the correction as the arctangent of the ratio of the sums of the sine and cosine components. This yields the angle of the resultant vector. This is not the "correct" angle, however, since sin(A) + sin(B) is not equal to sin(A+B) (unless the angles are very small). The program would compare the two estimates, and if they were almost the same (within about 20%) then the direct formulation answer was used. If the two differed by about π (within about 20%) then the direct formulation answer with a correction of π was used. If the two numbers were not nearly the same, then the result from the sum of components was used, since it was at least in the right quadrant. In addition, it was found useful for the algorithm to weight the phases in forming the sum for the averaging. This was done so that the large phase errors in the small-amplitude visibilities would be suppressed (not corrected) until the larger-amplitude visibilities were corrected. The weighting factor selected was the ratio of the model to data visibility magnitudes: either M/D or D/M was chosen, whichever was less than one. This factor has the effect of making all the corrections small in the initial stages when the model estimates are poor, and allowing the corrections to grow as the estimates become more accurate. In operation, the algorithm was found to perform quite well. In most cases the reconstruction began with a narrow Gaussian model, placed in the field at the expected location of the peak of the image.
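The weighted correction estimate of equation (E.13) might be coded along the following lines. This is a sketch, not the thesis program: the name and array conventions are assumptions, and it sidesteps the two-estimate comparison described above by summing each weighted phase difference as a unit vector and taking the angle of the resultant, which handles the wrap-around near ±180° in one step.

```python
import numpy as np

def selfcal_phase_corrections(D, M):
    """Estimate antenna phase corrections X_j (eq. E.13) from complex
    data and model visibilities D, M ((N, N) arrays with
    V[j, i] = conj(V[i, j])).

    Instead of averaging the phase differences directly (angles near
    +/-180 deg average badly), each difference is accumulated as a
    unit vector weighted by min(|M|/|D|, |D|/|M|), and the correction
    is the angle of the resultant vector.
    """
    N = D.shape[0]
    X = np.zeros(N)
    for j in range(N):
        resultant = 0j
        for i in range(N):
            if i == j:
                continue
            diff = np.angle(D[i, j]) - np.angle(M[i, j])
            a, b = abs(D[i, j]), abs(M[i, j])
            w = min(a / b, b / a) if a > 0 and b > 0 else 0.0
            resultant += w * np.exp(1j * diff)
        X[j] = np.angle(resultant)
    return X
```

The corrected data would then be formed as Q_ij = D_ij + X_i - X_j, as in equation (E.7).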
A centered model can be used, but then the reconstruction will have its peak centered, and with the small image sizes available in the simulations this would not permit many asymmetric images to be recovered without "wrapping" around the edges of the field. A practical data processing system would be able to work with image fields sufficiently large to allow the image peak to be centered. It was found that the time taken for recovery was strongly dependent on the accuracy of the initial model. The presence of a strong point component in the image also greatly speeded convergence, as the initial Gaussian model could represent the point component well. Three examples of images recovered with a combination of self-calibration and CLEAN are shown in figures E-7, E-8 and E-9. These examples are all 64x64 pixels in extent, and the recovery was simulated with an LBI array of 8 antennas. Figure E-7 shows the CTB1 object [12] with the prominent point source component removed. This object is severely underpopulated in that only about 250 of the total 4096 pixels are non-zero. It was not possible to recover this (or the following two examples) with only the confinement constraint. In the case of figure E-7, the result is almost an exact duplicate of the object; the accuracy was mainly limited by the extent to which CLEAN could restore the image details. Recovery can be swift and accurate if the object is well confined. Figure E-8 shows the HC-40 object (courtesy of Jim Caswell), which is more extended and has approximately half of its pixels non-zero. This reconstruction is also quite accurate, although it required more than 10 times as much computation as figure E-7, and some of the details are not completely correct. The recovery process becomes longer as the object becomes more complex.
Figure E-9 shows an (unfinished) recovery of another extended object (S142, courtesy of Lloyd Higgs), which also has about half of its pixels non-zero. This example was developed using the new CLEAN algorithm described in section B-2. Other experiments using the standard CLEAN algorithm had found this object impossible to recover, as the extended portion could not be restored. The new algorithm for CLEAN makes possible the recovery of more extended objects than was previously possible.

E-5 Filtering and De-convolving the Beam

The overall reconstruction process, as illustrated in figure A-2.1, involves combining the model and data visibilities, forming a hybrid map from the result, and then operating on this hybrid map to apply the image domain constraints. There are essentially two problems here. Due to the sparse sampling of the aperture, the instrument has a complicated beam with many sidelobes. The hybrid map, therefore, in addition to having many large errors and artifacts due to the phase errors, is also complicated by the presence of the beam sidelobes. Before the image constraints can be applied, the beam must somehow be removed (deconvolved). Consider for example an array which does not sample the zero-length baseline (no synthesis array can sample the zero spacing without having antennas merge together). The resulting image from the Fourier transform of the visibilities will have zero average value (because the DC term is missing). Zero average value implies quite a few negative regions, which is contrary to the postulate that the sky has only positive brightness. Removing these effects is the job of the "deconvolve-the-beam" operator (CLEAN or MEM) indicated in figure A-2.1. It is not an easy task to perform, however.
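The missing zero-spacing effect is easy to demonstrate numerically. The following is a tiny check (in modern Python/NumPy, not part of the thesis software): deleting the DC term of the visibilities forces the dirty image to zero mean, so a strictly positive sky acquires negative regions.

```python
import numpy as np

# A strictly positive toy sky.
sky = np.zeros((16, 16))
sky[6:10, 6:10] = 1.0

# Transform to visibilities and delete the unsampled zero spacing.
vis = np.fft.fft2(sky)
vis[0, 0] = 0.0                      # DC term missing

# The resulting dirty image has zero mean and hence negative regions.
dirty = np.fft.ifft2(vis).real
```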
During the early iterations, when the phase corrections are still unknown, deconvolution is impossible because the beam in the dirty image is not the known beam, but rather a beam which includes the (unknown) phase errors and is too extended to use. During the first iterations, therefore, the deconvolution algorithm can only pick out a few of the brightest points in the image and hope that they are parts of the object. The standard algorithm to perform this task is CLEAN, which is discussed in some detail in section B-2. When used in this mode it must be carefully controlled and terminated much earlier than if the beam were accurately known. In normal operation, there are three criteria for stopping CLEAN: when the residual peak goes below a preset limit, when the RMS of the residual goes below a limit, or when the residual peak stops decreasing in amplitude. This last criterion is a "safety net", and termination for this reason usually indicates an attempt to CLEAN below the noise level. The RMS limit was found to be quite useful for the LBI image processing. The limit value would be set quite high for the initial iterations, forcing early termination. Then, as the corrections became better, the level was reduced to allow restoration of more detail. The main function of CLEAN during the first iterations, however, is not to deconvolve the beam, but to separate the hybrid image into a limited number of features, and thus to ensure the separation requirement needed by the theory to permit the image recovery. It was found that quite often the output from CLEAN was not satisfactory, and it was supplemented by a thresholding operation which removed all the components below a set fraction of the peak. This had the effect of reducing the number of components and removing any negative bits CLEAN may have put in by mistake.
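The three termination tests might be expressed as a simple predicate evaluated once per CLEAN cycle. This is an illustrative sketch (hypothetical function, not the thesis code):

```python
def clean_should_stop(residual_peak, residual_rms,
                      peak_limit, rms_limit, prev_peak=None):
    """The three CLEAN stopping tests described in the text: residual
    peak below a preset limit, residual RMS below a limit, or residual
    peak no longer decreasing (the 'safety net', usually a sign of
    cleaning into the noise).  Returns the reason, or None to continue."""
    if residual_peak < peak_limit:
        return "peak below limit"
    if residual_rms < rms_limit:
        return "rms below limit"
    if prev_peak is not None and residual_peak >= prev_peak:
        return "peak not decreasing"
    return None
```

The RMS limit would be set high in early iterations (forcing early termination) and lowered as the phase corrections improve, as described above.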
A method suggested by Fort and Yee [31] was used to set the threshold level as a fraction of the absolute value of the largest negative component in the CLEAN density file. The presence of negatives in the CLEAN output implies errors also in positive components of similar amplitude, and thus all doubtful components are thrown away. As the phase corrections become more accurate, the negative features reported by CLEAN become smaller, and thus the trim level is automatically reduced to allow the smaller detailed features to be recovered in the end. This CLEANing/thresholding process is the agent by which the image domain constraints are applied, and the operation of the whole recovery algorithm is critically dependent on these processes. Often, several trial recoveries were made to establish a suitable set of trimming and stopping values before the process was allowed to run to completion. Although the CLEAN algorithm was found to be an effective agent for deconvolving the beam and applying the image domain constraints, it is not the only algorithm available. The maximum entropy method (MEM) of Gull & Daniell [43] is another process suitable for deconvolving the beam. The MEM is a widely proclaimed method of solving many imaging problems; however, as its implementation is not straightforward, it has not been as well tested as CLEAN. In these tests, an early version of the MEM program (ver. 1, June 1981), very kindly supplied by Steve Gull, was tested as a replacement for CLEAN in the LBI imaging simulation of figure A-2.1. When the MEM was first introduced, only the zero-phase image could be recovered. The MEM deconvolution produces a broad, smooth image which is in conflict with the confinement constraint.
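The trim rule taken from Fort and Yee [31] (threshold set by the largest negative CLEAN component) can be sketched as follows. The function name, the `fraction` parameter, and the flat component list are assumptions for illustration, not the thesis implementation:

```python
import numpy as np

def trim_clean_components(components, fraction=1.0):
    """Thresholding applied to the CLEAN component list: the trim
    level is a fraction of the absolute value of the largest negative
    component, and every component below that level (including all
    negatives) is zeroed.  As CLEAN's negatives shrink with better
    phase corrections, the trim level falls automatically."""
    comps = np.asarray(components, dtype=float)
    negatives = comps[comps < 0]
    level = fraction * np.abs(negatives).max() if negatives.size else 0.0
    trimmed = np.where(comps >= level, comps, 0.0)
    return trimmed, level
```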
It was found necessary to supplement the MEM program with a thresholding operation to select only the brightest features and ensure confinement of the model. With this supplement, simulated LBI images could be recovered equivalent to those provided by CLEAN. As the MEM never produces any negative features, however, the threshold level had to be controlled manually and gradually reduced to zero from an initial high value. The necessity to do this, plus the longer running time for MEM in comparison with CLEAN, gave an overall algorithm that was not as easy to use as CLEAN. It was shown to work and to yield equivalent results for a few images, but was not otherwise tested extensively. There did not seem to be any advantage in using the MEM technique.

E-6 Aperture Sampling Simulations

Included in the generic algorithm of figure A-2.1 are two operations ("grid" and "sample") for transforming between the coordinates of the measured data and a rectangular grid suitable for the fast Fourier transform process. These operations are necessary for algorithms that make use of the closure-phase constraint. This constraint relates the phases of the antenna signals, and thus it can only be applied to the data in the measurement coordinates (and not to the gridded approximation of the data). These are not easy conversions to perform, as it is necessary to interpolate the data between the measurement locations. The problem is especially difficult because the data are complex numbers and the phase component is difficult to extrapolate over long distances. These coordinate conversions are a significant computational burden in the iterative LBI imaging algorithms. In order to simplify the simulation experiments and to speed the operation of the programs for this study, the resampling operation was simulated in a simplified manner.
The aperture sampling tracks were taken to be linear (rather than curved as shown in figure B-1.7) and to be aligned with a grid in the aperture. (Think of "straightening out" the curved tracks and stretching them to form a series of parallel lines.) The telescope sampling points were thus made coincident with the grid points. For an 8-antenna array with 28 baseline sampling tracks, the aperture coverage was then 28 of the 33 grid lines in the aperture for a 64x64 image; the "zero" line and the "high frequency" grid lines were not sampled. This linear sampling pattern accounts for the linear sidelobe features found on some of the "dirty" map illustrations. These linear tracks resemble (slightly) the sampling tracks formed by an array with large North-South baseline components viewing a source at the equator. This simplification of the aperture sampling pattern greatly eased the task of programming, allowed the programs to be fitted into the (small) DRAO computer, and significantly speeded the testing process.

E-7 Summary of Iterative Algorithm Results

In this section the experiments with the standard LBI imaging algorithms have been described. The significant contribution has been the development of the generic algorithm (figure A-2.1) and its testing with a variety of image classes and constraints. The state of the art prior to this work was the existence of a number of separate working algorithms, with considerable uncertainty as to how they were related and as to the relative effectiveness of the constraints applied by each process.
As a result of these experiments, it is now possible to say that all of the algorithms are of the same class, and that it is the positivity and confinement constraint that is mainly responsible for the recovery of the image. Imaging is possible, without any phase information, if the object is suitably confined. The introduction of closure-phase information enables more accurate results to be obtained more quickly; however, the initial development of the main features from a poor model still relies on the confinement constraint. The operation of the CLEAN algorithm to enforce the image constraints of confinement and positivity is critical to the success of the LBI algorithms. It is perhaps not generally realized just how important a role this algorithm plays in separating the image components and successfully deconvolving the beam. The phase-closure algorithm has been shown to be sensitive to the choice of the reference antenna, and for this reason it is not always able to operate with certain array designs. It is thus not a good choice of algorithm for an LBI imaging system. The self-calibration algorithm has been shown to be insensitive to the array design, and for this reason it is a good choice. A weighting of the corrections was introduced to allow operation in cases where the phase errors are very large and the initial image estimate is poor. The standard algorithm for deconvolving the beam, CLEAN, was also replaced in the LBI imaging process with a version of the maximum entropy method. This alternative combination was found to yield results equivalent to those of the standard CLEAN when combined with the self-calibration method and an additional operation to ensure the confinement of the image.
Although this new combination was found to work, there seemed to be no benefit (other than the development of a "maximum-entropy" type image) in its use, and the implementation was slower and more difficult to use.

[Figures E-1 to E-9 appeared here; only the captions and panel titles are recoverable:]

Figure E-1  Image Constraint Reconstruction (true image and "box"; initial image; final image)
Figure E-2  Image Constraint Reconstruction, J3M array (final image positive inside the box, zero outside)
Figure E-3  First Hybrid Maps for the CTB1 model: variation with reference antenna
[Two further plots ("MCTB1 CENTER MODEL", quantities plotted against average baseline length) are not recoverable.]
Figure E-6  Hybrid Image from Averaged Closure Sums (initial image with 110% errors)
Figure E-7  Self-Calibration Recovery (XCTB1 object; recovered image)
Figure E-8  Self-Calibration Recovery (HC40, 280 iterations)
Figure E-9  Self-Calibration Recovery (incomplete) with Extended Source and Modified CLEAN (S142M object, 70 iterations)
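A short numerical illustration of the closure-phase property that underlies the hybrid-mapping and self-calibration methods discussed above: each antenna-based phase error enters a baseline phase as a difference between its two antennas, so summing the observed phases around a closed triangle cancels the errors and leaves the true closure phase. All numerical values here are hypothetical, chosen only for the demonstration.

```python
# True visibility phases (radians) on the baselines (1,2), (2,3), (3,1)
# of an antenna triangle, and the phase error at each antenna.
# All values are hypothetical.
true_phase = {"12": 0.3, "23": -0.7, "31": 0.2}
antenna_error = {"1": 2.1, "2": -0.9, "3": 1.4}

# Each observed baseline phase is corrupted by the difference of the
# errors at its two antennas.
observed = {
    "12": true_phase["12"] + antenna_error["1"] - antenna_error["2"],
    "23": true_phase["23"] + antenna_error["2"] - antenna_error["3"],
    "31": true_phase["31"] + antenna_error["3"] - antenna_error["1"],
}

# Summing around the closed triangle, every antenna error appears once
# with each sign and cancels.
closure_observed = observed["12"] + observed["23"] + observed["31"]
closure_true = true_phase["12"] + true_phase["23"] + true_phase["31"]
# closure_observed equals closure_true to machine precision, even though
# each individual baseline phase is badly corrupted.
```

This error-free quantity is what the hybrid-mapping iteration is able to hold fixed while the remaining antenna phases are solved for.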
