UBC Theses and Dissertations

Reconstruction of stellar images from correlations Shi, Xiaotian 1993


Full Text

RECONSTRUCTION OF STELLAR IMAGES FROM CORRELATIONS

by

Xiaotian Shi

B.Sc. (Electrical Engineering), Zhejiang University
M.Sc. (Electrical Engineering), Zhejiang University

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF ELECTRICAL ENGINEERING

We accept this thesis as conforming to the required standard.

THE UNIVERSITY OF BRITISH COLUMBIA
Sept. 1993
© Xiaotian Shi, 1993

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering
The University of British Columbia
Vancouver, Canada

Abstract

The problem of object reconstruction from a series of short-exposure astronomical image data observed by ground-based telescopes is addressed. The observations are seriously degraded by atmospheric turbulence as well as corrupted by photon noise, detection noise and background noise.

An approach for the reconstruction of the unknown object using the "self cross correlation" is proposed and implemented. This self cross correlation is defined as the cross correlation between an image observation and a truncated part of it. The relations among the phase of the cross spectrum, the phase of the object and the edges in the object are established. Based on these relations, some efficient and effective algorithms for retrieving the unknown phase of the object and its modulus are developed.
These algorithms are found to be insensitive to the presence of noise.

Writing the Fourier transform (FT) of an image a as |A|e^{jθa}, we call the inverse FT of the phasor part e^{jθa} the phasor-image of a. We show that the phasor-image of the self cross correlation preserves the edges of the unknown object. If the object does not contain edges, then its phasor-image represents an estimate of the Fourier phase of the unknown object.

If all the stars and sub-objects in the object have edges, then a phase retrieval method based on the iterative FT method is proposed. In this case the iterative FT method uses the edges obtained from the phasor-image of the average cross spectrum as object support bound constraints.

If some of the stars or sub-objects have edges, then a non-iterative phase retrieval method is developed. If none of the stars has edges, then a modified version of this method is used to retrieve the object phase.

We also address the anisoplanatic atmospheric turbulence case. We show that our approach can be extended to deal with the object reconstruction problem under such a condition. This is unlike the existing methods, which apply only to the isoplanatic case.

We evaluate the performance of our algorithms by comparing their signal-to-noise ratios with those of existing reconstruction methods.
The results from both simulated and real astronomical data demonstrate the superiority of our methods in accuracy and computational efficiency.

Table of Contents

Abstract
List of Figures
Acknowledgement
A Glossary of Symbols
The Explanations of Some Keywords

1 INTRODUCTION
1.1 Introduction
1.2 Past Achievements
1.3 Self Cross Correlation Method

2 LITERATURE REVIEW
2.1 The Speckle Interferometry (SI)
2.2 The Knox-Thompson Method
2.3 Triple Correlation Method
2.4 Phase Retrievals from the Phase Difference
2.5 Iterative Fourier Transform Method
2.6 Weighted Shift-and-Add (WSA) Technique

3 ATMOSPHERIC TURBULENCE
3.1 Optical Waves Through Atmospheric Turbulence
3.2 Assumptions

4 DEFINITIONS AND NOTATIONS

5 THE MATHEMATICAL MODEL OF A NOISY DEGRADED IMAGE
5.1 Phase Screen and Correlations
5.2 The Self Cross Spectrum for the Photon-Limited Speckle Images
5.3 Segmentation of Isoplanatic Patches from an Anisoplanatic Patch
5.4 The Physics of the Noise

6 THE ESTIMATION OF TURBULENCE-DEGRADED IMAGES IN THE PRESENCE OF DETECTION AND BACKGROUND NOISE ONLY
6.1 Introduction
6.2 Estimation of the Fourier Modulus of the Object
6.2.1 The Noisy Degraded Image Model
6.2.2 Restoration Via the Wiener Criterion
6.2.3 An Ad-hoc Scheme for Restoring the Fourier Modulus
6.2.4 Discussion of the Fourier Modulus Estimate
6.3 Reconstruction of the Fourier Phase of the Object
6.4 Simulations and Results
6.5 Summary

7 OBTAINING TIGHT OBJECT BOUNDS FOR PHASE RECOVERY VIA THE IFT ALGORITHM
7.1 The Importance of Obtaining Object Bounds
7.2 The Phasor-image of an Object and its Support Bounds
7.3 Edge Preservation in the Phasor-image of the Self Cross Spectrum Phase
7.4 Algorithmic Procedure
7.5 Results
7.6 Summary

8 A NON-ITERATIVE ALGORITHM FOR PHASE RECOVERY USING THE SELF CROSS SPECTRUM
8.1 Introduction
8.2 Phase Preservation in the Self Cross Spectrum
8.3 Retrieving the Phase from the Self Cross Spectrum
8.4 Reconstruction Implementation Using the Self Cross Correlation
8.5 Results
8.6 Summary

9 OBJECT RECONSTRUCTION FOR THE ANISOPLANATIC CASE
9.1 Introduction
9.2 The Fourier Modulus of the Object
9.3 Phase Recovery From the Self Cross Spectrum
9.3.1 Obtaining Sub-object Support Bounds
9.3.2 The Object Reconstruction Case When the Truncated Object Has No Obvious Edges
9.4 Results for the Anisoplanatic Case
9.5 Summary

10 CONCLUSIONS
10.1 Advantages of the Self Cross Correlation Image Reconstruction Approach
10.2 Contributions
10.3 Future Work

Bibliography

List of Figures

3.1 Young's experiment
4.1 Illustration of the M0 truncated images from g0(x, y)
5.1 An example of phase screens
5.2 The partition of an object, each sub-object f1, f2 and f3 being located in an isoplanatic patch
5.3 An example of the sub-image and truncated sub-image regions resulting from three isoplanatic patches
6.1 (a) Object; (b), (c) two of the noisy, degraded, short-exposure images
6.2 (a) Typical simulated PSF; (b) section of the PSF in (a)
6.3 Restored images from 10, 20, 40, 50, 60, and 100 observations with our filter [(a)-(f)] and with speckle interferometry [(A)-(F)]
6.4 Estimated RMS error corresponding to different numbers of short-exposure images with speckle interferometry (1) and the ad hoc filter (2)
7.1 Two computer-simulated objects
7.2 The phasor-images of the objects in Fig. 7.1
7.3 An example of edge preservation in the phasor-image of the self cross correlation
7.4 The cross correlation (h ⋆ p)
7.5 The overlapping area of h and p via separation τ
7.6 The 3-D plot of the phasor-image of the cross correlation (h ⋆ p)
7.7 The long-exposure images of the first object
7.8 The long-exposure images of the second object
7.9 The phasor-images of the self cross spectra
7.10 The enhanced phasor-images of the self cross spectra
7.11 The reconstructed images
7.12 An example of image reconstruction using real astronomical speckle data
8.1 The relation between objects and their phasor-images
8.2 An example of the Theorem
8.3 Two original objects
8.4 The phasor-images of the two original objects in Fig. 8.3
8.5 Long-exposure images of object 1
8.6 Long-exposure images of object 2
8.7 The phasor-images of the self cross spectra
8.8 The estimated truncated objects
8.9 The phasor-images of the estimated truncated objects in Fig. 8.8
8.10 The phasor-images of the images reconstructed using the self cross correlation method
8.11 The phasor-images of the images reconstructed using the TC method
8.12 An example of self cross correlation reconstruction using real astronomical speckle data
9.1 Illustration of the overlapped area of two sub-images
9.2 Illustrations of sub-image regions
9.3 The original object and its sub-objects (notice the absence of sharp edges)
9.4 The phasor-images of the original object and of its sub-objects
9.5 The long-exposure image of the object of Fig. 9.3a and the truncated long-exposure images resulting from an anisoplanatic patch
9.6 The phasor-images of the self cross spectra
9.7 The phasor-images of the self cross correlations (h1 ⋆ p01) and (h2 ⋆ p02)
9.8 The initial estimates of the sub-objects from the cross spectrum
9.9 The scaled versions of the initial estimates of the sub-objects, showing the artifacts of overlap between isoplanatic sub-image regions
9.10 The improved estimate of the sub-object by the generalized projections method

Acknowledgement

I wish to express my thanks to my supervisor, Professor Rabab K. Ward, from whom I received the most valuable guidance, inspiration, encouragement and support.

We thank Dr. E. K. Hege, Steward Observatory, Tucson, Arizona, for providing the ADS 11344 and the β Del photon-list data acquired using the MAMA detector in collaboration with J.S.
Morgan, Center of Space and Astrophysics, Stanford University.

I am grateful and indebted to my wife and daughter for their understanding and support.

I would like to thank our system manager, Robert Ross, and our department secretary, Doris Metcalf, for all their help.

A Glossary of Symbols

Notation: Explanation
∈, ∉ : Membership relation "belongs to" and its negation "does not belong to"
∀ : For all
⊂, ⊃ : Strict containment relations
∪, ∩ : Union, intersection
∅ : Empty set
R, I : Sets of real numbers, integers
fg, Fg : Original two-dimensional image and its Fourier transform
w, W : Truncation window and its Fourier transform
g, G : Degraded image resulting from an anisoplanatic turbulence patch and its Fourier transform
fk, Fk : The kth sub-object and its Fourier transform
gk, Gk : Degraded image resulting from the kth sub-object and its Fourier transform
pk, Pk : The kth truncated image and its Fourier transform
hk, Hk : The kth point spread function, from the kth isoplanatic turbulence patch, and its Fourier transform
p0k, P0k : The kth noise-free truncated degraded image and its Fourier transform
g0, G0 : Noise-free degraded image and its Fourier transform
g0k, G0k : Noise-free degraded image from the kth sub-object and its Fourier transform
p0ki, P0ki : The kth noise-free truncated image in the region of g0i and its Fourier transform
θz, e^{jθz} : The phase, and its phasor, of the Fourier transform of any two-dimensional signal z
ze : The inverse Fourier transform of the phasor e^{jθz} (referred to as the phasor-image)
f̂k, f̂ : The estimates of the sub-objects and of the object
Kg, Kp : Photon numbers in a frame
n, np : Signal-independent additive white noise
n1, n2 : Noise, including signal-independent additive white noise and signal-dependent photon noise, in a frame
σ² : Variance of noise
d0, D : The atmospheric coherence diameter and the telescope diameter
λ, c, ν : The wavelength of the light source, the speed of light and the frequency of light
f0 : The atmospheric cut-off angular frequency, measured in cycles per radian of arc
fD : The diffraction-limited angular frequency, measured in cycles per radian of arc
l0, l, L0 : Atmospheric turbulence scales
n = n0 + ñ : The refractive index of the atmosphere
Dn, Φn : Atmospheric turbulence structure function and power spectrum
r = (x, y, z) : 2- or 3-dimensional space vector
κ = (κx, κy, κz) : 3-dimensional spatial frequency vector, also referred to as the wavenumber vector, in cycles per meter
ω = (ωx, ωy) : 2-dimensional spatial frequency vector in cycles per meter
φ : The equivalent phase screen of atmospheric turbulence
A1, A2 : Object constraints
P1, P2, Q1, Q2 : Projection operators
λ1, λ2 : Relaxation parameters
⋆ : Correlation operator
⊛ : Convolution operator

The Explanations of Some Keywords

ISOPLANATIC: All parts of the object are affected identically by the atmospheric turbulence, at least over a long time average, i.e., the point spread function is space-invariant.
ANISOPLANATIC: All parts of the object are not affected identically by the atmospheric turbulence, i.e., the point spread function is space-variant.
LONG-EXPOSURE IMAGE: The imaging integration time is much longer than the characteristic fluctuation time of the atmospherically induced wavefront deformations.
SHORT-EXPOSURE IMAGE: The imaging integration time is shorter than the characteristic fluctuation time of the atmospherically induced wavefront deformations.
ATMOSPHERIC COHERENCE DIAMETER: The equivalent diameter of the atmospheric turbulence aperture for long-exposure imaging.
ATMOSPHERIC TRANSFER FUNCTION: The normalized autocorrelation function of the equivalent phase screen of the atmosphere.
NOISE-FREE IMAGE: The image that is free from photon noise, detection noise and background noise.
EDGE: The discontinuities either in the brightness or in its gradient.
PHASE SCREEN: The phase value of a plane wave after the propagating plane wave passes through optical media.
OBJECT SUPPORT: The non-zero regions of the object.
PHASOR-IMAGE: The inverse Fourier transform of the Fourier phasor (of any function).

Chapter 1

INTRODUCTION

1.1 Introduction

Image restoration and reconstruction can be defined as the general problem of estimating a two-dimensional object from a
degraded version or versions of it. The image restoration problem usually addresses cases in which the observations are not severely blurred versions of the object and the signal-to-noise ratio (SNR) is much larger than 1. In image reconstruction, the observations result from the strong interaction between the unknown object and some scattered wave (e.g., X-rays) or randomly changing optical media (e.g., atmospheric turbulence). The object is reconstructed from a set of projections (or from derivatives, integrations, etc.).

The image restoration or reconstruction problem involves every known scale, from cosmic to atomic: from determining the structure of unresolved stars to determining the arrangement of the tiniest molecules. This problem has been extensively studied for its obvious practical importance as well as for its theoretical interest. Literature on the subject is abundant and highly varied, since the problem arises in many branches of engineering and applied physics. It is, for instance, frequently encountered in fields such as optics, X-ray imaging, diffraction tomography, astronomy, biomedical engineering, machine vision, non-destructive evaluation and geophysics.

In this thesis, we investigate the reconstruction of stellar objects from images taken by ground-based telescopes in the presence of noise. The object is either restored from a long-exposure image or reconstructed from a series of short-exposure observations. In this work we consider the latter.

Atmospheric turbulence had long prevented us from seeing highly resolved stellar objects. In conventional long-exposure imaging, the image is seriously blurred by the atmospheric turbulence due to the time-integration of the randomly fluctuating wavefronts.
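The time-integration effect can be made concrete with a minimal simulation (NumPy; the frame count, tilt range and image size are our illustrative choices, not figures from the thesis): to first order each short exposure is the source shifted by a random wavefront tilt, and averaging many such frames spreads a point source over many pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# A point source observed through turbulence: to first order each
# short exposure is the source shifted by a random wavefront tilt.
N, frames = 64, 400
long_exposure = np.zeros((N, N))
for _ in range(frames):
    dx, dy = rng.integers(-5, 6, size=2)   # random tilt -> random image shift
    frame = np.zeros((N, N))
    frame[N // 2 + dx, N // 2 + dy] = 1.0  # shifted, otherwise sharp, point image
    long_exposure += frame
long_exposure /= frames

# Each short exposure was a single sharp pixel; the time-integrated
# image is smeared over many pixels, i.e. resolution is lost.
spread = np.count_nonzero(long_exposure)
```

Every individual frame here is perfectly sharp; only the averaging destroys resolution, which is exactly why short-exposure (speckle) data retain information that the long exposure loses.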
The achievable resolution of a long-exposure image is only about that of a telescope of 10 cm diameter, which is the average atmospheric coherence diameter, denoted by d0.

When a series of short-exposure images with a narrow optical bandwidth is recorded, the integration time of each image is shorter than the characteristic fluctuation time of the atmospherically induced wavefront distortions for the given wavelength. The image degradations can thus be considered to be caused by fixed patterns of wavefront distortions ("frozen turbulence"). In such a case, the blurring that results from averaging the randomly changing wavefront distortions (which appears in long-exposure imaging) is avoided.

However, at low light levels the short-exposure integration time is so short that the number of photons detected in one picture is very limited (usually under 10³ photons). In a single short-exposure image frame, the signal-to-noise ratio (SNR) is much smaller than 1, and it is impossible to reconstruct the object from a single frame at such low light levels. Image reconstruction therefore requires multiple image frames, so that a statistics-limited measure of the object can be obtained with the maximum recovery of information about the object. This is the subject of this research.

The first-order approximation of the wavefront distortion (tilt) shifts the whole image but otherwise does not degrade it. The higher-order wavefront distortions, on the other hand, result in degraded observations, as an image will contain multiple speckles spread all over the observation. For this reason, short-exposure images are sometimes referred to as speckle images. The other influential noise sources are photon noise, background noise and detection noise. Besides the noise, the point spread function (PSF) of each short-exposure image is usually unknown a priori.
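The photon-limited regime can also be sketched numerically (NumPy; the flat toy intensity and the photon budget of 1000 counts per frame are our illustrative assumptions): drawing a frame from a Poisson process shows that per-pixel counts of order one photon leave the per-pixel SNR at or below unity.

```python
import numpy as np

rng = np.random.default_rng(1)

# An idealized noise-free short-exposure frame, normalized so the whole
# frame contains ~1000 detected photons on average (the low-light
# regime described above; the numbers are illustrative only).
ideal = np.ones((32, 32)) / (32 * 32)      # flat toy intensity, sums to 1
photons_per_frame = 1000
frame = rng.poisson(photons_per_frame * ideal)

# Per-pixel mean is ~1 photon; the Poisson standard deviation equals
# sqrt(mean), so the per-pixel SNR is about 1 or below -- a single
# frame cannot by itself constrain the object.
mean_counts = frame.mean()
```

Averaging statistics (power spectra, cross spectra) over many such frames is what makes the photon-starved measurements usable.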
Only the statistics of the PSFs are measurable.

1.2 Past Achievements

Reconstruction from multiple short-exposure images traditionally entails two steps: the recovery of the Fourier modulus and that of the Fourier phase. The speckle interferometry (SI) method[1] is used to recover the modulus. Phase recovery is a more difficult problem than modulus estimation. The most effective techniques available to recover the phase are the Knox-Thompson (KT)[2] method, the triple correlation (TC)[3] method (also known as the bispectrum method) and the Iterative Fourier Transform (IFT) method (or, more generally, the Generalized Projection method).

The implementation of the KT method is much faster and easier than that of the TC method; however, it requires centroiding the images before processing. The TC method overcomes this difficulty but is more sensitive to noise, because it involves a triple instead of a double correlation. The TC method also takes considerably longer computational time, since the bispectrum (the Fourier transform of the triple correlation) is 4-dimensional. Both of these methods give estimates of the phase differences between adjacent frequencies instead of the actual phases. In retrieving the phases from the phase differences, errors can accumulate.

The IFT method iteratively recovers the phase given the estimate of the autocorrelation of the object; it is less sensitive to noise and is fast to compute. It uses known constraints on the object both in the object domain and in the Fourier domain. Due to the absence of the constraint of tight object-support bounds, the IFT method has seldom been used in practice so far.

Apart from the above stated problems, all the presently available image reconstruction techniques are applicable only to the isoplanatic case. Image observations restricted to the angular size of an isoplanatic patch often suffer truncation effects, and even modest amounts of image truncation can cause serious errors in the reconstructed object[4]. Thus it is very important
Thus it is very important3Chapter 1 INTRODUCTIONto find a technique to reconstruct images beyond the isoplanatic patch limit (anisoplanaticpatch). The KT spectrum and the bispectrum do not apply to the anisoplanatic case. Sincethe KT spectrum and the bispectrum estimate the phase differences, they do not have obviousinterpretations of the object. The reconstructio' n of the object using either the TC or the KTtechnique requires long processing time although the KT is faster than the TC method.1.3 Self Cross Correlation MethodIn this thesis, we propose a new method for image reconstruction using the "self crosscorrelation". This self cross correlation is defined as the correlation between an image anda portion of it. This reconstruction method intends to overcome the difficulties of the aboveimage reconstruction techniques.Our self cross correlation method is easy to implement and fast to compute. Like the TCmethod, it is shift-invariant, thus does not need prior centroiding of all the observations. Itwill shown that it is applicable to both the isoplanatic and the anisoplanatic cases. It will beshown that from the self cross spectrum phase, the edges of the unknown object are obtainable.These edges form the tight object constraints needed by the IFT method. Alternatively, thephase may be obtained directly.We are going to design some different reconstruction algorithms based on the self crosscorrelation. In specific we shall:1. establish a theorem of the basic properties of the self cross spectrum phase. This theoremestablishes: 1) the relation between the object bounds and the self cross spectrum phaseand 2) the relation between the self cross spectrum phase and the Fourier phase.2. retrieve the phase by the IFT method using the object bounds obtained from the self crossspectrum (in the isoplanatic case).4Chapter 1 INTRODUCTION3. retrieve the phase by a non-iterative method from the self cross spectrum (in the isoplanaticcase).4. 
modify our phase retrieval algorithms so that they are applicable to the anisoplanatic case.

This dissertation consists of ten chapters. Chapter two reviews the relevant literature. Chapter three provides background information about optical waves propagating through the turbulent atmosphere and the turbulence transfer function described by the statistics of the refractive index. Chapter four gives the definitions of the self cross correlation and other related concepts. Chapter five elaborates on the mathematical derivation of the general self cross spectrum in the presence of noise in both the isoplanatic and the anisoplanatic cases; some important properties of the self cross spectrum are described. Chapter six presents an ad-hoc optimal Fourier modulus estimation method for the case when only signal-independent additive white background and detection noise are present. In chapter seven, we first give a heuristic analysis of the self cross spectrum phase and exploit the relations among the Fourier phase of the object, the object support bounds and the self cross spectrum phase; we then describe how the object support bounds are used in image reconstruction via the IFT (Generalized Projection) method in the isoplanatic case. In chapter eight, a theorem about the self cross spectrum, with its strict mathematical proof, is presented, and a non-iterative phase retrieval method based on the self cross spectrum is described. Chapter nine describes how the phase retrieval algorithms of the isoplanatic case are modified for the anisoplanatic case. Several examples of image reconstruction from both computer-simulated and real data are presented in chapters six to nine, and the results are discussed. In these image reconstruction examples, we place our emphasis on the recovery of the Fourier phase of the object. The recovered phases are not refined further, since traditional linear filters affect only the Fourier modulus and not the Fourier phase.
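The two quantities at the heart of the method, the self cross correlation and the phasor-image, can be sketched numerically (NumPy; the function names, the toy two-star image and the window are our illustrative choices, not the thesis's notation):

```python
import numpy as np

def self_cross_correlation(g, window):
    """Correlate an image g with a truncated part of itself.

    `window` is a binary mask selecting the truncated region; the
    (circular) correlation is computed via FFTs.  Returns the self
    cross correlation and the self cross spectrum.
    """
    G = np.fft.fft2(g)
    P = np.fft.fft2(g * window)           # truncated part of the image
    cross_spectrum = G * np.conj(P)       # the "self cross spectrum"
    return np.fft.ifft2(cross_spectrum).real, cross_spectrum

def phasor_image(spectrum, eps=1e-12):
    """Inverse FT of the unit-modulus phasor of a spectrum
    (guarding against zero-modulus frequency bins)."""
    phasor = spectrum / np.maximum(np.abs(spectrum), eps)
    return np.fft.ifft2(phasor)

# Toy image: two point sources; truncate to the half containing one.
g = np.zeros((32, 32))
g[8, 8] = 1.0
g[8, 20] = 2.0
window = np.zeros((32, 32))
window[:, :16] = 1.0                      # keep only the left half
scc, cs = self_cross_correlation(g, window)
peak = np.unravel_index(np.argmax(scc), scc.shape)  # brightest correlation lag
pz = phasor_image(cs)                     # phasor-image of the self cross spectrum
```

Because the truncated part here is a single star, the self cross correlation is just a shifted copy of the image: its peaks encode the relative star positions, which is the kind of structural information the later chapters extract.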
Finally, we draw conclusions about the self cross correlation reconstruction method and speculate on further work. All the material covered in chapters six to ten is novel.

Chapter 2

LITERATURE REVIEW

2.1 The Speckle Interferometry (SI)

Early efforts at removing the effects of atmospheric turbulence date back to 1970 [1][15]. Labeyrie[1] proposed and successfully implemented a method known as "speckle interferometry" (SI). This method recovers the Fourier modulus (or, equivalently, the autocorrelation function) of the object. Unfortunately, the information in the autocorrelation function is not complete, since the Fourier phase is lost. This technique remains the most effective method for determining the object's magnitude:

|Fg(ω)|² = ⟨|Gi(ω)|²⟩ / ⟨|Hi(ω)|²⟩,    (2.1)

where Fg, Gi and Hi denote the Fourier transforms of the object, the ith observation and the ith PSF, respectively, and ⟨·⟩ denotes an average over the frames.

Since Labeyrie's initial studies, much progress has been made in recovering the Fourier phase so that the unknown object can be reconstructed.

2.2 The Knox-Thompson Method

In 1974, Knox and Thompson (KT)[2] proposed a method which measures the phase differences between adjacent frequency components of the image. These components are highly correlated due to the atmospheric coherence. From a set of phase differences measured across the entire image spectrum, it is possible to determine a corresponding set of phases. The implementation of the KT algorithm is easy and relatively fast. The KT correlation, being
If g(r) is shifted to G(f'+ 6), the corresponding KT spectrum becomesC KT(W, A6.1 , 7'1) GP + Ac7.5)e—iP+Acj)T1 G* (ci)ei awl^= GP + Ac7i)G* (IS)e—jA`Z^ (2.3)= CKT(, Ac-j)e—i6".""?1.Since the shift^= (xi, Yi) (corresponding to wavefront tilt relative to the optical axis ofthe imaging system) changes randomly from frame to frame, the complex numbermay vary wildly. When the KT spectrum (2.2) are averaged from multiple image frames, theKT spectrum is blurred in a similar way as the direct averaging (long exposure) of imageframes. As a consequence, the SNR of the KT spectrum greatly decreases, especially for highfrequencies. To remove the wavefront tilt, it is necessary to centroid the images before theyare Fourier-transformed. For a bright object, centroiding is quite accurate. For a faint object,however, centroiding is quite uncertain as the detected photons per frame are very limited .Improper centroiding will reduce the SNR of the KT spectrum. This limits the application ofthe KT method to stellar image reconstruction as most stellar objects are very faint.2.3 Triple Correlation MethodTo overcome the centroiding difficulty faced by the KT method, another phase retrievaltechnique, known as the Triple Correlation (TC), was developed. It was first suggestedby Lomann et a/.[3]. This technique involves averaging the triple correlation function (or8Chapter 2 LITERATURE REVIEWequivalently, the bispectrum), instead of the double correlation as in the KT method. Thebispectrum (the Fourier transform of the triple correlation) is defined asCTc(Z), Co% = CV.) Cji )G(Co)Gpi)^ (2.4)which is the 4-dimensional spectrum. Notice that the bispectrum (2.4) differs from theKT spectrum (2.2) by the term G(1). As mentioned above, the advantage of the bispectrumCTc(a ,Cii) over the KT spectrum is that the former is invariant to any shift in signal g(71). Thisshift-invariant property is proven as follows. For a shifted signal g(i? 
F.0, the bispectrumCTC (C.; 17‘1) becomes^CTG, (47), Cji^= G* +^G(L7.1)e-julfi G(1)e-iChn(2.5)= G*( +4-0.1)G(c,)G(Lzi),which is exactly the same as the bispectrum of the signal g(). It is found that in the highestSNR region where ICjil, lAull -< A , the SNR's of both the bispectrum and KT spectrum areof equivalent values161. In this region, at low light level, the SNR's are linearly dependenton the number of photons per speckle and at high light level they are approximately equalto unity. In the other region wherePi I, I&1> 10-, the values of SNR of the KT spectrumfor a single frame is insignificant. So is the SNR of the bispectrum. However, the valuesEiof the SNR of bispectrum for a single frame show an 71- dependency at low light leveland a ix dependency at a high level, where IC represents the average photons per framey.,and fi, ( = 2(X) 2 ) represents the average number of speckles per frame. This suggests theeffective equivalence of the two techniques, as for typical value of fis, of order 1000, anextremely large number of frames must be processed for the additional information present inthe bispectrum to be any use. However, the bispectrum, being a triple product is more sensitiveto the noise since it contains noise terms to the third power. The variance of the observedsignal is thus greater than that of the KT method (which involves double product). This is a9Chapter 2 LITERATURE REVIEWbasic property of high-order correlations and suggests the use of lower-order correlations, ifthe lower-order correlations permits the recovery of the same information.2.4 Phase Retrievals from the Phase DifferenceBoth the KT and the TC method estimate the phase difference instead of the actualphase of the object. Retrieving the phase from the phase difference is thus required. Thesimplest algorithm from the phase difference is based on a recursive techniquern. 
The phasedifference for the KT spectrum at frequency 0 isAOG 6,0) = OG(0 + .A0) — OG(0).This relationship is the basis of a recursive algorithm for finding the actual object's phaseOF(0) from the phase difference AOG(0, 6). Arbitrarily setting the phase OF(0) at the firstfrequency to zero, the phase at the next frequency and therefore all the remaining phasesare computed. From the phase differences, multiple estimates of the object phase at anyfrequency may result depending on the frequency path chosen for the recursion. Theseredundant estimates are combined in a weighted average where the weighting function isdetermined from the SNR's of the individual phase estimates. This phase recovery techniqueleaves much scope for improvement. Because of its recursive nature, it is fairly sensitive tonoise due to the noise propagation.The problem of noise propagation has been studied for many years. One solution is the[least-squares estimation1811911io]i 13^The least-squares phase estimation was derived fromHudgin's iterative relaxation algorithm[121, which constrains a phase function to fit an arrayof measured phase differences. The convergence, existence and uniqueness of the solutionwere proven1131 for the case when noise is absent. Since the algorithm operates directly onphases, the modulo 2 ir phases must be unwrapped before the algorithm is implemented1141.The resulting signal parts of the unwrapped bispectrum phases are linked to the Fourier phases1 0Chapter 2 LITERATURE REVIEWby a linear system of equations. Phase unwrapping is a complicated process involving a largevolume of computation1151[161[91. Phase unwrapping is not reliable in the presence of severenoise1918]. However, phase unwrapping can be avoided by replacing phases by phasors. 
Asa result, the least-squares algorithm becomes a non-linear recursive onelicg.The least-squares estimation basically involves a local averaging process plus the aver-aging of the connecting phase differences (with appropriate signs in the phase differences).The correlation between two frequency components decreases fast as their separation in thefrequency domain increases due to the limited atmospheric coherence length (do = 10-15cm).Thus only averaging of the nearest neighbors of the phase makes sense. The noise propagationproblem still exists since only the nearest neighbors of the phase differences are averaged. Theexistence of the noise propagation may prevent the least-squares iterations from convergenceto the best solution (in the presence of considerable noise). The least squares estimation mayfail in the bispectrum phase or in the KT spectrum phase.In the presence of severe atmospheric turbulence, the power spectrum, the KT spectrumand the bispectrum have finite expected values up to the diffraction limit of the telescope,they are severely attenuated by their respective atmospheric transfer functions at frequenciesabove the atmospheric cut-off frequency fo = t. In high-angular-resolution measurements,the telescope diameter D is much larger than do. The magnitudes of spectra are relativelylarge at frequencies below fo, and are attenuated very fast by a factor of (4)2 over a verymuch broader, low-magnitude region from fo to the diffraction limit frequency fp = IP17].Thus, the SNR above fo is much lower than that below fo. This makes the recovery ofthe information in the high frequencies difficult. It is thus necessary to find a reconstructionmethod which is less sensitive to noise.11Chapter 2 LITERATURE REVIEW2.5 Iterative Fourier Transform MethodA different solution to the phase retrieval problem is the iterative transform algorithm(IFT) which was put forward by Fienup1181. 
The IFT algorithm iteratively retrieves the object's phase given the object's magnitude (or the autocorrelation function), which is obtained by the SI technique. If sufficient object constraints are given, the IFT method yields good results for most general types of objects. The advantage of the IFT algorithm is that it can easily combine all the available constraints on the object. The algorithm is not highly sensitive to noise. The simplest version of the IFT algorithm follows the philosophy of the Gerchberg-Saxton algorithm[19][20], and it is known as the error-reduction (ER) iterative method[21][22]. In the ER algorithm, one iteratively transforms back and forth between the Fourier and the object domains, imposing the constraints in each domain before returning to the other domain. Thus a solution that satisfies the constraints in both domains is sought. It has been proven that the ER algorithm "converges in a weak sense such that the squared error cannot increase with an increasing number of iterations"[18]. The algorithm can be viewed in three different ways: 1) in terms of the method of successive approximations[23]; 2) as a form of steepest-descent gradient search[22]; and 3) as generalized projections[24] onto sets in a Hilbert space. Because the Fourier modulus constraint is a projection onto a non-convex set, strong convergence of the algorithm is not ensured. In the ER approach the mean-squared error decreases rapidly for the first few iterations; however, it decreases extremely slowly for later iterations. The ER approach requires an impracticably large number of iterations for convergence. Thus several modifications of the ER algorithm have been proposed, and most of them converge faster than the ER algorithm. A solution to the slow convergence of the ER algorithm is the hybrid input-output (HIO) algorithm. This has been proven to converge faster than the ER algorithm for the phase retrieval problem.
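The ER iteration described above can be sketched in a few lines of NumPy. This is only an illustrative sketch (not the thesis's implementation): the toy object, the support region, the non-negativity constraint, and the iteration count are all arbitrary choices here.

```python
import numpy as np

def error_reduction(magnitude, support, n_iter=200, seed=0):
    """Error-reduction (ER) phase retrieval sketch: alternate between the
    Fourier-modulus projection and the object-domain (support plus
    non-negativity) projection, as in the ER algorithm described above."""
    rng = np.random.default_rng(seed)
    # start from the measured modulus with a random phase guess
    G = magnitude * np.exp(2j * np.pi * rng.random(magnitude.shape))
    for _ in range(n_iter):
        g = np.fft.ifft2(G).real
        g[~support] = 0.0                      # object-domain: zero outside support
        g[g < 0] = 0.0                         # object-domain: non-negativity
        G = magnitude * np.exp(1j * np.angle(np.fft.fft2(g)))  # Fourier-domain projection
    return g

# toy example: recover a small non-negative object from its Fourier modulus
obj = np.zeros((32, 32)); obj[12:16, 10:20] = 1.0; obj[14, 22] = 2.0
support = np.zeros((32, 32), bool); support[10:18, 8:24] = True
mag = np.abs(np.fft.fft2(obj))
rec = error_reduction(mag, support)
# relative modulus error of the reconstruction
err = np.abs(np.abs(np.fft.fft2(rec)) - mag).mean() / mag.mean()
```

The HIO modification mentioned above differs only in how the object-domain step treats points that violate the constraints, which is why it escapes the slow late-iteration behavior of ER.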
The HIO and the ER algorithms differ only in the step involving the projection onto the object-support set. Details are described in [22] and [18].

It has been found that the IFT solution of the algorithm is usually unique even in the presence of noise. However, in some cases, the IFT algorithm will stagnate before reaching a solution. The algorithm can be considered to have stagnated if the error has failed to decrease after a few additional cycles. The stagnation points are found to correspond to local minima in the Fourier-domain error metric and result in images which are close to being ambiguous[13]. The probability of ambiguities increases as the noise level increases. It has been shown that ambiguities in the phase-retrieval problem are present if and only if the 2-D Fourier transform of the object is factorable. However, if a given nonfactorable Fourier polynomial of an object is near enough (in an integrated mean-squared difference sense) to a factorable polynomial, then the ambiguous solutions (associated with the factorable polynomial) will be consistent (to within the noise) with the Fourier modulus data. Under this circumstance the object is considered to be ambiguous in a practical sense, even though it may be unique.

The problem of ambiguous solutions in the presence of noise and in the absence of tight object bounds is the main obstacle of the IFT method. Although the ambiguous solutions can be most effectively avoided by finding tight object bounds, unfortunately, in most cases, the determination of the object support domain by using the Fourier modulus only is very difficult.
It is only for a few cases that the tight object domain can be derived from the domain of the autocorrelation function[25][23][26].

Recently, Schulz and Snyder[27] proposed a new iterative image-recovery (from correlation) method in which the constraint of object support bounds (which is used in the IFT method) is replaced by constraints from the measured high-order correlations of images. The results of their computer simulations demonstrated the effectiveness of the method. Yet, the convergence of the iterations needs to be proved.

2.6 Weighted Shift-and-Add (WSA) Technique

The WSA[28][29][30] technique is in fact a cross correlation method for image recovery. In the WSA method, a rough estimate of the PSF, h̃(x,y), is extracted from each speckle image g(x,y) by detecting the speckle pattern. The cross correlation ⟨g(x,y)⋆h̃(x,y)⟩ and the autocorrelation ⟨h(x,y)⋆h̃(x,y)⟩ are then formed to estimate the object. The WSA method works for fairly bright objects. Unfortunately, for faint objects, it is very difficult to identify the speckle pattern of a PSF in the observed image due to the noise.

Chapter 3
ATMOSPHERIC TURBULENCE

In this chapter, we will briefly review some characteristics of atmospheric turbulence. The autocorrelation function (equivalently, the power spectral density in the Fourier domain) of the refractive index of the atmospheric turbulence is described. The relation of this power spectral density with the atmosphere transfer function is described. Our atmospheric turbulence simulations in later chapters will be based on the power spectral density of the refractive index. For the purpose of imaging by ground-based telescopes, the basic assumptions about the atmospheric turbulence are given.

3.1 Optical Waves Through Atmospheric Turbulence

As a consequence of the non-uniform heating of the Earth's surface by the sun, atmospheric turbulence is formed.
According to Kolmogorov's theory[31], atmospheric turbulent flows produce large eddies with scale denoted by L0 ≈ h/2, called the turbulence outer scale, where h is the height above ground. Typical numbers quoted for L0 vary between 1 and 100 m, depending on atmospheric conditions and the geometry of the experiment. These eddies break down into smaller and smaller eddies, transferring the turbulent energy to smaller and smaller scales. This energy cascade ends when the energy is finally dissipated into heat by molecular friction. Kolmogorov's cascade theory also applies to turbulent mixing. Large-scale temperature inhomogeneities break down into smaller and smaller scale inhomogeneities, until thermal molecular dissipation dominates. The scale of the smallest eddies, denoted by l0, is called the turbulence inner scale. The typical value for l0 near the ground is on the order of a few millimeters. The scales between the outer and the inner scales are called inertial scales.

The temperature inhomogeneities induce inhomogeneities in the refractive index of the air, denoted by n. The optical waves reaching the lens of the telescope, from an object through the turbulent atmosphere, thus have random fluctuations in their wavefronts and amplitudes. The random wavefront fluctuation (i.e., random phase fluctuation) is due to the randomly varying refractive index in space. The random amplitude fluctuation of the optical wave is the result of nonhomogeneous absorption of the air. This results in optical systems, even aberration-free ones, having actual resolutions that are far poorer than the theoretical diffraction limit.

It is common to refer to the refractive index inhomogeneities as turbulent "eddies," which are envisioned as packets of air, each with a characteristic refractive index. As mentioned above, the turbulent eddies in the air have a range of scale sizes, varying from a few millimeters to tens of meters or more.
The wavelength dependency of these random fluctuations can generally be ignored[17]. The most straightforward method of describing the effects of atmospheric turbulence is by the autocorrelation function of the phase φ in the plane of the lens. The φ is the equivalent phase screen of the atmospheric turbulence. The wavefront fluctuation is equivalent to the refractive index fluctuation. Conveniently, the effects of atmospheric turbulence are described in terms of the structure function (autocorrelation) of the refractive index, i.e.

Γ_ñ(r̄1, r̄2) = E[ñ(r̄1) ñ(r̄2)],        (3.1)

where ñ represents the randomly fluctuating part of the refractive index n (= n0 + ñ) about a mean value n0. The Γ_ñ is also referred to as the atmosphere transfer function, similar to the familiar optical transfer function.

It is assumed that ñ is spatially stationary in the three-dimensional space, i.e., it is statistically homogeneous in space and is ergodic; its autocorrelation function then takes the simpler form

Γ_ñ(r̄) = E[ñ(r̄1) ñ(r̄1 + r̄)] = ⟨ñ(r̄1) ñ(r̄1 + r̄)⟩
       = lim_{R→∞} (1/R) ∫_R ñ(r̄1) ñ(r̄1 + r̄) dr̄1.        (3.2)

The power spectral density of ñ is the three-dimensional Fourier transform of Γ_ñ(r̄):

Φ_ñ(κ̄) = (1/(2π)³) ∫ Γ_ñ(r̄) e^{-jκ̄·r̄} dr̄ = lim_{R→∞} (1/R) E[|Ñ(κ̄)|²],        (3.3)

where κ̄ = (κ_x, κ_y, κ_z) is called the wavenumber vector and Ñ(κ̄) denotes the Fourier transform of ñ. Atmospheric turbulence can be divided into many layers (phase screens) which are perpendicular to the direction of the propagating optical wave[32]. These layers can further be represented by an equivalent single layer[32], such that the 3-dimensional wavenumber κ̄ reduces to a 2-dimensional vector. This simplifies the computer simulation process of atmospheric turbulence. On the basis of Kolmogorov's classic work on the theory of turbulence, the power spectral density Φ_ñ(κ̄) contains three separate regions[31].
For very small κ (corresponding to very large turbulence scale sizes L = 2π/κ), the mathematical form of Φ_ñ(κ̄) is not predicted by the theory, for it depends on large-scale geographic and meteorological conditions. Furthermore, it is unlikely that the turbulence would be either isotropic or homogeneous at such large atmospheric turbulence scale sizes. However, few optical experiments are significantly affected by eddies with scale sizes larger than the outer scale L0 = 2π/κ0, where κ0 is the critical wavenumber. It is not necessary to consider this range for our purpose. For κ larger than the critical wavenumber κ0, the shape of Φ_ñ(κ̄) is determined by the physical laws that govern the breakup of large turbulent eddies into smaller ones. For κ > κ0, the spectrum is called the inertial subrange. On the basis of Kolmogorov's work cited earlier, the form of Φ_ñ(κ̄) is given by

Φ_ñ(κ̄) = 0.033 C_n² κ^(-11/3),        (3.4)

where C_n² is called the structure constant of the index fluctuations and serves as a measure of the strength of the fluctuations. When κ reaches another critical value κ_m, the form of Φ_ñ(κ̄) again changes. Turbulent eddies which are smaller than a certain scale size dissipate their energy as a result of viscous forces, resulting in a rapid drop in Φ_ñ(κ̄) for κ > κ_m (= 5.92/l0). The scale size l0 is referred to as the inner scale of the turbulence. A typical value for l0 near the ground is around a few millimeters.
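Since the later turbulence simulations are based on this power spectral density, it is worth noting how simple the inertial-subrange law (3.4) is to evaluate numerically. The sketch below also optionally applies the exponential inner-scale cutoff of the model (3.5) introduced next; the C_n² value is only an illustrative magnitude, not a measured one.

```python
import numpy as np

def kolmogorov_spectrum(kappa, Cn2=1e-14, kappa_m=None):
    """Refractive-index power spectral density in the inertial subrange,
    eq. (3.4): 0.033 * Cn^2 * kappa**(-11/3).  If kappa_m is given, the
    exponential inner-scale cutoff of eq. (3.5) is applied as well.
    Cn2 here is an arbitrary illustrative value."""
    kappa = np.asarray(kappa, dtype=float)
    phi = 0.033 * Cn2 * kappa ** (-11.0 / 3.0)
    if kappa_m is not None:
        phi = phi * np.exp(-(kappa / kappa_m) ** 2)
    return phi

# the -11/3 slope: doubling kappa reduces the density by a factor 2**(11/3)
p = kolmogorov_spectrum(np.array([10.0, 20.0]))
ratio = p[0] / p[1]
```

The steep κ^(-11/3) falloff is what makes the large inertial-range eddies dominate the phase fluctuations seen by the telescope.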
Tatarski[33] includes the rapid decay of Φ_ñ(κ̄) for κ > κ_m by using the model

Φ_ñ(κ̄) = 0.033 C_n² κ^(-11/3) e^(-κ²/κ_m²).        (3.5)

We will use (3.4) and (3.5) in conjunction with (3.6) below to simulate atmospheric turbulence in the later chapters.

After Tatarski's review work on the turbulence, Hufnagel and Stanley[34] showed the relation between the wave structure function Γ_ñ (3.1) and the total average optical transfer function (OTF):

E[H(ω)] = H0(ω) Γ_ñ(λ f̄ ω),        (3.6)

where H0(ω) is the OTF of an ideal circularly symmetric lens in the absence of turbulence, ω = |ω̄|, and f̄ in (3.6) is the focal length.

3.2 Assumptions

Random fluctuations of the refractive index result in random wavefront fluctuations which are sometimes called atmospheric noise or speckle noise. The term "speckle" was first used to describe the granular appearance that images of objects with rough surfaces have when these objects are illuminated by laser-produced coherent light. When a self-illuminated object emits incoherent light, the image formed shows similar speckle phenomena, if the incoherent light (passing through a turbulent medium) satisfies the quasimonochromatic condition. The quasimonochromatic condition states that τ_c, the temporal coherence time of the light emitted by a point incoherent source, is much greater than the maximum pathlength difference (divided by the speed of light) encountered in passage from the source to the interference region of interest. Stated mathematically, we require that for all source points and all points in the observation region of interest, the bandwidth Δν satisfies Δν ≪ ν̄ and [(r2 + r2′) - (r1 + r1′)]/c ≪ τ_c, where ν̄ is the average frequency of the transmitted light and c is the speed of light. The differential distances involved are shown in Fig. 3.1.
Such light is said to satisfy the quasimonochromatic condition.

Figure 3.1 Young's experiment

The point spread function (PSF) appears as a multitude of tiny spots (i.e., speckles) which spread over the image and have varying intensities as a result of the wavefront fluctuation.

Six important assumptions are maintained throughout this research.

1. It is assumed that the objects of interest radiate incoherently, as is the case in the vast majority of problems of practical interest. We are not going to consider the case when the object radiates coherently. The problem of reconstruction of an object radiating coherently is much simpler than that of an incoherent object.

2. The second important assumption concerns d0, the coherence diameter of the turbulence. We shall always assume that the coherence diameter of the turbulence is much larger than the optical wavelength λ of the radiation being considered. This assumption eliminates consideration of problems involving imaging through clouds or aerosols, for which the coherence length of the turbulence is comparable with or smaller than an optical wavelength λ and for which the refractive index changes are sharp and abrupt. This latter class of problems may be referred to as "imaging through turbid media," whereas in our work we are concerned with "imaging through turbulent media," for which the refractive index changes are smoother.

3. The isoplanatic case: The object of interest is at a very large distance from the lens, and its angular extent is so small that all parts of the object are affected identically by the atmosphere, at least over a long time average. As a result, the PSF is space-invariant. The isoplanatic angular extent θ0 is defined as θ0 = 2 tan⁻¹[l0(λ,h,t)/2h] ≈ l0(λ,h,t)/h arcsecs, where l0(λ,h,t) denotes the average turbulence inner-scale size for the mean height h (above ground) of the atmospheric turbulence.
The time t dependency of l0(λ,h,t) indicates that the isoplanatic angle θ0 is not a constant. However, we assume that the atmospheric turbulence is stable at least over a long time period, such that θ0 remains constant over the time period of the observations. θ0 is about 3 arcseconds for the visible wavelengths and 10 arcseconds for the infrared wavelengths.

4. The anisoplanatic case: The object of interest is at a very large distance from the lens and its angular extent is small, but larger than the above-defined isoplanatic angle θ0. All parts of the object are not affected identically by the atmosphere. In this case, the PSF is space-variant. However, it is reasonable to assume that the PSF's are stationary in space. This assumption is consistent with the stationarity of the refractive index.

5. The imaging system lies deep within the near field of the most important turbulent eddies, with the result that, to a good approximation, each ray incident on the inhomogeneous medium is considered to be simply delayed by that medium, with no significant bending of the rays. We limit our attention to the spectral regions of visible and infrared optical light, where atmospheric absorption is negligible[35]. This assumption is primarily a statement that the turbulence is so weak that no significant amplitude scintillation effects are present. Thus the amplitude fluctuations of the optical wave are ignored. The travelling optical waves can then be represented simply by their wavefronts or equivalent phase screens.

6. Computer-simulated original objects are assumed to have resolutions no finer than the diffraction-limited ones of the telescopes.
The obtained resolutions of the reconstructed images are then conveniently compared with the diffraction-limited ones.

Chapter 4
DEFINITIONS AND NOTATIONS

In this chapter we write the notations and specify the definitions of the different terms and concepts used in the thesis.

Definition 4.1 A truncated image is a sub-image of an image observation. Let g(x,y) denote the image observation defined over the rectangular region R. The expression of a sub-image is:

p(x,y) = { 0,       (x,y) ∈ R - T
         { g(x,y),  (x,y) ∈ T,        (4.1)

where T is the subset denoting the truncated region (or truncation window) in R. Denoting the truncation window w(x,y) by

w(x,y) = { 1, (x,y) ∈ T
         { 0, (x,y) ∈ R - T,

the truncated image (4.1) can be rewritten as

p(x,y) = w(x,y)·g(x,y).        (4.2)

Unless otherwise stated, the region R is assumed to be rectangular, i.e.

R = {(x,y), ∀x ∈ [x1, x2], ∀y ∈ [y1, y2]},

where x2 > x1 and y2 > y1. The region T of the truncation window w(x,y) is assumed to be circular, i.e.

T = {(x,y), √((x - x0)² + (y - y0)²) ≤ R0},

where (x0, y0) is the center of the circular window and R0 is the radius.

Definition 4.2 A self cross correlation function is the circular (periodic) correlation between an image observation g(x,y) and its truncated sub-image p(x,y). The circular correlation, denoted by r_gp(x,y), is defined over the region R:

r_gp(x,y) = g(x,y) ⋆ p(x,y) = ∫∫_R g(x′,y′) p(x + x′, y + y′) dx′ dy′
          = ∫∫_R g(x′,y′) w(x + x′, y + y′) g(x + x′, y + y′) dx′ dy′,        (4.3)

where x1 ≤ x ≤ x2 and y1 ≤ y ≤ y2, and the self cross correlation is circular over the region R.

Definition 4.3 The KT self correlation is the circular correlation of the self cross correlation defined in (4.3):

r_KT(x, y, Δω̄)
= ∫∫_R r_gp(x′,y′) r_gp(x + x′, y + y′) e^{-jΔω̄·x̄′} dx′ dy′.        (4.4)

Again, x1 ≤ x ≤ x2 and y1 ≤ y ≤ y2, and the KT correlation is circular over the region R.

Definition 4.4 An image observation g(x,y) is

g(x,y) = g0(x,y) + n1(x,y),        (4.5)

where

g0(x,y) = ∫∫ h(x, y, x′, y′) f_g(x′, y′) dx′ dy′        (4.6)

represents the image free from the photon noise and other additive noise. f_g(x,y) is the unknown object, and h(x, y, x′, y′) denotes the general space-variant point spread function (PSF). n1(x,y) represents all the additive noise introduced in the imaging process. The Fourier transform of the PSF is the autocorrelation of an equivalent pupil function of the optical imaging system. For the isoplanatic case, the PSF is space-invariant, that is,

h(x, y, x′, y′) = h(x - x′, y - y′),

and (4.6) can be simplified to the convolution (denoted by ⊗) form:

g0(x,y) = h(x,y) ⊗ f_g(x,y).        (4.7)

Definition 4.5 A truncated object, denoted by f_p(x,y), is the sub-object of f_g(x,y). f_p(x,y) contains all the points (of f_g(x,y)) which contribute to the truncated image p(x,y). Note that U, the support region of f_p(x,y), is usually larger than T, the support region of p(x,y). However, if the non-zero intensities of f_p(x,y) all lie in its center region, i.e., the outer regions of f_p(x,y) are all zeros, then U may be equal to or smaller than T. Let f_p(x,y) be such that

f_p(x,y) = { 0,         (x,y) ∈ R - U
           { f_g(x,y),  (x,y) ∈ U;

then the truncated image p(x,y) is

p(x,y) = ∫∫ h(x, y, x′, y′) f_p(x′, y′) dx′ dy′ + n2(x,y),  ∀(x,y) ∈ T
       = p0(x,y) + n2(x,y),        (4.8)

where p0(x,y) represents the part of the truncated image which is free from the photon noise and other additive noise. It can be seen from (4.8), (4.5) and (4.2) that n2(x,y) is the truncated noise,

n2(x,y) = w(x,y)·n1(x,y),

and

p0(x,y) = w(x,y)·g0(x,y).

Definition 4.6 Assume that an anisoplanatic patch is partitioned into N0 approximately isoplanatic patches.
Thus, for N0 isoplanatic patches, we partition the object f_g(x,y) into N0 non-overlapping regions, such that the kth sub-object is

f_k(x,y) = { 0,         (x,y) ∈ R - S_k
           { f_g(x,y),  (x,y) ∈ S_k,        (4.9)

where the union of the subsets ∪_{k=1}^{N0} S_k = R and their intersection ∩_{k=1}^{N0} S_k = ∅, i.e., the empty set (see Fig. 5.2). Then a noise-free, approximately isoplanatic image, denoted by g0k(x,y), is the sub-image that is formed from the kth sub-object f_k(x,y). The g0k(x,y) can be expressed in the convolution form:

g0k(x,y) = { h_k(x,y) ⊗ f_k(x,y),  (x,y) ∈ R_k
           { 0,                    (x,y) ∈ R - R_k,        (4.10)

where h_k(x,y) represents the kth PSF corresponding to the kth isoplanatic patch. Thus, for an anisoplanatic patch with N0 isoplanatic patches, the noise-free image g0(x,y) is the sum of N0 noise-free sub-images, i.e.

g0(x,y) = Σ_{k=1}^{N0} g0k(x,y).        (4.11)

Note that the non-zero sub-regions of g0k(x,y), k = 1, ⋯, N0, could be overlapping due to the wide-spread PSF (see Fig. 5.3).

Now, suppose that M0 noise-free sub-images, denoted by p0m(x,y), m = 1, ⋯, M0 (see Fig. 4.1), are truncated from different regions of the noise-free observation g0(x,y).

Figure 4.1 Illustration of M0 truncated images from g0(x,y)

Then the mth truncated sub-image p0m(x,y) can be expressed as

p0m(x,y) = w_m(x,y) g0(x,y)
         = Σ_{n=1}^{N0} p0mn(x,y),  m = 1, ⋯, M0,        (4.12)

where the mth truncation window is

w_m(x,y) = { 1, (x,y) ∈ T_m
           { 0, (x,y) ∈ R - T_m,        (4.13)

and p0mn is the noise-free sub-image truncated from the region of the sub-image g0n(x,y). The various sub-regions of the sub-objects f_k(x,y), the sub-images g0k(x,y) and the truncated sub-images p0m will be further illustrated in Figs. 5.2 and 5.3.

Definition 4.7 Let the Fourier transform of any function z(x,y) be the complex function Z(ω_x, ω_y) = |Z(ω_x, ω_y)| e^{jφ_z}, and let the inverse Fourier transform of e^{jφ_z} (the phasor of Z(ω_x, ω_y) with unity magnitude) be z_φ(x,y).
We shall call z_φ(x,y) the phasor-image of z(x,y).

Definition 4.8 An edge in an image is a discontinuity either in the brightness or in its gradient. A discontinuity in the gradient is also referred to as a "ridge".

Unless otherwise stated, a lowercase mathematical function will always represent an image whose Fourier transform is denoted by the corresponding uppercase letter.

The Fourier transform of the self cross correlation (4.3) is

R_gp(ω_x, ω_y) = G(ω_x, ω_y) P*(ω_x, ω_y).        (4.14)

The ensemble average of (4.14) is the self cross spectrum

⟨R_gp(ω_x, ω_y)⟩ = ⟨G(ω_x, ω_y) P*(ω_x, ω_y)⟩,        (4.15)

where ⟨·⟩ represents the ensemble average. For notational simplicity, we will sometimes drop the subscripts and the arguments (ω_x, ω_y) and (x,y).

Chapter 5
THE MATHEMATICAL MODEL OF A NOISY DEGRADED IMAGE

5.1 Phase Screen and Correlations

In one aspect, the turbulent atmosphere causes the optical wavefronts to randomly fluctuate in space and time. The effects of the atmospheric turbulence on the wavefronts propagating through the turbulence layers can be represented by a series of equivalent phase screens[32]. Fig. 5.1 shows an example of phase screens. The resultant phase screen in Fig. 5.1 is the sum of the three phase screens.

Figure 5.1 An example of phase screens

In another aspect, we always assume that the atmospheric turbulence is weak, such that the wavefront is basically a plane with small fluctuations. This indicates that the atmosphere is in good "seeing" condition, with coherence diameter d0 whose value ranges from 10 to 20 cm. Within an isoplanatic patch, whose solid angle spanned by a telescope is about 3 arcsecs, the PSF's are basically space-invariant.

In the speckle interferometry method[1], the autocorrelation functions of the observations and of the PSF's are used to estimate the power spectrum (or the Fourier modulus) of the object.
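As a toy numerical illustration of this SI estimate: the frame-averaged power spectrum ⟨|G_i|²⟩, divided by the corresponding average ⟨|H_i|²⟩ measured on an unresolved reference star, approximates |F|². The random-points PSF below is only a crude stand-in for a short-exposure speckle PSF, not a physical turbulence model, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32
obj = np.zeros((N, N)); obj[14, 12] = 1.0; obj[14, 20] = 0.5   # toy binary star
F = np.fft.fft2(obj)

def random_psf():
    # crude stand-in for a short-exposure speckle PSF: a few randomly
    # placed bright points, normalized to unit total flux
    h = np.zeros((N, N))
    idx = rng.integers(0, N, size=(8, 2))
    h[idx[:, 0], idx[:, 1]] = rng.random(8) + 0.1
    return h / h.sum()

# SI: average |G_i|^2 over object frames; average |H_i|^2 over
# (independent) reference-star frames; their ratio estimates |F|^2
frames = 400
Sg = np.zeros((N, N)); Sh = np.zeros((N, N))
for _ in range(frames):
    Sg += np.abs(np.fft.fft2(random_psf()) * F) ** 2   # object frames
    Sh += np.abs(np.fft.fft2(random_psf())) ** 2       # reference frames
est = Sg / Sh
rel_err = np.abs(est - np.abs(F) ** 2).mean() / (np.abs(F) ** 2).mean()
```

Because only |F|² is recovered, the estimate carries no Fourier phase, which is exactly the limitation the following paragraphs discuss.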
The power spectrum of the object is insensitive to the relative positions of the sub-objects within the object region and thus gives little information about the object. The KT and TC methods use the KT spectrum[2] and the bispectrum[3], respectively, to measure the Fourier phase differences of the object. Though the phase difference can lead to the solution of the phase of the object, the phase difference gives no direct information about the object. When considerable noise is present in the speckle data, the true phase of the object may not be recovered by the iterative process described in Chapter 2.

We note that when the atmospheric turbulence is strong, the wavefront can no longer be assumed to be a plane wave. In such cases, image reconstruction is not possible due to lack of atmospheric coherence.

We have defined the self cross correlation as the correlation between an image and a truncated sub-image (Defs. 4.1 and 4.2). When the truncated sub-image approaches the whole image, the self cross correlation becomes the autocorrelation

g ⋆ p = g ⋆ g,

in which the phase information is totally lost. When the truncated sub-image approaches an impulse, the average of the self cross correlation becomes the long-exposure image

⟨g ⋆ p⟩ = ⟨g⟩,

which only retains the turbulence-limited low-frequency information of the object. When the truncated sub-image is the image of an unresolved point star (i.e., the PSF), the phase of the self cross spectrum is simply the phase of the object. This can be seen from

G P* = H F_g H* = |H|² F_g,

where noise is ignored.
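This identity is easy to verify numerically. In the sketch below the object and PSF are arbitrary toy arrays (not from the thesis's simulations), and circular convolution is assumed, so g = h ⊗ f_g corresponds to G = H·F_g.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
# toy object: three point sources of different brightness
f = np.zeros((N, N)); f[30, 28] = 5.0; f[30, 36] = 1.0; f[25, 32] = 2.0
h = rng.random((N, N)); h /= h.sum()      # arbitrary nonnegative "PSF"

F = np.fft.fft2(f)
H = np.fft.fft2(h)
G = H * F                  # spectrum of the noise-free image g = h (circ-conv) f
cross = G * np.conj(H)     # self cross spectrum with p = PSF:  G P* = |H|^2 F

# |H|^2 is real and nonnegative, so cross and F agree in phase:
# cross * conj(F) = |H|^2 |F|^2 is real, and its angle is ~ 0
phase_err = np.angle(cross * np.conj(F))
```

The same computation with p equal to an arbitrary truncated sub-image (rather than the PSF) would no longer give the object phase directly, which motivates the analysis of Chapters 7 and 8.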
Unfortunately, in real astronomical imaging, the presence of a well-isolated point star source is most unlikely for a given object.

In Chapters 7 and 8 we will show that when the sub-image of any star source is properly truncated, the self cross spectrum not only contains the Fourier phase information up to the diffraction limit, but also presents essentially different properties from those of the TC (or KT) correlation. Though this phase is not the phase of the object, we will show that it has much more direct and intimate relations to the Fourier phase of the object. Therefore, it is possible to estimate the phase of the object from the self cross spectrum. In fact, the self cross spectrum phase itself contains much information about the object. Such information is very useful in reconstructing the object, both in iterative and non-iterative phase recovery methods. The retention of diffraction-limited information in the self cross spectrum can be ensured if the truncated image p contains the image formed by about one complete wavefront at the aperture.

The main purpose of astronomical image reconstruction is to deconvolve the randomly fluctuating wavefront from the unknown object. In this thesis we assume that the unknown object is time-invariant. There are two kinds of reconstruction, one of which reconstructs the wavefront in real time and is called holographic reconstruction. This is one of the main topics of "adaptive optics"[36]. Adaptive optics needs a reference source to correct the randomly fluctuating wavefront in real time. The measured wavefront of a reference point source is used to adaptively adjust the angles of the multiple sub-mirrors which form the main mirror. Unfortunately, a reliable reference source seldom exists in astronomical imaging, especially in the visible light region, because the solid angle spanned by an isoplanatic patch is usually very small. We are not concerned with adaptive optics here.
Instead, we are considering image reconstruction by post-processing the astronomical images.

Even if the PSF is known, it is usually impossible to reconstruct the wavefront (which results in this PSF). This is because the OTF (the Fourier transform of the PSF) is the autocorrelation function of the wavefront. The solution for the wavefront obtained from the knowledge of the PSF only is not unique, except in very simple cases of wavefront deformation. Therefore, for post-image processing, we have to consider the statistics of the high-order correlations of the wavefront. The time average of any order of correlation of the wavefront can be obtained by imaging any isolated point source. Therefore an estimate of any-order correlation of the object can be obtained. Unfortunately, estimating the object from an any-order correlation of the images may not be feasible, or may not give a unique solution for the object, due to the intrinsic properties of the correlations. The KT correlation and the triple correlation are two examples of wavefront correlations. In the KT or TC method, the phases of the spectra of the average wavefront correlations are cancelled as a result of the statistical symmetry of the wavefront surface, leaving the KT spectrum phase or the TC's bispectrum phase, i.e., the phase difference of the object. However, the estimation of the phase from the phase difference is not an easy task. Neither the bispectrum nor the phase difference shows a close relation to the object. Our task in this work is to use the self cross correlation (Def. 4.2) of the wavefront, such that the correlation of the images shows minimized correlations between the object and the high-order wavefront correlations, while the self cross correlation itself shows a close relationship to the object.
This kind of self cross correlation makes reconstruction of the object easy.

5.2 The Self Cross Spectrum for the Photon-Limited Speckle Images

Let g_i(x,y) and p_i(x,y) represent the ith of T image observations and its truncated sub-image, respectively. Let the unknown object be f_g(x,y). We take g_i(x,y) and p_i(x,y) as samples of random processes. For notational simplicity, we will drop the subscript i. This will be consistent with the definitions in Chapter 4. The detected short-exposure image g(x,y) of any object can be described as a Poisson stochastic process; thus

g(x,y) = Σ_{m=1}^{K_g} δ(x - x_m, y - y_m) + n_g(x,y),  (x_m, y_m) ∈ R,        (5.1)

and

p(x,y) = Σ_{m=1}^{K_p} δ(x - x_m, y - y_m) + n_p(x,y),  (x_m, y_m) ∈ T,        (5.2)

where K_g and K_p are the numbers of detected photon events in a single image frame, with Poisson distributions[17], and (x_m, y_m) is the random position where the mth photon is detected. In (5.1) and (5.2), n_g and n_p represent additive white noise, which includes the detection noise that is inherent in the measurement process[37] and the background noise which is due to scattered moonlight or twilight.
These noise sources and the photon noise will be described in detail in Section 5.4.

The self cross correlation of g and p is

r_gp = g ⋆ p = Σ_{k=1}^{K_g} Σ_{l=1}^{K_p} δ(x + x_k - x_l, y + y_k - y_l)
     + ( Σ_{k=1}^{K_g} δ(x - x_k, y - y_k) ) ⋆ n_p
     + n_g ⋆ ( Σ_{l=1}^{K_p} δ(x - x_l, y - y_l) ) + n_g ⋆ n_p.        (5.3)

In the Fourier domain, (5.1) and (5.2) can be expressed as

G(ω_x, ω_y) = Σ_{k=1}^{K_g} e^{-j(ω_x x_k + ω_y y_k)} + N_g(ω_x, ω_y),  (x_k, y_k) ∈ R,        (5.4)

and

P(ω_x, ω_y) = Σ_{k=1}^{K_p} e^{-j(ω_x x_k + ω_y y_k)} + N_p(ω_x, ω_y),  (x_k, y_k) ∈ T.        (5.5)

The product of G and P* (* denotes the complex conjugate) gives the Fourier transform of the self cross correlation r_gp:

R_gp = G P* = Σ_{k=1}^{K_g} Σ_{l=1}^{K_p} e^{-j[ω_x(x_k - x_l) + ω_y(y_k - y_l)]} + N_p* Σ_{k=1}^{K_g} e^{-j(ω_x x_k + ω_y y_k)}
     + N_g Σ_{l=1}^{K_p} e^{+j(ω_x x_l + ω_y y_l)} + N_p* N_g.        (5.6)

Using a method similar to that in reference [17], the expected value of G P*, given g0 and the photon number K_g, is

E[G P* | G0, K_g] = K_p + G0 P0* + G0 E[N_p*] + E[N_g] P0* + E[N_g N_p*],        (5.7)

where G0 is the Fourier transform of g0, which is defined in Def. 4.4, and K_p is the photon-noise bias term which results from the photon noise. E[N_g N_p*] in (5.7) is the detection-noise bias term that results from the detection noise correlation. We have assumed that the zero-mean white noise n_g is independent of the signal Σ_{k=1}^{K_g} δ(x - x_k, y - y_k). (5.7) can be rewritten as

E[G P* | G0, K_g] = K_p + G0 P0* + σ²_np,        (5.8)

where we used

E[N_g N_p*] = E[|N_p|²] = σ²_np.        (5.9)

Eq. (5.9) follows from the fact that the noise n_p(x,y) is truncated from the noise n_g(x,y), i.e.

n_p(x,y) = { n_g(x,y),  (x,y) ∈ T
           { 0,         (x,y) ∈ R - T.

n_g(x,y) can be written as

n_g(x,y) = n_p(x,y) + Δn_g(x,y),        (5.10)

where

Δn_g(x,y) = { 0,         (x,y) ∈ T
            { n_g(x,y),  (x,y) ∈ R - T.

Therefore n_p(x,y) is uncorrelated with the noise Δn_g(x,y). From (5.8) we know that the self cross spectrum E[G P* | G0, K_g] is a biased estimate of G0 P0*. The two bias terms K_p and σ²_np are known a priori.
The two bias terms have the following relation with the noise n1 (or n2) of (4.5) and (4.8):

E[N1 N2*] = E[|N2|²] = E[|N_p|²] = σ²_np.

The implementation of the removal of these bias terms will be described in Chapter 8.

In the above we considered only one image frame. Since the object to be considered is very faint, one image frame has a very low SNR. To increase the SNR, we need to obtain an ensemble average of G P* from a series of short-exposure images. In real astronomical imaging, only a limited number of image observations is available. The ensemble average of G P* from a total of T image frames is given by

⟨G P*⟩ = ⟨K_p⟩ + ⟨G0 P0*⟩ + σ²_np + N_gp,        (5.11)

where ⟨·⟩ denotes the ensemble average. N_gp in (5.11) represents the noise due to the limited number of image observations.

5.3 Segmentation of Isoplanatic Patches from an Anisoplanatic Patch

The previous work was all based on the assumption that the object lies within a turbulence isoplanatic patch, so that the convolutional relationship g0 = h ⊗ f_g applies. Due to the limited turbulence coherence length, an isoplanatic patch spans only a very small solid angle, ranging from 2-4 arcsecs. An image resulting from one isoplanatic patch is often not isolated. Rather, images resulting from different isoplanatic patches are overlapped. As a result, the above convolutional relationship does not apply. Truncation errors cannot be avoided if an image sub-region which is considered to be formed through an isoplanatic patch is truncated from the whole image. Such truncation decreases the obtainable resolution of the reconstructed images[4]. Therefore, it is necessary to consider the overall effects of the anisoplanatic patch to avoid the truncation effects.

We assume that the anisoplanatic turbulence patch can be partitioned into several isoplanatic patches, such that the impulse response for each isoplanatic patch is space-invariant. Fig. 5.2 shows the sketch of an object with three sub-regions.
We can always assume that each star is located within an isolated region, since the dimensions in the focal plane of many stellar objects are tiny compared to the diameter of a telescope. $g_0(x,y)$ is now the sum of $N_0$ isoplanatic but overlapping sub-images:

\[
g_0(x,y) = \sum_{k=1}^{N_0} g_{0k}(x,y), \tag{5.12}
\]

where

\[
g_{0k}(x,y) = \begin{cases} h_k(x,y)\otimes f_k(x,y), & (x,y)\in\mathcal{R}_k \\ 0, & (x,y)\in\mathcal{R}-\mathcal{R}_k. \end{cases} \tag{5.13}
\]

Figure 5.2 The partition of an object, each sub-object $f_1$, $f_2$ and $f_3$ being located in an isoplanatic patch. The darker area represents the non-zero region of each sub-object.

$g_{0k}(x,y)$ represents the sub-image formed through the $k$th isoplanatic patch, and $h_k(x,y)$ is the PSF corresponding to the $k$th isoplanatic patch. Substituting (5.13) into (5.12), it becomes

\[
g_0(x,y) = \sum_{k=1}^{N_0} h_k(x,y)\otimes f_k(x,y). \tag{5.14}
\]

The sub-regions $\mathcal{R}_k$ are overlapping, i.e. $\mathcal{R}_k\cap\mathcal{R}_l\neq\emptyset$ in general, and $\bigcup_{k=1}^{N_0}\mathcal{R}_k=\mathcal{R}$. Further, we assume that for each isoplanatic image region $\mathcal{R}_k$ there is a truncated image denoted by

\[
p_{0k}(x,y) = \begin{cases} g_0(x,y), & (x,y)\in\mathcal{T}_k \\ 0, & (x,y)\in\mathcal{R}-\mathcal{T}_k, \end{cases} \tag{5.15}
\]

where $\mathcal{T}_k$ is a proper subset of $\mathcal{R}_k$ and $\mathcal{T}_k\cap\mathcal{T}_l\neq\emptyset$ in general. It can be seen from (5.15) and (5.12) that $p_{0k}(x,y)$ can have contributions from $f_k(x,y)$ and from other sub-objects neighboring $f_k(x,y)$; i.e., $p_{0k}(x,y)$ results from more than one isoplanatic patch. Specifically, $p_{0k}(x,y)$ can be written as

\[
p_{0k}(x,y) = \begin{cases} \displaystyle\sum_{i=1}^{N_0} p_{0ki}(x,y), & (x,y)\in\mathcal{T}_k \\ 0, & (x,y)\in\mathcal{R}-\mathcal{T}_k, \end{cases} \tag{5.16}
\]

where the contribution from sub-object $f_i(x,y)$ is

\[
p_{0ki}(x,y) = \begin{cases} g_{0i}(x,y), & (x,y)\in\mathcal{T}_k\cap\mathcal{R}_i \\ 0, & (x,y)\in\mathcal{R}-(\mathcal{T}_k\cap\mathcal{R}_i). \end{cases} \tag{5.17}
\]

Fig. 5.3 shows an example of the various regions of sub-images.

Figure 5.3 An example of the sub-image and truncated sub-image regions resulting from three isoplanatic patches

Substituting for $P$ in (5.11) by the Fourier transform of $p_k = p_{0k} +$
$n_2$, (5.11) becomes

\[
\langle GP_k^* \rangle = \langle K_{p_k} \rangle + \langle G_0 P_{0k}^* \rangle + \sigma_{n_p}^2
= \langle K_{p_k} \rangle + \sum_{l=1}^{N_0} \langle H_l P_{0k}^* \rangle F_l + \sigma_{n_p}^2 + N_{(gp_k)}, \qquad k=1,\dots,N_0. \tag{5.18}
\]

The $N_0$ linear systems of equations (5.18) constitute the general truncated cross-spectra form for an anisoplanatic patch. In order to estimate the unknown object

\[
f_g(x,y) = \sum_{l=1}^{N_0} f_l(x,y)
\]

from (5.18), we need to find the $N_0^2$ cross spectra

\[
\{\langle H_l P_{0k}^* \rangle,\ l,k = 1,\dots,N_0\},
\]

or equivalently the $N_0^2$ cross correlations

\[
\{\langle h_l \star p_{0k} \rangle,\ l,k = 1,\dots,N_0\};
\]

that is, we need to calculate the cross correlation between images from different isoplanatic patches. This is very difficult, since the PSFs $\{h_l,\ l = 1,\dots,N_0\}$ are usually unknown a priori.

Consider now the $k$th sub-object $f_k$. We assume that the cross correlation between two images from widely separated isoplanatic patches is very low; thus the resolution limit of their cross correlation $\{\langle h_l \star h_k \rangle,\ l\neq k\}$ is equivalent to that of the long exposure. However, we need to consider the cross correlations

\[
h_i \star p_{0ki} = h_i \star g_{0i}, \qquad i\neq k,\ i = 1,\dots,N_0. \tag{5.19}
\]

Fortunately, we will show in our simulations in later chapters that the intensity distribution of $h_i \star p_{0ki}$ is mainly located outside the non-zero region of the $k$th sub-object. Besides, we choose $p_{0k}$ to be the brightest region within the sub-image $g_{0k}$. Thus, for the region near the centroid of the $k$th sub-object $f_k$, $h_k \star p_{0kk}$ dominates the RHS of (5.18). For $N_0 = 1$, the self cross spectrum (5.18) becomes

\[
\langle GP_1^* \rangle = \langle K_{p_1} \rangle + \langle H_1 P_{01}^* \rangle F_1 + \sigma_{n_p}^2 + \langle N_{(gp_1)} \rangle. \tag{5.20}
\]

This is the simplest isoplanatic case (the isoplanatic hypothesis).

5.4 The Physics of the Noise

To understand the nature of the noise in astronomical imaging, we must study the speckle image detection process. There are analogue and photon-counting detectors.
All presently used infrared imagers are analogue, while photon-counting detectors are used for imaging in the visible-light region.

Taking the image detector into account, the $i$th short-exposure monochromatic image $g$ of (5.1) becomes

\[
g = \sum_{m=1}^{K_g} k(x_m,y_m)\, d_m(x,y) \otimes \delta(x - x_m,\, y - y_m) + n_g(x,y), \tag{5.21}
\]

where $d_m(x,y)$ is the detector point spread function for the $m$th photon and $k(x_m,y_m)$ is the weight function. For photon-counting detectors, $d_m(x,y)$ is approximately an impulse function and $k(x_m,y_m)\approx 1$. However, this does not hold for other detectors, such as intensified vidicons.

In (5.21), $d_m(x,y)$ is applied as a convolution; it is written with a subscript $m$ to emphasize that it may vary on a photon-by-photon basis. Similarly, the weights $k(x_m,y_m)$ of the photon events are in general not identical, but depend upon the location at which the photon is detected. Thus, the values of $d_m(x,y)$ and $k(x_m,y_m)$ may vary from frame to frame, and they therefore introduce non-linearity into the observation system. $d_m(x,y)$ and $k(x_m,y_m)$ must be calibrated and compensated for, at least on a frame-by-frame basis, if not on a photon-by-photon one. The signal-dependent nature of $d_m(x,y)$ typically arises from the geometric distortion of the image. The photon-locus dependency of $k(x_m,y_m)$, often called the "flat-field" response, reflects the gain and sensitivity of the particular detector elements involved. The geometric-distortion dependency of $d_m(x,y)$ is compensated for via a reference to a regular grid observed in the focal plane. The flat-field response is compensated for via a reference to a uniform intensity distribution observed in the focal plane. It is thus quite easy to compensate for these effects, since they are not random.

The other noise term in (5.21) is the additive noise $n_g(x,y)$, which includes the detection noise and the background noise.

The detection noise is inherent in the measurement process.
It may be statistical in origin and may be associated with the detector used. Detection noise may arise from sources in the detector with photon-like responses (dark noise) and from thermal electronic readout noise. The dark noise and the readout noise differ significantly. The photon-like responses are colored by the detector's PSF, with its non-linearities as described above. The thermal electronic readout noise, on the other hand, has an entirely different spectrum, which is commonly white or nearly white. The readout noise is not a weak source of noise, but it is usually relatively stable and is associated with any analogue detector.

The other additive noise, the background noise, arises when additional sources of photons are present in the field of view. These photons add signals which are considered noise, since they do not add relevant information. For example, there may be a bright sky background due to scattered moonlight or twilight. The background noise is assumed to be white and signal-independent.

In the infrared region, the additive white noise sources dominate. In the visible-light region, photon noise, which is additive and signal-dependent with the well-known Poisson distribution, dominates. Photon noise is due to the quantization of light. The effects of photon noise are those of adding a photon-noise bias term and another signal-dependent noise term to the image correlation function.

As is customary, we will simply ignore the additive white noise in the visible-light region, while in infrared imaging the photon noise is ignored.

There is another complicated noise source associated with an image intensifier, which requires a gain of around 500,000 to detect a single photoevent in a CCD (not a MAMA detector). Owing to the image intensifier's properties, a photoevent is a spread-out spot [38] with statistically varying intensity. Additionally, the above-mentioned background noise degrades the raw data.
To avoid these noise sources, only the photoevent addresses can be extracted from the raw speckle interferograms, by searching for the local maxima of the spread photoevents [39]. Another method of avoiding this noise is a photon-counting method which utilizes a skeletizing algorithm [40][41]. A common problem of photon counting is that two or more overlapping photons are detected as only one event. In other words, close photoevents produce overlapping spots which appear as one spot with one local maximum, and hence the address of only a single photoevent is detected. The missing photons of close photoevent addresses cause the "photon-counting hole" in the average correlations [38]. This noise becomes severe when the object is not very faint. Compensation for this noise is complicated and is discussed in detail in reference [38]. This noise is ignored here, as we assume that the probability of detecting two or more photons at one address is very small for the faint objects considered.

Later, we will analyze and prove some important properties of the self cross spectrum. We will then use these properties to retrieve the Fourier phase of the object by both iterative and non-iterative methods. Before retrieving the phase, we shall in the next chapter consider a simpler case of reconstruction in the presence of background and detection noise.

Chapter 6
THE ESTIMATIONS OF TURBULENCE-DEGRADED IMAGES IN THE PRESENCE OF DETECTION AND BACKGROUND NOISE ONLY

6.1 Introduction

The speckle interferometry technique estimates the amplitude of the Fourier transform of the object from images degraded by atmospheric turbulence. It was developed for the case when noise is not present. In this chapter we consider the simpler problem of recovering the modulus of the object in the presence of signal-independent detection and background noise, while the signal-dependent photon noise is ignored. We consider the case when the imaging system operates in the infrared region, or operates in the visible-light region with a fairly bright object.
In such cases, the background and detection noise dominate. In this section we discuss the extension of the traditional speckle interferometry technique to this problem. In Section 2 we report on a filter with better noise-suppression performance. We assume knowledge of a series of short-exposure image observations and of the averaged autocorrelation functions relating to the point spread function (PSF).

To retrieve the information about the object contained in the speckle data, a number of interferometric imaging techniques have been developed and applied in optical astronomy and in other situations involving imaging through turbulence, as mentioned in Chapter 2. Labeyrie [1] first proposed and implemented the technique of speckle interferometry for obtaining a diffraction-limited estimate of the spatial autocorrelation of stellar objects. This technique finds the modulus (equivalently, the autocorrelation) of the unknown object. Since $\langle|G|^2\rangle$, the time-averaged squared Fourier modulus of the observation (i.e., the power spectrum estimated from a limited number of image observations), is the product of $|F_g|^2$, the squared Fourier modulus of the unknown object, and $\langle|H|^2\rangle$, the time-averaged squared Fourier modulus of the impulse response, we have

\[
\langle|G|^2\rangle = \langle|H|^2\rangle\, |F_g|^2. \tag{6.1}
\]

Thus, the Fourier modulus of $f_g$ is obtained as

\[
|F_g|^2 = \frac{\langle|G|^2\rangle}{\langle|H|^2\rangle}. \tag{6.2}
\]

Note that the SI estimate (6.2) can be considered as an inverse filter applied to the power-spectra relation (6.1). The inverse Fourier transform of $|F_g|^2$ is the object's autocorrelation function, which can be used to measure star diameter and oblateness. However, it is clear that the retrievable information about the object will, in general, not be complete, for it is the squared modulus of the object spectrum (or, equivalently, the autocorrelation of the object) that is obtained, and not the complete spectrum itself.
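The ratio (6.2) can be verified on synthetic noiseless frames. In the sketch below (object, PSF model, frame count and names are all illustrative assumptions), each frame is blurred by a different random PSF, and the frame-averaged $|G|^2$ divided by the frame-averaged $|H|^2$ recovers $|F_g|^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
f = np.zeros((N, N))
f[28:36, 26:38] = 1.0                      # toy star-like object
F = np.fft.fft2(f)

num = np.zeros((N, N))                     # accumulates |G|^2
den = np.zeros((N, N))                     # accumulates |H|^2
for _ in range(25):
    h = rng.random((N, N))                 # stand-in random short-exposure PSF
    h /= h.sum()
    H = np.fft.fft2(h)
    G = H * F                              # g = h (x) f in the Fourier domain
    num += np.abs(G) ** 2
    den += np.abs(H) ** 2

F2_est = num / den                         # Labeyrie estimate, Eq. (6.2)
```

In the noiseless case the estimate is exact; the chapter's subject is precisely what happens when additive noise breaks this identity.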
As discussed in Chapter 2, the Knox–Thompson (KT) method [2], the triple correlation method [7][42] and phase retrieval by generalized projections [24] are extensions of the speckle interferometry technique for recovering the Fourier phase from atmospherically degraded astronomical speckle data.

Some attempts to reconstruct atmospheric-turbulence-degraded images have considered the case when photon noise is present. Photon-noise-limited resolution is obtained by using the KT method [43][44][45] and the triple correlation method [7][42][45] when the number of image observations at low light level is large enough. In later chapters, we will detail the algorithms and implementations of object reconstruction from the truncated cross correlation in the presence of both photon and detection noise. As discussed in the previous chapter, when the image sensor operates in the photon-counting mode, the photon noise that is due to the quantization of light dominates the detection noise at visible wavelengths at low light levels. The photon noise introduces a complex bias term and a random term in both the power-spectrum equation and the phase-difference equations of the KT phase-retrieval algorithm. The effects of the photon-noise bias term may be removed by subtracting appropriately scaled constants from the power spectrum of the image and from the phase-difference equations. Accurate compensation for the effects of the photon-noise bias term is thus easy to implement. The estimation of the Fourier modulus of the object is realized by the speckle interferometry technique [43] (which is basically the inverse filter applied to the power-spectra relation between the object and the observations).
This is valid since the photon-noise contribution to the Fourier modulus equation is a constant if the randomly changing term is ignored.

However, since most space objects of interest are quite faint for the imaging process, a high-gain image intensifier is also needed. Also, in many cases the recorded data has to be converted to digitized form for the required digital processing. In these cases, intensifier shot noise and digitization noise are introduced. In addition, at longer wavelengths (e.g., in infrared astronomical speckle interferometry) the image degradation due to atmospheric turbulence is less than that at visible wavelengths. Infrared speckle data are contaminated by detector noise at shorter wavelengths and by thermal noise at longer wavelengths, where these noise sources dominate the photon noise. The detection noise, the digitization noise and the thermal noise may reasonably be assumed to be additive. These noise sources were described in detail in the previous chapter. Compensation for the effects of these additive noise sources is a more difficult problem than compensation for photon noise, and it generally leaves severe residual errors in the reconstructed image [47]. In this chapter we address this problem. We consider the case when the atmospheric-turbulence-degraded images are contaminated with additive white noise only. This additive noise may include photon noise if the latter is approximated by white noise [44].
In this chapter, we assume that the noise is signal-independent with zero mean.

For the model written in the Fourier domain as

\[
G = H F_g + N_g, \tag{6.3}
\]

the time average of the squared modulus of $G$ can be expressed as

\[
\langle|G|^2\rangle = \langle|H|^2\rangle|F_g|^2 + \langle H^* F_g^* N_g\rangle + \langle H F_g N_g^*\rangle + \langle|N_g|^2\rangle. \tag{6.4}
\]

If the random variables $\langle H^*F_g^*N_g\rangle$ and $\langle HF_gN_g^*\rangle$ are approximated by their respective expected values, which are zero, then applying the speckle interferometry technique we obtain the Fourier modulus of $f_g$:

\[
|F_g|^2 = \frac{\langle|G|^2\rangle - \langle|N_g|^2\rangle}{\langle|H|^2\rangle}. \tag{6.5}
\]

The same estimate (6.5) is also obtained by applying the inverse filter to the model (6.4). Using this speckle interferometry (inverse) filter may cause significant errors in the spectral estimates. The modulus of the turbulence optical transfer function is usually very small at high frequencies; thus the value of the denominator of the above filter may approach zero. Also, as a consequence of the very short exposure time and the low light level of astronomical objects, the signal-to-noise ratio will be low. The effect of neglecting the two noise terms in (6.4) can therefore seriously degrade the quality of the reconstructed image. Unsatisfactory estimates of the modulus of the object could also affect the phase estimate of the IFT phase retrieval method. Seldin and Fienup [13] have shown that the chances of obtaining a unique solution in the iterative Fourier transform algorithm decrease as the error in the estimated power spectrum of the object increases.

In the rest of this chapter we derive a new filter to estimate the modulus of the object when additive noise is present. Our filter is based on the least-squares criterion. The Fourier phase is reconstructed by using the Knox–Thompson method, and the phase dislocation is considered [43][46].
The results of our filter are much better than those of the inverse speckle interferometry approach, in that the high spatial frequencies are emphasized (enhanced) and the noise is suppressed. This is demonstrated both theoretically and experimentally.

6.2 Estimation of the Fourier Modulus of the Object

The Noisy Degraded Image Model

For an incoherent imaging system, the $i$th short-exposure image intensity $g(x,y,t_i)$ is obtained by the convolution of the object intensity $f_g(x,y)$ with the space-invariant point spread function $h(x,y,t_i)$, which is a random, time-varying function; its randomness is the result of random wavefront fluctuations as light propagates through the turbulent medium. Suppose that the integration time $\Delta t$ of each short-exposure image is less than the characteristic fluctuation time of the turbulence; then the point spread function remains essentially unchanged over the time interval $\Delta t$. Moreover, the time interval between image observations is assumed greater than the fluctuation time of the atmosphere; this ensures that the image observations are statistically independent. White noise $n_g(x,y,t_i)$ is then added, leading to the equation

\[
g(x,y,t_i) = h(x,y,t_i) \otimes f_g(x,y) + n_g(x,y,t_i), \tag{6.6}
\]

where $\otimes$ represents the convolution operator and the time takes discrete values $\{t_i\}$, $i = 1, 2, \dots, T$. $T$ is the number of independent short-exposure image observations. The observations $g(x,y,t_i)$ are assumed known. The noise process $n_g(x,y,t_i)$ is assumed to be uncorrelated, with zero mean and known strength $\sigma_{n_g}^2$.

Restoration Via the Wiener Criterion

The Wiener estimate $\hat f_g$ is based on simultaneously considering "all" the observations $g(x,y,t_i)$, $i = 1, 2, \dots, T$. The criterion is that of minimizing

\[
E\big[\hat f_g - f_g\big]^2 \tag{6.7}
\]

such that

\[
\hat f_g = \sum_{i=1}^{T} h_r(x,y,t_i) \otimes g(x,y,t_i). \tag{6.8}
\]

We first consider the case when the impulse responses $h(x,y,t_i)$, $i = 1, 2, \dots, T$, are samples of a random process $h(x,y,t)$.
It can be shown (proof not included) that the sought-after $h_r$'s in (6.8) are the solution of the following system of linear equations in the frequency domain:

\[
\begin{bmatrix}
E[|H_1|^2]W_{f_g}+\sigma_{n_g}^2 & E[H_1^*H_2]W_{f_g} & \cdots & E[H_1^*H_T]W_{f_g}\\
E[H_2^*H_1]W_{f_g} & E[|H_2|^2]W_{f_g}+\sigma_{n_g}^2 & \cdots & E[H_2^*H_T]W_{f_g}\\
\vdots & \vdots & \ddots & \vdots\\
E[H_T^*H_1]W_{f_g} & E[H_T^*H_2]W_{f_g} & \cdots & E[|H_T|^2]W_{f_g}+\sigma_{n_g}^2
\end{bmatrix}
\begin{bmatrix} H_{r1}\\ H_{r2}\\ \vdots\\ H_{rT} \end{bmatrix}
=
\begin{bmatrix} E[H_1^*]W_{f_g}\\ E[H_2^*]W_{f_g}\\ \vdots\\ E[H_T^*]W_{f_g} \end{bmatrix}, \tag{6.9}
\]

where $H_i$ denotes $H(\vec\omega,t_i)$, $W_{f_g}$ denotes $W_{f_g}(\vec\omega)$, the power spectrum of $f_g$, and $H_{ri}$ denotes $H_r(\vec\omega,t_i)$. Thus the estimate depends on the temporal cross correlations $E[H_i^*H_j]$, which are assumed unknown. For the special case when

\[
E[H_i]=E[H_1], \quad E[|H_i|^2]=E[|H_1|^2] \quad \forall i
\]

and

\[
E[H_i^*H_j]=E[H_1^*H_j] \quad \forall i\neq j,
\]

the solution of the system (6.9) yields

\[
\hat F_g = \frac{E[H^*]\sum_{i=1}^{T} G_i}{E[|H|^2] + \sum_{i\neq 1} E[H_1^* H_i] + \sigma_{n_g}^2/W_{f_g}}, \tag{6.10}
\]

where $G_i$ denotes $G(\vec\omega,t_i)$. If the temporal cross correlations $E[H_i^*H_j]$ are ignored, (6.10) reduces to

\[
\hat F_g = \frac{E[H^*]\sum_{i=1}^{T} G_i}{E[|H|^2] + \sigma_{n_g}^2/W_{f_g}}. \tag{6.11}
\]

This Wiener estimate is thus equivalent to that of the Wiener filter based on the time-averaged (or long-exposure) image and impulse response.

If the $h(x,y,t_i)$, $i = 1, 2, \dots, T$, are considered as fixed or deterministic, and not as samples of a random process $h(x,y,t)$, then the Wiener estimate based on the criteria in (6.7) and (6.8), i.e. using all the multiple observations, is the solution of the following simultaneous linear equations:

\[
\begin{bmatrix}
|H_1|^2 W_{f_g}+\sigma_{n_g}^2 & H_1^*H_2 W_{f_g} & \cdots & H_1^*H_T W_{f_g}\\
H_2^*H_1 W_{f_g} & |H_2|^2 W_{f_g}+\sigma_{n_g}^2 & \cdots & H_2^*H_T W_{f_g}\\
\vdots & \vdots & \ddots & \vdots\\
H_T^*H_1 W_{f_g} & H_T^*H_2 W_{f_g} & \cdots & |H_T|^2 W_{f_g}+\sigma_{n_g}^2
\end{bmatrix}
\begin{bmatrix} H_{r1}\\ H_{r2}\\ \vdots\\ H_{rT} \end{bmatrix}
=
\begin{bmatrix} H_1^* W_{f_g}\\ H_2^* W_{f_g}\\ \vdots\\ H_T^* W_{f_g} \end{bmatrix}. \tag{6.12}
\]

Solving these equations, we obtain

\[
\hat F_g = \frac{\sum_{i=1}^{T} H_i^* G_i}{\sum_{i=1}^{T} |H_i|^2 + \sigma_{n_g}^2/W_{f_g}}. \tag{6.13}
\]

This filter needs the values $H_i$, $i = 1, 2, \dots, T$, for its solution.
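When the per-frame OTFs $H_i$ are available (e.g., in a simulation), the deterministic multi-frame estimate (6.13) can be checked at a single frequency. A scalar sketch, in which the seed, frame count, object value and noise level are all made-up assumptions:

```python
import numpy as np

# Scalar, single-frequency check of the multi-frame estimate (6.13):
#   F_hat = sum_i H_i* G_i / (sum_i |H_i|^2 + sigma_n^2 / W_f)
rng = np.random.default_rng(5)
T = 40
F_true = 3.0 + 2.0j                 # object spectrum at one frequency
Wf = abs(F_true) ** 2               # object power spectrum at that frequency
sigma2 = 0.5                        # additive-noise power sigma_n^2

H = rng.standard_normal(T) + 1j * rng.standard_normal(T)   # per-frame OTFs
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(T)
                               + 1j * rng.standard_normal(T))
G = H * F_true + noise              # per-frame observations, Eq. (6.3)

F_hat = np.sum(np.conj(H) * G) / (np.sum(np.abs(H) ** 2) + sigma2 / Wf)
```

With 40 frames the estimate lands close to F_true; the catch, as noted in the text, is that this filter is usable only when the $H_i$ themselves are known.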
Unfortunately, for our problem these are unknown.

From the above it is deduced that the Wiener estimates based on simultaneously considering all the observations, (6.10) and (6.13), need more information about the OTFs than merely their time-averaged autocorrelation functions. We shall therefore take a different approach.

An Ad Hoc Scheme for Restoring the Fourier Modulus

We shall assume the $h(x,y,t_i)$, $i = 1, 2, \dots, T$, to be deterministic, but instead of obtaining the estimate by simultaneously considering all the observations (as above), we shall first obtain, for every $t_i$, the estimate $\hat f_g(x,y,t_i)$ from its corresponding observation $g(x,y,t_i)$ and impulse response $h(x,y,t_i)$. We shall then manipulate the result so that, after time averaging, we obtain the result $\hat F_g$ in terms of the independently observed time-averaged values of $\langle|H|^2\rangle$ and $\langle|G|^2\rangle$. The phase of $\hat F_g$ is obtained separately later, using a method based on the KT technique [2].

To obtain $\hat f_g(x,y,t_i)$ we use the least-squares criterion

\[
\text{minimize } E\big[\hat f_g(x,y,t_i) - f_g(x,y)\big]^2 \tag{6.14}
\]

such that

\[
\hat f_g(x,y,t_i) = h_r(x,y,t_i) \otimes g(x,y,t_i). \tag{6.15}
\]

The answer is known to be

\[
\hat F_g(\vec\omega,t_i) = \frac{W_{f_g}(\vec\omega,t_i)}{W_g(\vec\omega,t_i)}\, G(\vec\omega,t_i), \tag{6.16}
\]

where

\[
W_g(\vec\omega,t_i) = |H(\vec\omega,t_i)|^2 |F_g|^2 + \sigma_{n_g}^2 \tag{6.17}
\]

and

\[
W_{f_g}(\vec\omega,t_i) = H^*(\vec\omega,t_i)\,|F_g|^2. \tag{6.18}
\]

With (6.6) and (6.15)–(6.18), $\hat F_g(\vec\omega,t_i)$ becomes

\[
\hat F_g(\vec\omega,t_i) = \frac{W_{f_g}(\vec\omega,t_i)}{W_g(\vec\omega,t_i)}\,G(\vec\omega,t_i) = \frac{H^*(\vec\omega,t_i)\,|F_g|^2}{W_g(\vec\omega,t_i)}\,G(\vec\omega,t_i). \tag{6.19}
\]

Since $H(\vec\omega,t_i)$ is unknown, we multiply both sides of (6.19) by the conjugate of $\hat F_g(\vec\omega,t_i)$, take the expected value, and substitute (6.17) into it, obtaining the power spectrum of the estimate $\hat f_g(x,y,t_i)$:

\[
E\big[|\hat F_g(\vec\omega,t_i)|^2\big] = \left(1 - \frac{\sigma_{n_g}^2}{W_g(\vec\omega,t_i)}\right) |F_g|^2. \tag{6.20}
\]

The estimate in (6.20) is based on a single observation. To obtain the estimate from all the images, we now take the time average $\langle\,\rangle$ of $E\big[|\hat F_g(\vec\omega,t_i)|^2\big]$ (here $\langle\,\rangle$ is equivalent to the summation $\frac{1}{T}\sum_{i=1}^{T}$).
(6.20) then becomes

\[
\Big\langle E\big[|\hat F_g(\vec\omega,t_i)|^2\big] \Big\rangle = \left\langle 1 - \frac{\sigma_{n_g}^2}{W_g(\vec\omega,t_i)} \right\rangle |F_g|^2. \tag{6.21}
\]

$|F_g|^2$ may be obtained from the measured quantity $\langle|G(\vec\omega,t_i)|^2\rangle$, i.e. from the equation

\[
\langle|G|^2\rangle = \langle|H|^2\rangle|F_g|^2 + \langle H^*F_g^*N_g\rangle + \langle HF_gN_g^*\rangle + \sigma_{n_g}^2, \tag{6.22}
\]

where we have dropped the variables $\vec\omega$ and $t_i$ for simplicity and have ignored the difference between the two quantities $\sigma_{n_g}^2$ and $\langle|N_g|^2\rangle$. Substituting (6.22) into (6.21) and approximating $W_g$ by $|G|^2$ yields

\[
\Big\langle E\big[|\hat F_g|^2\big] \Big\rangle = \left\langle 1 - \frac{\sigma_{n_g}^2}{|G|^2} \right\rangle \frac{\langle|G|^2\rangle - \langle H^*F_g^*N_g\rangle - \langle HF_gN_g^*\rangle - \sigma_{n_g}^2}{\langle|H|^2\rangle}. \tag{6.23}
\]

The time-averaged correlations $\langle H^*F_g^*N_g\rangle$ and $\langle HF_gN_g^*\rangle$ in (6.23) are approximated by their respective expected values, which are zero. With this approximation, (6.23) becomes

\[
\Big\langle E\big[|\hat F_g|^2\big] \Big\rangle = \left(1 - \frac{\sigma_{n_g}^2}{\langle|G|^2\rangle}\right) \frac{\langle|G|^2\rangle - \sigma_{n_g}^2}{\langle|H|^2\rangle}. \tag{6.24}
\]

Finally, the Fourier modulus of the estimate $\hat f_g$ is found by taking the square root of (6.24):

\[
|\hat F_g| = \left[ \left(1 - \frac{\sigma_{n_g}^2}{\langle|G|^2\rangle}\right) \frac{\langle|G|^2\rangle - \sigma_{n_g}^2}{\langle|H|^2\rangle} \right]^{1/2}. \tag{6.25}
\]

If a measurement of the quantity $\langle|H|^2\rangle$ is not available, this term can be calculated by the method described in reference [17], with the assumption that the lens aberration is known a priori.

Discussion of the Fourier Modulus Estimate

In the special case when the noise is not present ($\sigma_{n_g}^2 = 0$), our estimate (6.24) reduces to

\[
|\hat F_g|^2 = \Big\langle E\big[|\hat F_g(\vec\omega,t_i)|^2\big] \Big\rangle = \frac{\langle|G|^2\rangle}{\langle|H|^2\rangle}, \tag{6.26}
\]

which is the same as that of the traditional speckle interferometry technique (6.2) in the absence of noise.
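The no-noise equivalence just noted, and the noise-suppression behaviour of (6.24) relative to the inverse filter (6.5), can be checked with scalar values; a minimal sketch in which the numeric inputs are made-up assumptions, not thesis data:

```python
# Scalar sketch comparing the inverse (speckle-interferometry) modulus
# estimate, Eq. (6.5), with the least-squares estimate, Eq. (6.24).
def modulus_sq_inverse(mean_G2, mean_H2, var_n):
    # Eq. (6.5): (<|G|^2> - sigma_n^2) / <|H|^2>
    return (mean_G2 - var_n) / mean_H2

def modulus_sq_ls(mean_G2, mean_H2, var_n):
    # Eq. (6.24): the extra factor (1 - sigma_n^2/<|G|^2>) damps the
    # estimate where the measured power is close to the noise floor
    return (1.0 - var_n / mean_G2) * (mean_G2 - var_n) / mean_H2

no_noise_inv = modulus_sq_inverse(4.0, 2.0, 0.0)   # noise-free: identical
no_noise_ls = modulus_sq_ls(4.0, 2.0, 0.0)
noisy_inv = modulus_sq_inverse(2.0, 0.5, 1.0)      # low-SNR frequency
noisy_ls = modulus_sq_ls(2.0, 0.5, 1.0)
```

At a low-SNR frequency the LS value is smaller than the inverse-filter value, and it tends to zero rather than blowing up as $\langle|G|^2\rangle$ approaches $\sigma_{n_g}^2$.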
In the presence of additive noise the traditional speckle interferometry technique yields the inverse filter (6.5), which is written again as

\[
|F_g|^2 = \frac{\langle|G|^2\rangle - \sigma_{n_g}^2}{\langle|H|^2\rangle}. \tag{6.27}
\]

The difference between our filter (6.24) and the inverse filter (6.27) is the presence of the term $\left(1 - \sigma_{n_g}^2/\langle|G|^2\rangle\right)$ in the numerator of our filter.

To anticipate the performance of the above filters, we note that a filter may run into problems when the common denominator $\langle|H|^2\rangle$ has values near zero. This term, the time-averaged squared-modulus optical transfer function (MTF), is known to attain very small values at high spatial frequencies. When this term is equal to zero, (6.26) and (6.27) blow up, i.e. the estimate goes to infinity. However, for our filter (6.24), when $\langle|H|^2\rangle = 0$ we have $\langle|G|^2\rangle = \langle|N_g|^2\rangle = \sigma_{n_g}^2$, so the extra term in the numerator of (6.24) becomes

\[
1 - \frac{\sigma_{n_g}^2}{\langle|G|^2\rangle} = 1 - \frac{\sigma_{n_g}^2}{\sigma_{n_g}^2} = 0, \tag{6.28}
\]

since $\langle|H|^2\rangle = 0$ when each $|H|^2 = 0$. Thus the computational instability of this filter is significantly reduced. The above discussion considered the very severe case when $\langle|H|^2\rangle$ is equal to zero. When $\langle|H|^2\rangle$ attains very small values, the high-frequency noise amplification which, as shall be found experimentally, is very obvious in the images filtered by the speckle interferometry method is not present in the images filtered by our filter.

6.3 Reconstruction of the Fourier Phase of the Object

To compensate for the noise effect in the Fourier phase of the reconstructed image, we here use the original KT technique [2] and apply it to our turbulence-degraded image restoration problem. The KT method uses the fact that the phase difference between two values of $F_g$, separated in spatial frequency by a small $\Delta\vec\omega$, can be retrieved from the time autocorrelation function of the original speckle data $g(x,y,t_i)$.
Writing (6.6) in the Fourier domain, we obtain

\[
G(\vec\omega,t_i) = H(\vec\omega,t_i)F_g(\vec\omega) + N_g(\vec\omega,t_i). \tag{6.29}
\]

The time autocorrelation in the Fourier domain can be expressed as

\[
\langle G(\vec\omega_1,t_i)G^*(\vec\omega,t_i)\rangle = \langle H(\vec\omega_1,t_i)H^*(\vec\omega,t_i)\rangle\, F_g(\vec\omega_1)F_g^*(\vec\omega)
+ \langle N_g(\vec\omega_1,t_i)H^*(\vec\omega,t_i)F_g^*(\vec\omega)\rangle
+ \langle H(\vec\omega_1,t_i)F_g(\vec\omega_1)N_g^*(\vec\omega,t_i)\rangle
+ \langle N_g(\vec\omega_1,t_i)N_g^*(\vec\omega,t_i)\rangle, \tag{6.30}
\]

where $\vec\omega_1$ denotes $\vec\omega + \Delta\vec\omega$.

As the number of observations goes to infinity, the 2nd, 3rd and 4th terms on the RHS of (6.30) vanish if the noise $n_g(x,y,t_i)$ is white and uncorrelated with $h(x,y,t_i)$ and $f_g(x,y)$. We point out that, in realistic situations, the sum of these three terms may have a non-negligible value compared to the first term when the number of images is not large enough. Thus, accurate phase retrieval usually requires more short-exposure image observations than accurate modulus restoration does; that is, the Fourier phase estimate is more sensitive to additive noise. Ignoring the 2nd, 3rd and 4th terms on the RHS of (6.30) and dividing by $|\langle G(\vec\omega_1,t_i)G^*(\vec\omega,t_i)\rangle|$, (6.30) becomes

\[
\frac{\langle G(\vec\omega_1,t_i)G^*(\vec\omega,t_i)\rangle}{|\langle G(\vec\omega_1,t_i)G^*(\vec\omega,t_i)\rangle|}
= \frac{\langle H(\vec\omega_1,t_i)H^*(\vec\omega,t_i)\rangle}{|\langle H(\vec\omega_1,t_i)H^*(\vec\omega,t_i)\rangle|}\;
\frac{F_g(\vec\omega_1)F_g^*(\vec\omega)}{|F_g(\vec\omega_1)F_g^*(\vec\omega)|}. \tag{6.31}
\]

Denoting the phase of $F_g$ by $\theta_{f_g}$, we obtain

\[
\text{phase of } \left[\frac{F_g(\vec\omega_1)F_g^*(\vec\omega)}{|F_g(\vec\omega_1)F_g^*(\vec\omega)|}\right]
= \theta_{f_g}(\vec\omega_1) - \theta_{f_g}(\vec\omega) = \Delta\theta_{f_g}(\vec\omega,\Delta\vec\omega). \tag{6.32}
\]

$\theta_{f_g}$ can be obtained by summing the phase differences $\Delta\theta_{f_g}(\vec\omega,\Delta\vec\omega)$. For the case of imaging through the atmosphere with a telescope having significant aberrations, $\langle H(\vec\omega_1,t_i)H^*(\vec\omega,t_i)\rangle$, the time autocorrelation of the transfer function, is a complex function, and point-star pictures would be needed to correct the phase information [48]. In the absence of telescope aberrations, the time autocorrelation of the transfer function should be a real function. The phase $\theta_{f_g}$ is given by integrating the phase differences from the origin to the point.
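This accumulation of phase differences can be sketched in one dimension. The following noiseless example (sizes and names are ours) recovers the Fourier phase, modulo $2\pi$, by a single-path cumulative sum of the differences in (6.32):

```python
import numpy as np

# Minimal 1-D, noiseless sketch of the KT idea: the Fourier phase is
# recovered by accumulating phase differences between neighbouring
# frequencies along a single path.
rng = np.random.default_rng(3)
f = rng.random(32)                         # toy 1-D object
F = np.fft.fft(f)
theta = np.angle(F)                        # true Fourier phase

dtheta = np.angle(F[1:] * np.conj(F[:-1]))   # phase differences, Eq. (6.32)
theta_est = np.concatenate(([theta[0]],
                            theta[0] + np.cumsum(dtheta)))
```

The recovered phase agrees with the true one up to multiples of $2\pi$, which is why the thesis averages several integration paths and worries about phase wrapping in the noisy case.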
The phase is calculated by averaging four different paths [43], to reduce the accumulation error inherent in a process involving the continued addition of a large number of noisy quantities. To avoid the $2\pi$ phase wrap (or phase distortion) [43][46] when multiple-path phases are averaged, we divide Eq. (6.31) by $\sqrt{\langle|G(\vec\omega_1,t_i)G^*(\vec\omega,t_i)|^2\rangle}$ instead of $|\langle G(\vec\omega_1,t_i)G^*(\vec\omega,t_i)\rangle|$.

6.4 Simulations and Results

In this section, we present the experimental results obtained by using our filter to restore the Fourier modulus and phase of the object from noisy data. The blurred images are obtained by computer simulation, in which Gaussian-distributed wavefront distortions [7][48] of the turbulent atmosphere are used, with the assumption that the most significant distortions introduced by the turbulence are phase distortions. The wavefront distortion due to turbulence is assumed to be isoplanatic [15] over the section concerned, with an aberration-free lens. Noise which is white Gaussian with $\sigma_{n_g} = 1.2$ is then added to the $64\times 64$ blurred images (see Figs. 6.1b–6.1c). The object chosen (see Fig. 6.1a) is an arbitrarily structured star-like image with $\sigma_{f_g} = 50$, where $\sigma_{f_g} = \frac{1}{M^2}\sum_{i=1}^{M}\sum_{j=1}^{M} f_g(i,j)$ and $M = 64$.

The Gaussian random phase function $\varphi(x,y,t_i)$, corresponding to an atmospheric-turbulence coherence cell size of $11\times 11$, is produced by passing an independent Gaussian function $\Psi(x,y,t_i)$ through a lowpass filter $h_w(x,y)$ [2], i.e.

\[
\varphi(x,y,t_i) = h_w(x,y)\otimes\Psi(x,y,t_i). \tag{6.33}
\]

Thus the autocorrelation function of $\varphi(x,y,t_i)$ is

\[
R_\varphi(x_\tau,y_\tau,t_i) = h_w(x,y)\otimes h_w^*(-x,-y)\otimes R_\Psi(x_\tau,y_\tau,t_i), \tag{6.34}
\]

where the autocorrelation function of $\Psi(x,y,t_i)$ is given by

\[
R_\Psi(x_\tau,y_\tau,t_i) = E[\Psi(x+x_\tau,\, y+y_\tau,\, t_i)\,\Psi(x,y,t_i)] = \sigma_\Psi^2\,\delta(x_\tau,y_\tau). \tag{6.35}
\]

The coherence cell size is then determined by the filter $h_w(x,y)$.
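The phase-screen construction of Eq. (6.33) can be sketched as follows; the Gaussian kernel width and the $2\pi$ RMS scaling used in the text's simulations are the only parameters, and all names are ours:

```python
import numpy as np

# Eq. (6.33): a correlated Gaussian phase screen produced by
# lowpass-filtering a white Gaussian field.
rng = np.random.default_rng(4)
N = 64
w = rng.standard_normal((N, N))            # white Gaussian field Psi

# Gaussian lowpass kernel h_w; its width sets the coherence cell size
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
hw = np.exp(-(X ** 2 + Y ** 2) / (2 * 3.0 ** 2))

# Circular convolution h_w (x) Psi via the FFT
phi = np.real(np.fft.ifft2(np.fft.fft2(w) *
                           np.fft.fft2(np.fft.ifftshift(hw))))
phi *= 2 * np.pi / phi.std()               # scale the RMS phase to 2*pi
```

Widening the kernel enlarges the coherence cells of the screen, mimicking weaker turbulence.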
The Gaussian phase $\varphi(x,y,t_i)$, whose RMS value is chosen equal to $2\pi$, is used to produce the incoherent optical transfer function

\[
H(\vec\omega,t_i) = \frac{\displaystyle\iint C(x',y')\,C^*(x'-x,\, y'-y)\,dx'\,dy'}{\displaystyle\iint |P(x,y)|^2\,dx\,dy}, \tag{6.36}
\]

where

\[
C(x,y) = P(x,y)\,e^{i\varphi(x,y)}, \tag{6.37}
\]

$P(x,y)$ is the pupil function of the lens, and $(x,y)$ is the pupil-plane coordinate corresponding to $(\omega_x,\omega_y)$. A typical point spread function is shown in Fig. 6.2a; a one-dimensional section of the impulse response is shown in Fig. 6.2b. Each point spread function was then recentered to remove large phase tilts [2].

The images restored by our filter from 10, 20, 40, 50, 60 and 100 noisy degraded images are shown in Figs. 6.3a–6.3f. For comparison, Figs. 6.3A–6.3F show the images restored by the inverse filter (6.27). The estimation RMS error

\[
\epsilon = \sqrt{\frac{1}{M^2}\sum_{i=1}^{M}\sum_{j=1}^{M}\big[\hat f_g(i,j) - f_g(i,j)\big]^2}
\]

is calculated for the different numbers of short-exposure images (Fig. 6.4).

The experimental results show that:

(1) The estimates based on our LS filter are better than those based on the speckle interferometry inverse filter, especially when the number of image observations is not very large.

(2) The estimation RMS error $\epsilon$ decreases as the number of short-exposure images increases.

(3) When the variance $\sigma_{n_g}^2$ of the additive noise increases, the number of images has to be increased dramatically to obtain satisfactory results. In our example, when $\sigma_{n_g} = 2.4$ (while $\sigma_{f_g}$ remains 50), more than 500 images are needed to obtain results comparable in RMS error and visual quality to those obtained using 100 images and $\sigma_{n_g} = 1.2$.
This is mainly due to the fact that the terms neglected in estimating the Fourier phase (6.32) need more observations before they diminish considerably.

Figure 6.1 (a) Object; (b), (c) two of the noisy, degraded, short-exposure images.

Figure 6.2 (a) Typical simulated PSF; (b) section of the PSF in (a).

Figure 6.3 Restored images from 10, 20, 40, 50, 60, and 100 observations with our filter [(a)-(f)] and with speckle interferometry [(A)-(F)].

Figure 6.4 Estimated RMS error corresponding to different numbers of short-exposure images with speckle interferometry (1) and the ad hoc filter (2).

6.5 Summary

We have shown that the restoration of the Fourier modulus of the object from additive-noise-corrupted speckle data by the LS filter criterion is feasible. The effect of the additive noise on the estimate of the Fourier modulus can be suppressed, and good results are obtained with far fewer samples than those required by applying the speckle interferometry inverse filtering method. Our algorithm utilizes time-averaged data; it has the added advantage (as do speckle interferometry and the Knox–Thompson method) that the computer memory and computations involved have the same order of magnitude as those of processing one single image.
Thus heavy computations and excessive memory burdens are not needed.

The restoration via our filter does not improve on the method by which the Fourier phase is recovered by the KT method. Other methods for obtaining better Fourier phase estimates are discussed in chapters 7 and 8. Furthermore, if the speckle data is recorded on photographic film or by photoelectric systems, film-grain noise and noise due to electron emission [49] would be present; these signal-dependent noise models should be studied.

Chapter 7

OBTAINING TIGHT OBJECT BOUNDS FOR PHASE RECOVERY VIA THE IFT ALGORITHM

7.1 The Importance of Obtaining Object Bounds

The object bounds define the boundaries or edges of the unknown object. The object-bound constraints are crucial for the IFT algorithm (or, more generally, the generalized projections method) to converge to a unique solution [13].

The iterative Fourier transform (IFT) method, its other versions [18][21][22] and, more generally, the generalized projections method [50] have been used to retrieve the Fourier phase. Successful reconstructions by these algorithms are found in the literature for both computer-simulated and real data [18][21][51][52]. The advantages of this phase-retrieval algorithm are its noise tolerance and high computational efficiency. However, when considerable noise is present in the autocorrelation functions, the generalized projection algorithm often cannot converge to a unique solution without tight object constraints. Even when considerable noise is not present, the algorithm still requires sufficiently strong object-domain constraints to ensure solution uniqueness and to achieve convergence to a solution within a reasonable number of iterations. Because the Fourier modulus is a non-convex constraint, the iterations converge only in a weak sense and often end at some local minimum.
The chances of obtaining a unique solution decrease as the noise level in the estimated autocorrelation function of the object increases [53][54].

In most cases the estimated autocorrelation function can only give a loose object-domain upper bound, namely half the diameter of the autocorrelation. This bound is not tight enough to ensure the convergence of the generalized projection iterative algorithm. Even for a geometrically simple object, tight object-support bounds cannot always be obtained, owing to the estimation error in the autocorrelation function. The algorithm usually stagnates, especially when the size of the object is larger [13]. For an arbitrary object it is usually difficult to obtain tight object bounds.

In this chapter, a new method for finding tight object-support bounds using the self cross correlation information is addressed. We discuss how the information contained in the self cross correlation gives the required tight object bounds. We exploit, through examples, the relations among the self cross spectrum phase, the Fourier phase of an object and the object support bounds, and from these relations we obtain tight object support bounds. In this chapter we study the case when all the stars or sub-objects in the object region have fairly sharp edges. The atmospheric turbulence is also assumed to be isoplanatic.

7.2 The Phasor-image of an Object and its Support Bounds

The object support bounds are defined as the boundaries of the non-negative object support regions. As is known, most of the high-value magnitudes of the Fourier spectrum of an image observation are concentrated in a very low-frequency region,
while the low-value magnitudes of the Fourier spectrum are distributed over a much broader high-frequency region. Since edge information about an object is mostly contained in the high-frequency region, the unit magnitude of the Fourier phasor greatly enhances the high-frequency components, which correspond to the edges of the object. Thus the inverse Fourier transform of the phasor e^{jθ} of the Fourier transform of any object gives the edges of the object. For convenience, we call the inverse Fourier transform of the phasor the phasor-image (Def. 4.7). Figs. 7.2a and 7.2b show the phasor-images of the two computer-simulated objects in Figs. 7.1a and 7.1b, respectively. From the phasor-images of Fig. 7.2 it can be seen that the edge information about the object is well preserved.

Figure 7.1 Two computer-simulated objects: (a) the first computer-simulated object; (b) the second computer-simulated object.

Figure 7.2 The phasor-images of the objects in Fig. 7.1: (a) the phasor-image of the first object; (b) the phasor-image of the second object.

7.3 Edge Preservation in the Phasor-image of the Self Cross Spectrum Phase

The question now is: does the self cross spectrum ⟨G0P0*⟩ still preserve the edge information about the object fg(x, y), like the phasor-image of the object?

To answer this question, let us begin with a simple example. In this example no photon noise is introduced into the simulated speckle images, for simplicity; thus G = G0 and P = P0. A computer-simulated single-star object f is shown in Fig. 7.3a. The phasor-image of this object is shown in Fig. 7.3b. It is clear that the edges in f are preserved. f is then convolved with the random turbulence-degraded short-exposure PSF h to obtain g.
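The phasor-image operation itself (inverse FFT of the unit-modulus Fourier phasor) can be sketched in a few lines; the function name is ours:

```python
import numpy as np

def phasor_image(obj):
    """Inverse Fourier transform of the phasor e^{j theta} of FT{obj}.

    Forcing all Fourier magnitudes to unity relatively enhances the
    high-frequency components, so the edges of the object dominate.
    """
    F = np.fft.fft2(obj)
    mag = np.abs(F)
    phasor = F / np.where(mag > 1e-12, mag, 1.0)   # guard empty bins
    return np.real(np.fft.ifft2(phasor))
```

For a point-like object the phasor-image is again a point, since its spectrum already has constant modulus.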
A typical such PSF h is shown in Fig. 7.3c. A truncated sub-image p is chosen from g as the brightest circular window whose diameter d is equal to the average diameter of the PSF's. The phasor-image of the self cross correlation ⟨p*g⟩, where g = h ⊗ f and where the average is taken over 300 images, is shown in Fig. 7.3d. It can be seen from Fig. 7.3d that the phasor-image of the self cross correlation ⟨p*g⟩ also preserves the edges of f! The question now is: why does the phasor-image of ⟨p*g⟩ preserve the profile of f?

Figure 7.3 The example of edge preservation in the phasor-image of the self cross correlation: (a) a test object; (b) the phasor-image of the test object; (c) a typical short-exposure PSF; (d) the phasor-image of the self cross correlation ⟨p*g⟩; (e) the phasor-image of the cross correlation ⟨h*p⟩.

Here we give a heuristic analysis of the relation between edges and the phasor-image of the self cross correlation. In chapter 8, the related theorem and its mathematical proof will be given.

The Fourier transform of the self cross correlation ⟨p*g⟩ is

⟨GP*⟩ = ⟨FHP*⟩ = F⟨HP*⟩.   (7.1)

Thus ⟨p*g⟩ can be viewed as the convolution of f and the cross correlation ⟨h*p⟩:

⟨g*p⟩ = ⟨h*p⟩ ⊗ f.   (7.2)

It is obvious that the object information contained in the self cross correlation ⟨p*g⟩ is solely dependent on the cross correlation ⟨h*p⟩.

The 3-dimensional plot of the image ⟨h*p⟩ is shown in Fig. 7.4a, and a one-dimensional cross section of Fig. 7.4a is shown in Fig. 7.4b. It is clear that the cross correlation ⟨h*p⟩ has a maximum point at the center, where h and p totally overlap.

Figure 7.4 The cross correlation ⟨h*p⟩: (a) the 3-D plot of the cross correlation ⟨h*p⟩; (b) a one-dimensional section of ⟨h*p⟩ in (a); points 1 and 2 are the two points on the circle c.

It is well known that the overlapping area between h and p decreases at the rate

2 (d/2)² [cos⁻¹(r/d) − (r/d) sin(cos⁻¹(r/d))]

(see Fig. 7.5), where r = sqrt(x² + y²) and d is the diameter of the area of p. As can be seen from Fig. 7.5, the overlapping area decreases (nearly) linearly.

Figure 7.5 The overlapping area of h and p versus the separation r.

Thus it is expected that the cross correlation ⟨h*p⟩ also decreases (nearly) linearly as r increases, owing to the spread-out speckles in the areas of h and p. The maximum at the center is a ridge, i.e. the slope there is discontinuous (see Fig. 7.4a). It is this ridge that causes the phasor-image of ⟨h*p⟩, shown in Fig. 7.3e, to have a very bright impulse at its center, corresponding to this discontinuous maximum. The 3-dimensional plot of the phasor-image of Fig. 7.3e is shown in Fig. 7.6.

Figure 7.6 The 3-D plot of the phasor-image of the cross correlation ⟨h*p⟩.

Notice that the circular edge of the object f appears in the phasor-image of the cross correlation ⟨h*p⟩ in Fig. 7.6 (or Fig. 7.3e). The magnitudes of this circular edge (Fig. 7.6) are much smaller than that of the impulse at the center. In many simulation examples we carried out, we found that the more severe an edge is with respect to the others, the brighter it appears in the phasor-image.

So far we have established that the phasor-image of ⟨h*p⟩ has an impulse, which is the maximum point in ⟨h*p⟩. One can easily show that the phasor-image of the convolution of two signals is the convolution of the phasor-images of these signals; thus

⟨g*p⟩θ = ⟨h*p⟩θ ⊗ fθ.   (7.3)

The strong impulse in Fig. 7.6, when convolved with the phasor-image of Fig. 7.3b, results in Fig. 7.3d. It is this strong impulse in the phasor-image ⟨h*p⟩θ that makes the phasor-image ⟨g*p⟩θ of the self cross correlation in Fig.
7.3d preserve the edges of the unknown object.

From the above we conclude that, to get the phasor-image of ⟨g*p⟩ to preserve the edges of f, we should obtain a strong impulse in the phasor-image of ⟨h*p⟩, or a maximum in the center of ⟨h*p⟩. We find that this is the case if the size of the area of p is chosen to be between half the area of the average diameter of the PSF's and the area of the average diameter of the PSF's (and preferably equal to the area of the PSF). This is true when the PSF's are simulated as described in section 4 (7.17). In the absence of any knowledge about the size of the PSF, one repeats the experiment with varying sizes of p until the desired edges of f are obvious.

From the above it is clear that the self cross spectrum ⟨GP*⟩ retains the information of the edges of the object provided that the correlation ⟨h*p⟩ has a maximum that is a ridge (not smooth). The presence of this maximum can be ensured if we choose the truncated sub-image p as a bright region (preferably the brightest) of g, with its size equal to the size of the PSF or anywhere between that and half of the PSF's size.

In this example, the test object is nearly circular. In fact, the phasor-image of the self cross spectrum ⟨GP*⟩ for an arbitrary object still preserves the edges (or object support bounds) of the object. This property of the self cross spectrum ⟨GP*⟩ makes it very useful for image reconstruction by generalized projections in astronomical speckle imaging.

7.4 Algorithmic Procedure

For quite faint objects in the visible-light region (the average number of photons per frame is of the order of hundreds), correlations are calculated in the image plane using the photon-address algorithm [7].
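As a numerical check of the disc-overlap decay invoked in the argument of section 7.3, the overlap formula can be evaluated directly (function name ours):

```python
import numpy as np

def disc_overlap_area(r, d):
    """Overlap area of two discs of diameter d whose centres are r apart:
    A(r) = 2 (d/2)^2 [ arccos(r/d) - (r/d) sin(arccos(r/d)) ],
    which falls off almost linearly from pi d^2/4 at r = 0 to 0 at r = d,
    producing the ridge of <h*p> at the origin.
    """
    u = np.clip(np.asarray(r, dtype=float) / d, 0.0, 1.0)
    return 2.0 * (d / 2.0) ** 2 * (np.arccos(u) - u * np.sin(np.arccos(u)))
```

At r = 0 the expression reduces to the full disc area π d²/4, and it vanishes at r = d.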
Using (5.11) together with (5.1) and (5.2), the estimate ⟨G0P̂0*⟩ of the self cross spectrum of G0 and P0 is obtained from

⟨G0P̂0*⟩ = ⟨G(ωx, ωy) P*(ωx, ωy) − Kp⟩
        = FT{ ⟨g(x, y)*p(x, y) − Kp δ(x, y)⟩ }
        = FT{ ⟨ Σ_{k=1}^{Kg} Σ_{l=1, l≠k}^{Kp} δ(xk − xl − x, yk − yl − y) ⟩ },   (7.4)
The object support R is then obtained by filling the areainside the edges with the values l's.(7.5)72Chapter 7 OBTAINING TIGHT OBJECT BOUNDS FOR PHASE RECOVERY VIA THE IF7' ALGORITHMTo estimate the Fourier modulus of the object we use the expression of the photon-addressoperation for SIRgg(wx,Wy)^(1H(Wx,Wy)12)IFg(wx,W7j)12FT{((g(x,y)A-g(x,y) — Kg8(x,y)))}Kg Kg=FT( E E 8(xk - x, - x, yk — — y))}.k=1 1=1,k01An optimal filter 1101 is used to obtain the estimate of the Fourier modulus of the object(w^gg kx Loy)1F91 ^R= (7.9)(IH(wx,wy)12) c(kg + K,,)A(wx,wg),where Kg and Rh are the mean numbers of photon events per frame for the object and for anindependent isolated unresolved reference source. A( ,w) is the ideal diffraction-limitedtelescope transfer function. In "0] the coefficient c in (7.9) is equal to 1. Here, the value of cis adjusted interactively so that the noise can be properly suppressed in the estimated Fouriermodulus. This is necessary since ( IH(wx , wy)12) is obtained from an isolated unresolvedpoint source by independent measurement. Thus the atmosphere conditions, under whichobject observations and point source observations are independently made, may not matchwell.So far we have obtained the estimates of the Fourier modulus (7.9) of the object and theobject support bounds (7.7) from the self cross spectrum. The problem now is to reconstructthe object itself. We use iterative methods based on the generalized projections method[50].The problem now is as follows: assume an "object" fg(x, y) with the Fourier transformFg(CO) = 1Fg((.7))1e--79 fg(c,1) (7.10)find Fg(cil)given estimates of the Fourier modulus Eg(Wx, WO and the object-support bounds.We also have available the a priori information that fg(x,y) is nonnegative.(7.8)73Chapter 7 OBTAINING TIGHT OBJECT BOUNDS FOR PHASE RECOVERY VIA THE IFT ALGORITHMWe illustrate the algorithm of reconstruction by generalized projectionsr501. 
The mathematical forms of these constraint sets are given by

A1 = { fg(x, y) : fg(x, y) ≥ 0 for (x, y) ∈ S; fg(x, y) = 0 for (x, y) ∈ R − S }   (7.11)

and

A2 = { fg(x, y) ↔ |Fg(ω)| e^{jθfg(ω)} : |Fg(ω)| = Êg(ωx, ωy) },   (7.12)

where S is the object support obtained from (7.7). The projections P1 and P2 onto A1 and A2 are given by

P1 fg^(i)(x, y) = { fg^(i)(x, y), if fg^(i)(x, y) ≥ 0 and (x, y) ∈ S; 0, otherwise }   (7.13)

and

P2 fg^(i)(x, y) = IFT{ Êg(ω) e^{jθfg^(i)(ω)} }.   (7.14)

The general reconstruction algorithm is given by

fg^(i+1)(x, y) = Q1 Q2 fg^(i)(x, y), i = 0, 1, 2, ...,   (7.15)

where

Q1 = 1 + λ1(P1 − 1), Q2 = 1 + λ2(P2 − 1).   (7.16)

λ1 and λ2 are the relaxation parameters, which can be optimized [50] so as to maximize the rate of convergence of the iterations.

7.5 Results

Results of image reconstructions from computer-simulated data are presented first. There are two computer-simulated objects, each contained in 128x128 pixels, as shown in Figs. 7.1a and 7.1b, respectively. The phasor-images (the inverse Fourier transforms of the Fourier phasors) of these two objects are shown in Figs. 7.2a and 7.2b, respectively.

In our simulations, the atmospheric turbulence is simulated by a random complex array with uniform distribution. The array is multiplied by a filter function chosen to represent the Kolmogorov turbulence power spectral density (3.4) and then Fourier transformed to real space, resulting in the phase screen φ(x, y, t). Note that the Kolmogorov power spectral density (3.4) under-represents its low-frequency components [55] (i.e. low-order wavefront distortions, e.g. tilt). Another phase screen, filtered by a narrow-band low-pass filter, is therefore added to the former phase screen. The structure constant Cn varies according to the atmospheric coherence diameter d0. The value of 0.033 Cn² in (3.4) is about 0.1517 for atmospheric coherence diameter d0 = 0.15 m. The diameter of the telescope is D = 2 m.
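The projection iteration (7.13)-(7.16) reduces, for λ1 = λ2 = 1, to plain alternating projections; a minimal sketch (function name ours, not the thesis code):

```python
import numpy as np

def reconstruct(mod_est, support, n_iter=100, seed=0):
    """Alternate between the Fourier-modulus set A2 and the
    support/non-negativity set A1 (relaxation parameters set to 1).

    mod_est : estimated Fourier modulus E_g (same shape as the image)
    support : boolean object-support mask S from the thresholded bounds
    """
    rng = np.random.default_rng(seed)
    f = rng.random(mod_est.shape) * support          # random start inside S
    for _ in range(n_iter):
        F = np.fft.fft2(f)
        F = mod_est * np.exp(1j * np.angle(F))       # P2: impose |F_g| (7.14)
        f = np.real(np.fft.ifft2(F))
        f = np.where(support & (f > 0), f, 0.0)      # P1: support, f >= 0 (7.13)
    return f
```

The final projection guarantees that the returned estimate is non-negative and vanishes outside the support mask, whatever the starting point.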
The point spread function at time t is calculated from the autocorrelation of the exponential of the phase screen:

h(x, y, t) = α [ e^{jφ(x, y, t)} * e^{jφ(x, y, t)} ],   (7.17)

where α is a scale factor such that ∫∫ h(x, y, t) dx dy = 1. The convolution of the PSF with the object produces a turbulence-degraded image. The intensity of each pixel of an image is then replaced by a Poisson-distributed random number. The simulations were done at an average of 300 photons per frame. The self cross spectrum is formed by averaging the quantity ⟨G(ωx, ωy, t) P*(ωx, ωy) − Kp⟩ using 4000 image frames.

The long-exposure images of the two objects in Figs. 7.1a and 7.1b are shown in Fig. 7.7a and Fig. 7.8a, respectively. The reference sources are obtained by truncating the respective images of the objects. An example of a long-exposure truncated sub-image is shown in Fig. 7.7b for Fig. 7.7a and in Fig. 7.8b for Fig. 7.8a. The truncated sub-images are chosen as the brightest regions with area approximately equal to that of the PSF. Five sub-images are truncated from each short-exposure image frame. The locations of the centers of these five sub-images are mutually separated by two samples on the grid, i.e. their centers are at (a, b), (a, b − 2), (a, b + 2), (a − 2, b) and (a + 2, b). The averages of the 5 phasor-images of the self cross spectra are shown in Figs. 7.9a and 7.9b, respectively.

From Figs. 7.9a and 7.9b it can be seen that the phasor-image of the self cross spectrum gives the edges of the object. We notice that there is a cluster of five bright points near the center of each image. These points appear because the truncated sub-images are moved to the origin before the self cross correlation is performed.

Figs. 7.9a and 7.9b are scaled and thresholded; the results are shown in Figs. 7.10a and 7.10b. From Fig. 7.10 we see that most edges of the object are clearly shown.
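The phase-screen part of this simulation can be sketched as follows. Grid size and pupil radius are illustrative assumptions, the RMS phase is normalized to 2π as in section 6, and the PSF is formed as |FT(pupil · e^{jφ})|², the Fourier-transform equivalent of the autocorrelation in (7.17):

```python
import numpy as np

def simulate_psf(n=64, seed=0):
    """Kolmogorov-like phase screen -> short-exposure PSF, normalized so
    that the PSF integrates to 1, as required of (7.17)."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = f[0, 1]                       # avoid the k = 0 singularity
    filt = f ** (-11.0 / 6.0)               # sqrt of the k^(-11/3) spectrum
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    phi = np.real(np.fft.ifft2(noise * filt))
    phi *= 2 * np.pi / phi.std()            # RMS phase set to 2*pi
    yy, xx = np.indices((n, n)) - n // 2
    pupil = (np.hypot(xx, yy) < n // 4).astype(float)
    psf = np.abs(np.fft.fft2(pupil * np.exp(1j * phi))) ** 2
    return psf / psf.sum()
```

Convolving such PSFs with an object and Poissonizing the pixel intensities reproduces the kind of short-exposure frames used in these experiments.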
The faint parts of the edges are locally re-scaled and thresholded. By using the estimated Fourier moduli and the object support bounds, the objects are reconstructed by the generalized projections method. The reconstructed objects are shown in Figs. 7.11a and 7.11b, from which we see that the original objects are well retrieved. The fidelity of the phase recovery was measured by the root mean square (RMS) of the phase difference between the recovered phase and the original,

Δθ = sqrt( (1/M) Σ_{ωx, ωy} [θr(ωx, ωy) − θ0(ωx, ωy)]² ),

where θ is the phase. The RMS differences between the recovered phases and the originals are 0.6 rad and 0.7 rad, respectively. The corresponding "Strehl" values are e^{−Δθ²} = 0.7 and e^{−Δθ²} = 0.6, respectively.

Finally, we use the self cross correlation method to reconstruct the triple star ADS 11344 from real data. The photon-list data (size 256x256) were measured by the Steward Observatory's 2.3 m telescope with a Stanford MAMA detector. A total of 12641 frames were used in the reconstruction. The average number of photons per frame K̄g is 180, and the wavelength λ is 550 nm. The long-exposure image is shown in Fig. 7.12a. By implementing the photon-address algorithm (7.4) of the self cross correlation, the phasor-image of the self cross correlation is obtained; it is shown in Fig. 7.12b. The dark rings that surround each star are the diffraction-limited rings, which indicate that the recovered Fourier phase has diffraction-limited resolution. The distance between the two closest stars is about 0.3 arcsec. The faintest star (upper right) in Fig. 7.12b seems to show ambiguous support bounds. However, by scaling the phasor-image of Fig. 7.12b, the support bounds of the faintest star are shown quite clearly (see Fig. 7.12c). For further examination, the cropped region near the faintest star of Fig. 7.12b is shown in Fig. 7.12d.
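The photon-address accumulation (7.4)/(7.8) used above can be sketched as a brute-force pair histogram (function name ours; a real photon-list pipeline streams the events rather than running O(K²) nested loops):

```python
import numpy as np

def photon_address_corr(photons_a, photons_b, size):
    """Accumulate delta(x_k - x_l - x, y_k - y_l - y) over photon pairs,
    skipping k = l, as in (7.4)/(7.8).

    `photons_a`, `photons_b`: lists of (x, y) integer photon addresses
    from one frame, indexed consistently so that k = l marks the same
    photon. Returns a (2*size+1) x (2*size+1) histogram centred at 0.
    """
    acc = np.zeros((2 * size + 1, 2 * size + 1))
    for k, (xk, yk) in enumerate(photons_a):
        for l, (xl, yl) in enumerate(photons_b):
            if k == l:
                continue  # omit the self-pair -> photon-bias-free estimate
            acc[xk - xl + size, yk - yl + size] += 1
    return acc
```

Skipping the k = l pairs is exactly what removes the Kp δ(x, y) photon-noise bias term from the estimate.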
In this way, the edges of the support bounds of the triple star are easily identified and obtained. The image is then reconstructed by the IFT method using the object support bounds obtained above. The reconstructed image is the well-resolved triple star shown in Fig. 7.12e.

7.6 Summary

We have studied the iterative reconstruction algorithm using the bounds given by the self cross correlation. The self cross correlation is the correlation between an image and a truncated sub-image of it. The truncated sub-image is any bright part of the image whose area is chosen to be about the average area of the PSF's. The results of our investigations illustrate the effectiveness of the algorithm. They show that the self cross spectrum preserves the bounds of the object. These bounds are obtained from the inverse Fourier transform of the phasor of the self cross spectrum. The computation time needed for the self cross correlation reconstruction is only a fraction of that needed for the TC reconstruction. The edges of very faint stars may not be clearly shown in the phasor-image from the self cross spectrum because of the noise. To suppress the noise we use the average phasor-image obtained from more than one truncated sub-image in each short-exposure image.

Figure 7.7 The long-exposure images of the first object: (a) the long-exposure corresponding to the first object; (b) a sub-image truncated from Fig. 7.7a.

Figure 7.8 The long-exposure images of the second object: (a) the long-exposure corresponding to the second object; (b) a sub-image truncated from Fig. 7.8a.

Figure 7.9 The phasor-images of the self cross spectra: (a) the average of 5 phasor-images of the self cross spectra corresponding to the first object; (b) the same for the second object.

Figure 7.10 The enhanced phasor-images of the self cross spectra: (a) the scaled and thresholded average phasor-image of Fig. 7.9a; (b) the same for Fig. 7.9b.

Figure 7.11 The reconstructed images: (a) the reconstructed image corresponding to the first object; (b) the reconstructed image corresponding to the second object.

Figure 7.12 The example of image reconstruction using real astronomical speckle data: (a) the long exposure of the triple star ADS 11344; (b) the phasor-image of the self cross correlation ⟨g*p − Kp δ(x, y)⟩ of the triple star ADS 11344; (c) the scaled phasor-image of (b); (d) the magnified and rescaled phasor-image of the faintest star (upper right) in (b); (e) the reconstructed image of the triple star ADS 11344 by the IFT method.

Chapter 8

A NON-ITERATIVE ALGORITHM FOR PHASE RECOVERY USING THE SELF CROSS SPECTRUM

8.1 Introduction

In chapter 7 we described how to obtain tight object constraints so that the Fourier phase of the object can be retrieved by the generalized projections method. These constraints are formed from the object's edges, which are obtained from the phasor-image of the self cross spectrum. Unfortunately, we often encounter the case when some stars in the object region are too faint and consequently their edges are not well defined. It is then difficult to determine tight edge bounds for all of the stars in the object region. Furthermore, some objects do not always have well-defined edges. In such cases, the Fourier phase can be recovered by the non-iterative method presented in this chapter. This method obtains the phase from the self cross correlation instead of first finding all the edges of the object. In chapter 9 we will consider the phase-retrieval case when all the stars in the object region show no obvious edges. In this chapter, we assume that at least one bright star in the object region shows fairly defined edges.

This chapter is organized as follows: in sections 2 and 3, we derive a non-iterative phase-retrieval algorithm using the self cross spectrum ⟨GP*⟩. The relation between the phase of the object and the phase of the self cross spectrum is established by a theorem. In section 4, the self cross correlation reconstruction is implemented using the photon-address algorithm [7].
In section 5, the results are presented and discussed using computer-simulated data and real data.

8.2 Phase Preservation in the Self Cross Spectrum

In Chapter 5 we described how the self cross spectrum (5.11) is formed. Our purpose is to retrieve the Fourier phase of Fg from (5.11). This is quite difficult to achieve, since in (5.11) the cross spectrum ⟨HP0*⟩ (or the cross correlation ⟨h*p0⟩) is unknown a priori. In this section we analyze and prove the characteristics of the cross correlation ⟨h*p0⟩ through a one-dimensional example. Then, in section 3, we describe how the Fourier phasor of ⟨h*p0⟩ is estimated.

Theorem. Let f(x), |x| ≤ a, be the function truncated from any real positive function fg(x), with the non-zero region of f(x) over |x| ≤ a. Let the ith observation

g0^i(x) = h^i(x) ⊗ f(x), |x| ≤ (a + b),   (8.1)

correspond to the ith sample h^i(x), |x| ≤ b, of the impulse response, where h^i(x) is finite over |x| ≤ b. Let p0^i(x) be the ith truncated image

p0^i(x) = { g0^i(x), |x| ≤ b; 0, otherwise }.   (8.2)

Then the first-order derivative ⟨h(x)*p0(x)⟩′ does not exist at x = 0; that is, the left derivative ⟨h(x)*p0(x)⟩′|x=0− is not equal to the right derivative ⟨h(x)*p0(x)⟩′|x=0+. In other words, ⟨h(x)*p0(x)⟩ has a ridge at x = 0.

Proof. We rewrite the truncated sub-function as

p0(x) = g0(x) − Δg0(x),   (8.3)

where Δg0(x) is identically zero over |x| ≤ b and equal to g0(x) otherwise. Substituting this into the derivative ⟨h(x)*p0(x)⟩′, we have

⟨h(x)*p0(x)⟩′ = ⟨h(x)*g0(x)⟩′ − ⟨h(x)*Δg0(x)⟩′.   (8.4)

It is obvious that the first term on the RHS of (8.4) is bounded,

|⟨h(x)*g0(x)⟩′| < ∞, |x| ≤ a,   (8.5)

because jω(⟨|H|²⟩F) ≤ jωF and because |H(ω)| ≤ H(0) = 1.
Since the derivative at x = 0 of the first term on the RHS of (8.4) exists, the existence of the derivative at x = 0 on the LHS of (8.4) is decided by ⟨h(x)*Δg0(x)⟩′, the second term on the RHS. The cross correlation ⟨h(x)*Δg0(x)⟩ has its minimum value at x = 0, since h(x) and Δg0(x) have the least overlapped area when x = 0. For the one-dimensional case, the region of overlap between h(x) and Δg0(x) increases (linearly) as |x| increases from |x| = 0. For the two-dimensional case, it can also be proven that, as r (= sqrt(x² + y²)) increases from r = 0, the area of overlap between h(x, y) and Δg0(x, y) increases (nearly linearly), in proportion to the decrease 2(d/2)²[cos⁻¹(r/d) − (r/d) sin(cos⁻¹(r/d))] of the overlap between h and p0. Since the area of overlap increases as |x| increases from |x| = 0, the left derivative ⟨h(x)*Δg0(x)⟩′|x=0− and the right derivative ⟨h(x)*Δg0(x)⟩′|x=0+ have opposite signs. This means that the derivative ⟨h(x)*Δg0(x)⟩′|x=0 does not exist. Therefore the derivative ⟨h(x)*p0(x)⟩′|x=0 does not exist, and ⟨h(x)*p0(x)⟩ has a ridge (a kind of edge) at x = 0. Q.E.D.

An example of this theorem is shown in Fig. 8.2. The non-zero region of f(x) is defined over |x| ≤ a; ⟨h(x)*Δg0(x)⟩ is shown to have a ridge at x = 0, and ⟨h(x)*p0(x)⟩ also has a ridge at x = 0.

Let the Fourier transform of z(x) be Z = |Z| e^{jθz}, and let the inverse Fourier transform of e^{jθz} (the phasor of Z) be zθ(x); zθ(x) is called the phasor-image of z(x). The identically unit magnitude of e^{jθz} means that the high-frequency components of the image z(x) are relatively enhanced in its phasor-image zθ(x). Since edges in a signal contain very high frequencies, the edges of the image are preserved in its phasor-image (refer to Fig. 8.4). ⟨h(x)*p0(x)⟩θ, the phasor-image of ⟨h(x)*p0(x)⟩, is shown in Fig. 8.2.
Notice that the ridge of ⟨h(x)*p0(x)⟩ at x = 0 is greatly enhanced in its phasor-image ⟨h(x)*p0(x)⟩θ, where it appears as a very bright impulse at x = 0. This very bright impulse will be of great relevance in the discussion below, as it will be shown to preserve diffraction-limited information about the object fg(x).

The self cross correlation ⟨g0(x)*p0(x)⟩ can be rewritten as

⟨g0(x)*p0(x)⟩ = ⟨h(x)*p0(x)⟩ ⊗ fg(x).   (8.6)

We notice that the convolution ⊗ in (8.6) carries over to the phasor-images, i.e.

⟨g0(x)*p0(x)⟩θ = ⟨h(x)*p0(x)⟩θ ⊗ fgθ(x).   (8.7)

Now, since the phasor-image ⟨h(x)*p0(x)⟩θ has a strong impulse at x = 0, the convolution of this phasor-image with fgθ(x) preserves fgθ(x), the phasor-image of the object fg(x). Thus ⟨g0(x)*p0(x)⟩θ preserves the phasor of fg(x).

We also notice from Fig. 8.2 that the phasor-image ⟨h(x)*p0(x)⟩θ has secondary strong impulses corresponding to the edges e1, e2, e1′ and e2′ at |x| = a. Actually, the presence of edges in any truncated object function f(x) will cause the phasor-image ⟨h(x)*p0(x)⟩θ to have strong impulses. These secondary impulses at |x| = a, when convolved with fgθ(x), cause ghosts in the self cross correlation phasor-image ⟨g0(x)*p0(x)⟩θ (see Fig. 8.7). If the truncated object f(x) does not contain any sharp edges, these ghosts will not appear.

8.3 Retrieving the Phase from the Self Cross Spectrum

So far we have established that ⟨h(x, y)*p0(x, y)⟩ contains a ridge. This ridge causes the phasor-image ⟨h(x, y)*p0(x, y)⟩θ to have an impulse, and from (8.7) this impulse causes ⟨g0(x, y)*p0(x, y)⟩θ, the phasor-image of the self cross correlation ⟨g0(x, y)*p0(x, y)⟩, to preserve the phase information of the object fg(x, y). In the case that the truncated object f(x, y) is smooth, i.e.
all the stars in f(x, y) have soft edges, the impulse in ⟨h(x, y)*p0(x, y)⟩θ dominates, i.e. it is very strong relative to the other information in this phasor-image. In this case ⟨h(x, y)*p0(x, y)⟩θ can be approximated as an impulse function, and the phasor-image ⟨g0(x, y)*p0(x, y)⟩θ may be taken as an approximate estimate of fgθ(x, y), the phasor-image of fg(x, y). This simpler case is discussed in Chapter 9.

The case when f(x, y) contains stars or sub-objects with fairly sharp edges is more challenging. These edges cause secondary impulses (such as e1, e2, e1′ and e2′ in Fig. 8.2) in ⟨h(x, y)*p0(x, y)⟩θ, and from (8.7) these secondary impulses cause ghosts in ⟨g0(x, y)*p0(x, y)⟩θ. In the remaining part of this chapter we address the case when all the stars in the truncated object f(x, y) (containing the part of fg(x, y) which contributes to p(x, y)) have fairly sharp edges. As mentioned earlier, this algorithm applies whether or not the stars and sub-objects outside f(x, y) have edges. However, when all the stars and sub-objects in fg(x, y) have edges, the algorithm in [56] is faster and gives better results.

In this section we describe how the estimate of the cross correlation ⟨h(x, y)*p0(x, y)⟩ is obtained, and then how the estimate of the Fourier phasor of the object fg(x, y) is obtained, given the estimate of the self cross correlation ⟨g0(x, y)*p0(x, y)⟩ (or the self cross spectrum ⟨G0P0*⟩ (5.11)).

From (8.6) it can easily be shown that the phasor of fg(x, y) is

e^{jθfg} = e^{jθ⟨g0*p0⟩} · e^{−jθ⟨h*p0⟩},   (8.8)

where e^{jθ⟨g0*p0⟩}, the phasor of ⟨g0*p0⟩, is obtained from (5.11) by ignoring the noise ⟨N0⟩. Now, in order to find an estimate of θfg from (8.8), an estimate of e^{jθ⟨h*p0⟩}, the phasor of ⟨h*p0⟩, must first be obtained. For this purpose we need to obtain f̂(x, y), the estimate of the truncated object f(x, y), from the ghostly phasor-image ⟨g(x, y)*p(x, y) − Kp δ(x, y)⟩θ.
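Once an estimate of the phasor of ⟨h*p0⟩ is available, (8.8) amounts to a frequency-by-frequency phase subtraction; a minimal sketch (function name ours):

```python
import numpy as np

def object_phasor(self_cross_spec, h_cross_spec):
    """Eq. (8.8): e^{j theta_fg} = e^{j theta_<g0*p0>} e^{-j theta_<h*p0>}.

    Each output frequency depends only on the same input frequency, so
    phase noise at one frequency does not propagate to the others.
    """
    return np.exp(1j * (np.angle(self_cross_spec) - np.angle(h_cross_spec)))
```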
For this purpose we need to obtain f̂(x, y), the estimate of the truncated object f(x, y), from the ghostly phasor-image (g(x, y)*p(x, y) − Kp δ(x, y))_θ. By using f̂(x, y) and a series of independently measured PSFs h_l(x, y), the estimates of the cross correlation (h(x, y)*p0(x, y)) and of e^{jθ(h*p0)} are then obtained. From (8.8) we notice that the phase θfg at any frequency (ωx, ωy) is decided by the phases θ(g0*p0) and θ(h*p0) at the same frequency (ωx, ωy). Thus, the noise in the estimated phase at one frequency will not propagate to other frequencies.

To find the estimate f̂(x, y), we first extract the edges of f(x, y). These edges are easily extracted from (g(x, y)*p(x, y) − Kp δ(x, y))_θ since they are brighter than those of the ghosts. If f(x, y) contains only one star or object, we fill the area inside the star with an arbitrary constant luminous intensity value Ca (as shown in Fig. 8.8a). The resulting image f̂(x, y) in this case is a binary function whose luminous intensity is equal to Ca inside the star and is zero outside the star. In reality f(x, y) is not a binary function, but as long as f(x, y) is smooth in the regions outside and inside its sharp edges, the difference between the Fourier phasor of f(x, y) and that of f̂(x, y) can be ignored. This indicates that f(x, y) is basically determined by its edges, which is demonstrated by an example. In this example, five different objects A, B, E, F, G with the same edges are shown in Fig. 8.1 left. The corresponding phasor-images (Fig. 8.1 right) show little difference amongst them.

Figure 8.1 The relation between objects and their phasor-images

In this case the value of the constant Ca is irrelevant, since we are interested in estimating the Fourier phasor. If f(x, y) contains more than one star or object (Fig.
8.8b), the area within each object is filled with a different constant intensity. For example, if f(x, y) contains two stars then the value of each pixel of f̂(x, y) will be zero, C1 or C2. The constant C1 (or C2) is taken to be equal to the average intensity of the edges of its corresponding object. This is because brighter stars or objects in fg(x, y) appear with brighter edges in (g(x, y)*p(x, y) − Kp δ(x, y))_θ. We assume this relationship is linear. Actually this relationship is not linear and needs a dedicated study.

Having obtained f̂(x, y), we now use another series of measured PSFs h_l(x, y) to form the series

ĝ_l(x, y) = h_l(x, y) ⊗ f̂(x, y).   (8.9)

In real astronomical imaging, a series of PSFs h_l(x, y) is obtained from independent measurements of any isolated point source. From (8.9) we form the truncated series p̃0(x, y). We then form the cross correlation (h_l(x, y)*p̃0(x, y)). It is now easy to understand that the cross correlation (h_l(x, y)*p̃0(x, y)) is an estimate of the cross correlation (h(x, y)*p0(x, y)).

The use of the independent measurements of an isolated point source may cause errors in the estimate (h_l(x, y)*p̃0(x, y)), due to the inconsistency of the turbulence conditions when the observations of the images and of the isolated point source are independently made. However, as far as the Fourier phase of (h(x, y)*p0(x, y)) is concerned, this inconsistency can reasonably be ignored. The Fourier phase of (h(x, y)*p0(x, y)) is basically decided by f(x, y), since the statistics of the equivalent phase of the atmospheric turbulence are circularly symmetric (isoplanatic).

Ignoring the irrelevant constant C, the phasor-image (h_l(x, y)*p̃0(x, y))_θ is shown in Fig. 8.9.
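The irrelevance of the fill constant Ca can be verified directly: scaling an object by a constant leaves its Fourier phase, and hence its phasor-image, unchanged, while the phasor-image itself concentrates energy at impulsive features. The sketch below is illustrative only (the disk and impulse objects are arbitrary choices, not the thesis test objects):

```python
import numpy as np

def phasor_image(a):
    """Phasor-image a_theta: inverse FT of the spectrum normalized to unit modulus."""
    A = np.fft.fft2(a)
    m = np.abs(A)
    m[m == 0] = 1.0              # guard against exact spectral zeros
    return np.real(np.fft.ifft2(A / m))

n = 64
yy, xx = np.mgrid[:n, :n]
disk = (np.hypot(xx - 30.0, yy - 28.0) < 7).astype(float)

# the Fourier phase is unchanged by the fill level Ca (here Ca = 2 vs Ca = 5)
A1 = np.fft.fft2(2.0 * disk)
A2 = np.fft.fft2(5.0 * disk)
ok = np.abs(A1) > 1e-6 * np.abs(A1).max()
diff = np.abs(np.exp(1j * (np.angle(A1) - np.angle(A2))) - 1.0)[ok].max()
print(diff)                      # essentially zero: Ca does not affect the phasor

# a two-"star" toy object: the phasor-image peaks at the brighter impulse
f2 = np.zeros((n, n)); f2[20, 20] = 5.0; f2[40, 44] = 2.0
pt = phasor_image(f2)
print(np.unravel_index(np.argmax(pt), pt.shape))
```

The second check mirrors the role of the dominating impulse in (h(x)*p0(x))_θ: the unit-modulus spectrum, transformed back to the image plane, is dominated by the strongest point-like component.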
The Fourier phasor estimate e^{jθ̂fg} of the unknown object is finally obtained from (8.8) by substituting the phasor of (h_l(x, y)*p̃0(x, y)) for the unknown e^{jθ(h*p0)}.

In summary, the procedure to obtain the Fourier phasor of fg(x, y) for the model (5.11) is as follows:

1. Form the self cross correlation (g(x, y)*p(x, y) − Kp δ(x, y)) and take it as the estimate of the self cross correlation (g0(x, y)*p0(x, y)) (refer to Fig. 8.7).
2. From (g0(x, y)*p0(x, y))_θ, extract the edges contained in the truncated object f(x, y).
3. Fill the area within the extracted edges of each star or object in f(x, y) with a constant intensity equal to the average of the intensities of the edges of that star or object. This forms f̂(x, y) (refer to Fig. 8.8).
4. Using a series of independent observations of the PSFs, h_l(x, y), form the series h_l(x, y) ⊗ f̂(x, y). Truncate the latter to extract the series p̃0(x, y).
5. Form (h_l(x, y)*p̃0(x, y)) and obtain its phasor e^{jθ(h_l*p̃0)}, and use the latter in (8.8) to obtain e^{jθ̂fg}.

The process of extracting edges needs careful image enhancement and thresholding, and may be subjective. More objective edge-extraction techniques are to be sought.

In the following section we apply the above procedure to obtain the phase of Fg from (5.11). We also show how to obtain the amplitude of Fg.

8.4 Reconstruction Implementation using Self Cross Correlation

As we are mainly interested in the reconstruction of quite faint objects in the visible light region, the photon-address algorithm [7] is used to calculate the self cross correlation between g(x, y) and p(x, y).
Referring to (5.11),

(GP* − Kp) = (HP0*) Fg + (Ngp).   (8.10a)

We note that the LHS of (8.10a) is

(GP* − Kp) = FT{(g(x, y)*p(x, y) − Kp δ(x, y))}
           = FT{( Σ_{k=1}^{Kg} Σ_{l=1, l≠k}^{Kp} δ(x_k − x_l − x, y_k − y_l − y) )},   (8.10b)

where the photon-noise bias term Kp is eliminated by omitting all terms for which k = l, to yield a photon-noise bias-free (unbiased) estimate of the self cross correlation.

To obtain the phasor estimate of Fg in (8.10a) we ignore the noise term (Ngp). Steps 1 to 5 of the above section are then followed.

Having obtained the phasor estimate θ̂fg of fg(x, y), we now obtain the amplitude estimate. Substituting G for P in (8.10a) and using the expression of the photon-address operation,

(|G|² − Kg) = FT{(g(x, y)*g(x, y) − Kg δ(x, y))}
            = FT{( Σ_{k=1}^{Kg} Σ_{l=1, l≠k}^{Kg} δ(x_k − x_l − x, y_k − y_l − y) )}
            = (|H|²) |Fg|².   (8.11)

An optimal filter [10] is used to obtain the estimate of the Fourier modulus of the object:

|F̂g|² = ((|G|²) − K̄g) / ((|H|²) + K̄g + K̄h) · A(ωx, ωy),   (8.12)

where K̄g and K̄h are the mean numbers of photon events per frame for the object and for an independent isolated unresolved reference source, and A(ωx, ωy) is the ideal diffraction-limited telescope transfer function.

8.5 Results

In this section we present image reconstructions using computer-simulated data and real data via our self cross correlation algorithm. Two test objects (object 1 and object 2), contained in 128×128 pixels, are shown in Figs. 8.3a and 8.3b. We want to recover the phasor e^{jθfg} of each of these two figures. We represent the phasor e^{jθfg} by (its inverse Fourier transform) the phasor-image fgθ(x, y) for viewing convenience. The two phasor-images corresponding to Figs. 8.3a and 8.3b are shown in Figs. 8.4a and 8.4b.

The simulation of atmospheric turbulence is described in chapter 7. The simulations were done at an average of 386 photons per frame. The self cross spectrum is formed by averaging the quantity (G(ωx, ωy, t)P*(ωx, ωy) − Kp).
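The bias removal in (8.10b) and (8.11) can be sketched with a toy photon-address routine. This is a schematic, not the thesis implementation: `photon_address_autocorr` is a hypothetical helper, and real reductions histogram the coordinate differences far more efficiently than this double loop.

```python
import numpy as np

def photon_address_autocorr(photons, n):
    """Unbiased photon-address autocorrelation: histogram of pairwise
    coordinate differences with the k = l (photon-noise bias) terms omitted,
    accumulated with circular (wrap-around) lags on an n-by-n grid."""
    acf = np.zeros((n, n))
    K = len(photons)
    for k in range(K):
        for l in range(K):
            if k == l:
                continue                      # removes the K*delta(x, y) bias term
            dx = (photons[k][0] - photons[l][0]) % n
            dy = (photons[k][1] - photons[l][1]) % n
            acf[dx, dy] += 1.0
    return acf

# three photons at distinct addresses: 3*2 = 6 ordered pairs, none at lag (0, 0)
ph = [(1, 1), (4, 1), (1, 5)]
a = photon_address_autocorr(ph, 8)
print(a.sum(), a[0, 0])   # 6.0 0.0
```

Because the k = l terms are never accumulated, the estimate is free of the photon-noise spike at the origin and no Kg δ(x, y) subtraction is needed afterwards.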
3000 image frames are involved in the averaging process. The long-exposure images of the two objects are shown in Figs. 8.5a and 8.6a. To show the regions of the truncated sub-image p(x, y), the long-exposure truncated sub-images are shown in Figs. 8.5b and 8.6b, respectively. These were chosen as the brightest regions of the images, and their areas are equal to that of the average PSF. The phasor-images (g(x, y)*p(x, y) − Kp δ(x, y))_θ are shown in Figs. 8.7a and 8.7b, respectively.

In the latter two images, the "ghosts" are shown. Comparing Fig. 8.7b to 8.6b, it can be seen that part of one of the two stars of the truncated object is located outside the truncation region. The edges (or profiles) of the sub-objects are extracted by enhancement and thresholding. The estimated truncated objects f̂(x, y) are formed by filling in a constant intensity inside each profile. We note that here f̂(x, y) includes the dominating star (the brightest star) which contributes to the truncated sub-image; the dominating star contains almost all the energy in the truncated sub-image. The other star also contributes to the truncated sub-image but has little effect on the estimated phase of the object fg(x, y), since it contains much less energy. In fact, it is quite difficult to extract the edges of faint stars which are located either inside or outside the truncated support region T. The above applies to the more general case, i.e. when f(x, y) contains more than one dominating star and/or more than one faint star. The f̂(x, y)'s are shown in Fig. 8.8. The self cross correlation (h_l(x, y)*p̃0(x, y)) is formed by using a series of independent PSFs h_l(x, y). The phasor-image (h_l(x, y)*p̃0(x, y))_θ is shown in Fig. 8.9.

We now examine the performance of the algorithm. From Fig. 8.10 we see that the Fourier phasors of the two original objects are well retrieved.
The fidelity of the phase recovery was measured by the root mean square (RMS) of the phase difference between the recovered θ̂fg and the original θfg, i.e.

σ²_Δθfg = (1/M²) Σ_{ωx,ωy} ( θ̂fg(ωx, ωy) − θfg(ωx, ωy) )²,   (8.13)

where M² is the number of frequency samples.

In our two computer-simulated examples, the noise (Ngp(ωx, ωy)) levels in the self cross spectrum phases are 0.4678 and 0.4264, measured in "Strehl" value e^{−σ²_θgp}, respectively. The estimation errors in the recovered phase, in "Strehl" value e^{−σ²_Δθfg}, are 0.4582 and 0.4180, respectively. The RMS σ_Δθfg is slightly greater than the RMS of the noise (ngp(x, y)) in the self cross spectrum phase. This slight increase in the RMS results mainly from: (1) the estimation error of the truncated object, since our estimate f̂(x, y) is determined simply by its edges, and (2) ignoring the remaining stars (in f(x, y)) which contribute to the truncated sub-image. In reality, stars in the truncated object may not be cylinders. As long as the truncated object is smooth inside its edges and the dominating stars in the truncated object are bright, this estimation error has only a slight effect on the estimate of the phase.

At this point we would like to compare our results with those of the least-squares phase recovery from the TC bispectrum phase. For the same two examples, the noise levels in the bispectrum phase in "Strehl" value are 0.5004 and 0.5207, respectively. The estimation errors in the recovered phase measured in "Strehl" value are 0.4624 and 0.4788, respectively, after 8000 iterations of the least-squares two-subplane phase estimation [10] using unweighted least-squares estimation. The recovered phasor-images from the TC bispectrum are shown in Fig. 8.11. The results from our self cross spectrum are better than those from the bispectrum.

To further verify the validity of our self cross correlation reconstruction, we present our reconstruction of the binary star β Del from real data.
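The "Strehl"-type fidelity measure used in the comparison above can be sketched as follows. This is an illustrative implementation, not the thesis code; in particular, wrapping the phase difference to (−π, π] before squaring is an assumption about how (8.13) is evaluated.

```python
import numpy as np

def strehl_value(theta_est, theta_true):
    """e^{-sigma^2}, with sigma^2 the mean squared (wrapped) phase error, cf. (8.13)."""
    d = np.angle(np.exp(1j * (theta_est - theta_true)))  # wrap to (-pi, pi]
    return np.exp(-np.mean(d ** 2))

rng = np.random.default_rng(1)
theta = rng.uniform(-np.pi, np.pi, (64, 64))             # a reference phase map
print(strehl_value(theta, theta))                        # 1.0 for perfect recovery
noisy = theta + rng.normal(0.0, 0.5, theta.shape)        # RMS phase error ~0.5 rad
print(strehl_value(noisy, theta))                        # close to exp(-0.25) ~ 0.78
```

A value of 1 corresponds to exact phase recovery; the quoted values near 0.42 to 0.47 correspond to RMS phase errors somewhat below 1 radian.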
The photon-list data (size 256×256) were measured with the Steward Observatory's 2.3 m telescope and a Stanford MAMA detector. A total of 4567 frames was used in the reconstruction. The average number of photons per frame K̄g is 650. The wavelength λ is 550 nm. The long-exposure image is shown in Fig. 8.12a. The reconstructed phasor-image of fg(x, y) obtained by using the self cross correlation is shown in Fig. 8.12b. The reconstructed phasor-image of fg(x, y) obtained by using the bispectrum phase is shown in Fig. 8.12c. Fig. 8.12b shows a well-resolved binary star but Fig. 8.12c does not.

8.6 Summary

The results of the investigations illustrate the effectiveness of our self cross correlation algorithm in phase retrieval. Our algorithm has the following characteristics:

1. The estimation error (measured by the RMS σ_Δθfg) of the recovered phase of the unknown objects is only slightly larger than the RMS of the noise in the self cross spectrum phase.
2. The estimation errors of the recovered phase do not propagate from one frequency to other frequencies.
3. Like the bispectrum phase, the self cross spectrum phase possesses the space shift-invariance property.
4. The computation time for speckle data reduction is much shorter than that for data reduction in TC.
5. The self cross correlation is a double correlation; as a result, it has lower sensitivity to noise than the triple correlation.

The computation time needed for the self cross correlation reconstruction is only 5% of that needed for TC reconstruction. The results are better than those of TC and KT.
Our conclusion is that the self cross correlation function can be used effectively to reconstruct the object.

[Figure 8.2 panels: one-dimensional plots of the cross correlation (h(x)*p0(x)) and its phasor-image over −100 ≤ x ≤ 100, with the secondary edge impulses e1, e2 at |x| = a marked]
Figure 8.2 An example of the Theorem

Figure 8.3 Two original objects: (a) the computer-simulated object 1; (b) the computer-simulated object 2

Figure 8.4 The phasor-images of the two original objects in Fig. 8.3: (a) the phasor-image of object 1 (Fig. 8.3a); (b) the phasor-image of object 2 (Fig. 8.3b)

Figure 8.5 Long-exposure images of object 1: (a) the long exposure corresponding to object 1; (b) a sub-image truncated from Fig. 8.5a

Figure 8.6 Long-exposure images of object 2: (a) the long exposure corresponding to object 2; (b) a sub-image truncated from Fig. 8.6a

Figure 8.7 The phasor-images of the self cross spectra: (a) the phasor-image (g*p − Kp δ(x, y))_θ corresponding to object 1; (b) the phasor-image (g*p − Kp δ(x, y))_θ corresponding to object 2

Figure 8.8 The estimated truncated objects: (a) the estimated truncated object f̂(x, y) from Fig. 8.7a; (b) the estimated truncated object f̂(x, y) from Fig. 8.7b

(a) The phasor-image (h_l(x, y)*p̃0(x, y))_θ corresponding to the sub-object in Fig. 8.8a; (b) the phasor-image (h_l(x, y)*p̃0(x, y))_θ corresponding to the sub-object in Fig. 8.8b.
Figure 8.9 The phasor-images of the estimated truncated objects in Fig.
8.8

Figure 8.10 The phasor-images of the images reconstructed by using the self cross correlation method: (a) the phasor-image of the reconstructed object fg(x, y) for object 1; (b) the phasor-image of the reconstructed object fg(x, y) for object 2

Figure 8.11 The phasor-images of the images reconstructed by using the TC method: (a) the phasor-image of object 1 reconstructed by TC; (b) the phasor-image of object 2 reconstructed by TC

Figure 8.12 An example of self cross correlation reconstruction using real astronomical speckle data: (a) the long exposure of the binary star β Del; (b) the reconstructed phasor-image of the binary star β Del obtained by using our self cross spectrum; (c) the reconstructed phasor-image of the binary star obtained by using the bispectrum phase via unweighted least-squares estimation

Chapter 9 OBJECT RECONSTRUCTION FOR THE ANISOPLANATIC CASE

9.1 Introduction

A plane wave propagating through atmospheric turbulence experiences spatial and temporal random changes in its phase. Atmospheric turbulence is therefore represented by an equivalent "phase screen". The waves propagating from two point light sources (stars) separated by an angle may meet different phase screens of turbulence. The atmospheric turbulence is referred to as anisoplanatic if all the point sources in the object meet different phase screens. In such a case, the PSFs are in general space-variant in the image plane.
The atmospheric turbulence can be considered isoplanatic if such differences in the phase screens are negligible; that is, the variation of the PSFs across the image plane can be ignored only when the object is affected by a single isoplanatic patch. The solid angle that a telescope spans of an isoplanatic patch depends on the wavelength and the seeing conditions. In the infrared region, this angle can reach up to 10 arcsec in good "seeing" conditions. Thus, in the infrared region, it is quite reasonable to assume that all parts of an object are affected by one isoplanatic patch. However, in the visible light region (which is considered in this chapter), the solid angle of an isoplanatic patch is only about 3 arcsec. As a result of the small isoplanatic patch, the sources in the object are not affected by one isoplanatic patch but by an anisoplanatic patch. This results in space-variant PSFs.

We have assumed in chapter 5 that an object fg can be partitioned into several sub-objects fk such that each sub-object is affected by only one isoplanatic patch. The non-zero region of each sub-object is assumed to be located near the center of its corresponding isoplanatic patch (see Fig. 5.2). We assume that the cross correlation of the PSFs, (hk*hl), ∀k ≠ l, is small, i.e. the correlations between any two isoplanatic patches are small.

All the traditional reconstruction methods for astronomical images are applicable only when the PSFs are space-invariant. In this chapter, we extend our self cross correlation method to the anisoplanatic case.

9.2 The Fourier Modulus of the Object

In chapters 4 and 5, we gave the full definitions of the sub-objects fk and their corresponding sub-images g0k. For the anisoplanatic case, each sub-object (corresponding to one isoplanatic patch) must be reconstructed separately.
This demands knowledge of the measurements or estimates of the power spectra (|G0k|²) of the sub-images. Unfortunately, due to the spread-out PSFs, the sub-images gk corresponding to the sub-objects fk are usually mutually overlapped. Thus, the measurements of the power spectra of the sub-images are unknown a priori. This is unlike the isoplanatic case, where the sub-image is the image itself and the measurement of its power spectrum is known a priori. We therefore seek estimates of the power spectrum of each sub-image.

In order to estimate the squared Fourier moduli |Fk|² of the sub-objects fk, in the generalized projections method or in this non-iterative method, we first need to obtain the estimate of the power spectrum (|G0k|²) of each sub-image g0k (equivalently, the autocorrelation function (g0k*g0k)). In the anisoplanatic case, only the estimate of (|G0|²), which is the sum of the power spectra of the sub-images and of their cross spectra, is known:

(|G0|²) = Σ_{k=1}^{N0} Σ_{l=1}^{N0} (G0k G0l*),   (9.1)

where each term on the RHS of (9.1) is

(G0k G0l*) = (Hk Hl*) Fk Fl*.   (9.2)

Obviously it is very difficult to separate the terms in (9.1) from each other, since the sub-images formed by different isoplanatic patches are mutually overlapped. The following method is proposed to estimate (|G0k|²).

Without loss of generality we assume the number of isoplanatic patches N0 = 2. The regions of the two sub-images are illustrated in Fig. 9.1.

Figure 9.1 Illustration of the overlapped area of the two sub-images g1 and g2

Consider the region bounded by the same boundaries as g01. The observation in that region is g01 + Δg02, where g01 is the contribution of sub-object f1 and Δg02 is the contribution of sub-object f2.
The average autocorrelation function of g01 + Δg02 is

((g01 + Δg02)*(g01 + Δg02)) = (g01*g01) + (Δg02*g01) + (g01*Δg02) + (Δg02*Δg02).   (9.3)

Since g01 and Δg02 are observations from different isoplanatic patches, the cross correlation functions (Δg02*g01) and (g01*Δg02) in (9.3) are negligible compared to the autocorrelation functions (g01*g01) plus (Δg02*Δg02). Thus (9.3) can be approximated by

((g01 + Δg02)*(g01 + Δg02)) ≈ (g01*g01) + (Δg02*Δg02).   (9.4)

Similarly, the average autocorrelation function of g02 + Δg01 can be approximated by

((g02 + Δg01)*(g02 + Δg01)) ≈ (g02*g02) + (Δg01*Δg01).   (9.5)

According to the properties of the autocorrelation, (g01*g01) has a maximum value at x = 0 and decreases sharply, as the overlapping area shrinks, when x increases or decreases from x = 0. The same applies to the autocorrelation (Δg02*Δg02). As the stars in each sub-object are well separated from the stars in the other sub-object, the center (maximum point) of (Δg02*Δg02) is usually located far from that of (g01*g01) and can be easily identified. The distribution of (Δg02*Δg02) is concentrated near its maximum point; we can view (Δg02*Δg02) as narrow in its spread around its maximum. By space-filtering this autocorrelation function near its maximum, an estimate of the autocorrelation (g01*g01), and therefore of the power spectrum (|G01|²), is obtained. The Fourier modulus |F1| can then be estimated by the method described in Chapter 6 in the case when the photon noise is ignored. In the visible light region, the photon noise dominates.
An optimal filter [10] is used to obtain the estimate of the Fourier modulus of the object:

|F̂1|² = ((|G1|²) − K̄1 − σ²_ng) / ((|H|²) + K̄1 + K̄h + σ²_ng + σ²_nh) · A(ωx, ωy).   (9.6)

In general, the estimate for the kth sub-object is

|F̂k|² = ((|Gk|²) − K̄k − σ²_ng) / ((|H|²) + K̄k + K̄h + σ²_ng + σ²_nh) · A(ωx, ωy).   (9.7)

In (9.7), K̄k is the mean number of photon events per frame for the region bounded by gk; K̄h is the mean number of photon events per frame for an independent isolated unresolved reference source; σ²_nh is the variance of the white Gaussian noise nh(x, y); and A(ωx, ωy) is the ideal diffraction-limited telescope transfer function. In (9.7) we note that we have used (|Hk|²) = (|H|²), as the turbulence is assumed to be ergodic in time and space.

9.3 Phase Recovery From the Self Cross Spectrum

Obtaining the sub-objects' support bounds

For the isoplanatic case, we established in chapter 8 the relation between the phase of the self cross spectrum (G0P*) and the object bounds by a theorem. We now show that this theorem also applies to the anisoplanatic case. Consider the self cross correlation (g0*p0k), k = 1, ..., N0. The kth truncated image p0k can be expressed as a superposition of contributions from different isoplanatic patches:

p0k = p0kk + Σ_{l=1, l≠k}^{N0} p0kl.   (9.8)

Substituting (9.8) and (5.12) into (g0*p0k), we have

(g0*p0k) = Σ_{l=1}^{N0} (g0l*p0kk) + Σ_{l=1}^{N0} Σ_{m=1, m≠k}^{N0} (g0l*p0km).   (9.9)

The terms (g0l*p0km), ∀l ≠ m, on the RHS of (9.9) can be ignored due to the low correlations between the different isoplanatic patches. The terms (g0l*p0km), ∀l = m, on the RHS of (9.9) have high correlations, since hl and p0kl belong to the same isoplanatic patch. Thus (9.9) can be approximated by

(g0*p0k) ≈ (g0k*p0kk) + Σ_{l=1, l≠k}^{N0} (g0l*p0kl).   (9.10)

As we have assumed, the stars of each sub-object fk are mutually well isolated from those in the other sub-objects.
As a consequence, the centers (where the maxima appear) of the self cross correlations (g0l*p0kl), ∀l ≠ k, are located outside the sub-region of fk. This will be shown in our simulation examples. Besides, the maximum value of (g0l*p0kl), ∀l ≠ k, is usually much smaller than that of (g0k*p0kk), since the truncated area of p0kl, ∀l ≠ k, is only a small part of g0l (referring to Fig. 9.2).

Figure 9.2 Illustration of the sub-image regions

In (9.10), the term (g0k*p0kk) dominates. This term is

(g0k*p0kk) = ((hk ⊗ fk)*p0kk) = (hk*p0kk) ⊗ fk.   (9.11)

The object reconstruction problem for the anisoplanatic case can thus be reduced to several sub-object reconstruction problems. Therefore our theorem also applies to the anisoplanatic case. The phasor-image of the self cross correlation (g0*p0k), k = 1, ..., N0, also preserves the edges and the phase information of the unknown sub-object fk, k = 1, ..., N0. These edges give the tight sub-object support domain. Both the generalized projection phase retrieval method of chapter 7 and the non-iterative phase retrieval method of chapter 8 are applicable to the anisoplanatic case.

The object reconstruction case when the truncated object has no obvious edges

In the above, we have assumed that all the stars (or at least one star) show fairly sharp edges. However, in reality, objects with sharp edges do not always exist. It may seem that it is not always possible to obtain the estimate of the sub-object fk from the phasor-image (g0*p0k)_θ, since fk may have no obvious edges.

In fact, the phase retrieval problem becomes much easier when the object has no obvious edges. In this case, the phasor-image (hk*p0kk)_θ has only one strong impulse, at x = 0, y = 0, since the ghosts (which mostly result from the self cross correlations of the edge information of g0 and p0k) in the phasor-image (g0*p0k)_θ (referring to chapter 8) will be faint as a result of fk having no edges.
Thus the phasor e^{jθ(g0*p0k)} is an approximate estimate of the Fourier phasor of the sub-object fk. This is supported by our simulation examples. The approximate estimate f̂k of the object is then obtained by simply combining the estimate of the Fourier modulus with the phasor.

This approximate estimate is used as the initial estimate of the generalized projections method described above, but without tight object support bounds given. Since this initial estimate f̂k is usually a good approximation of the sub-object fk, the iterations are most likely to converge to its solution. This is demonstrated in our simulation examples.

9.4 Results for the Anisoplanatic Case

In our simulations, several isoplanatic phase screens φi, i = 1, ..., M, are truncated from the same large computer-simulated phase screen array. The generation of the phase screens is described in chapter 7. The correlations between any two isoplanatic phase screens can be altered by changing the distance between their centers. Two isoplanatic phase screens have the least correlation when their separation angle is large enough (e.g. > 3 arcsec in the visible light region). The diameter of the telescope is D = 2 m. A series of PSFs for an isoplanatic phase screen is simulated from the exponential of the phase screen (the PSF being the Fourier transform of the autocorrelation of e^{jφi}):

hi(x, y, t) = a |FT{e^{jφi(x, y, t)}}|²,  i = 1, ..., M,   (9.12)

where a is a scale factor such that ∬ hi(x, y, t) dx dy = 1, i = 1, ..., M. The convolution of the PSFs with the object

fg = Σ_{i=1}^{M} fi   (9.13)

produces an anisoplanatic-turbulence-degraded image

g0 = Σ_{i=1}^{M} hi ⊗ fi.   (9.14)

When hi = hj, ∀i ≠ j, this reduces to the isoplanatic case. The intensity of each pixel of an image is then replaced by a Poisson-distributed random number to simulate the observation g. The simulations were done at an average of 386 photons per frame. Without loss of generality, two truncated isoplanatic phase screens are used to simulate an equivalent anisoplanatic phase screen in our examples.
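The simulation pipeline of (9.12)-(9.14) can be sketched as follows. This is an illustrative stand-in, not the thesis code: the white random phase screens here are a placeholder for the Kolmogorov screens of chapter 7, the point-like sub-objects are arbitrary, and the photon simulation simply draws each pixel from a Poisson law at the stated mean count.

```python
import numpy as np

rng = np.random.default_rng(2)
n, M = 64, 2
phi = 2.0 * np.pi * rng.random((M, n, n))      # stand-in phase screens phi_i

f = np.zeros((M, n, n))                        # one point-like sub-object per patch
f[0, 16, 16] = 1.0
f[1, 48, 44] = 1.0

g0 = np.zeros((n, n))
for i in range(M):
    h = np.abs(np.fft.fft2(np.exp(1j * phi[i]))) ** 2
    h /= h.sum()                               # unit-volume speckle PSF, cf. (9.12)
    g0 += np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(f[i])))   # (9.14)

# replace each pixel by a Poisson count, ~386 photons per frame on average
lam = np.clip(386.0 * g0 / g0.sum(), 0.0, None)
g = rng.poisson(lam)
print(round(g0.sum(), 6), g.sum() > 0)
```

Setting `phi[0] = phi[1]` would make the two PSFs identical and reduce the model to the isoplanatic case, as noted after (9.14).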
The self cross spectrum is formed by averaging the quantity (G(ωx, ωy, t)Pk*(ωx, ωy) − Kpk), k = 1, 2; 3000 image frames are involved in the averaging process. The original object fg and its sub-objects f1, f2 are shown in Fig. 9.3. The long-exposure image of the object fg is shown in Fig. 9.5a. To show the regions of the truncated sub-images p0k, k = 1, 2, the long-exposure truncated sub-images are shown in Figs. 9.5b and c. These are chosen as the bright regions of the images in each isoplanatic region. The areas of p0k(x, y), k = 1, 2, are equal to that of the PSF. The phasor-images

(g(x, y)*pk(x, y) − Kp δ(x, y))_θ,  k = 1, 2,

are shown in Figs. 9.6a and 9.6b, respectively. The phasor-images (hk*p0k)_θ, k = 1, 2, are shown in Figs. 9.7a and b. Since the object in Fig. 9.3 has no obvious edges, only a very bright impulse is seen in the phasor-image (hk*p0k)_θ. This explains why the convolutions of (hk*p0k)_θ with the sub-objects fk well preserve the information of these sub-objects. The initial estimates of the sub-objects are shown in Fig. 9.8. Fig. 9.9 shows re-scaled versions of Fig. 9.8, illustrating how the other isoplanatic images g0l, l ≠ k, affect the initial estimate of the sub-object fk. The initial estimates are further improved by the generalized projections method. The final reconstructed sub-objects are shown in Fig. 9.10.

Figure 9.3 The original object and its sub-objects (notice the absence of sharp edges): (a) fg; (b) f1; (c) f2

Figure 9.4 The phasor-images of the original object and of its sub-objects: (a) fgθ; (b) f1θ; (c) f2θ

Figure 9.5, with panels (a) (g), (b) (p1), (c) (p2): the long-exposure image of the object of Fig.
9.3a and the truncated long-exposure images resulting from the anisoplanatic patches

Figure 9.6 The phasor-images of the self cross spectra: (a) (g*p1)_θ; (b) (g*p2)_θ

Figure 9.7 The phasor-images of the self cross correlations (h1*p01) and (h2*p02): (a) the self cross correlation (h1*p01); (b) the self cross correlation (h2*p02)

Figure 9.8 The initial estimates of the sub-objects from the cross spectrum: (a) f̂1; (b) f̂2

Figure 9.9 The scaled versions of the initial estimates of the sub-objects, shown to reveal the artifacts of overlap between isoplanatic sub-image regions: (a) f̂1; (b) f̂2

Figure 9.10 The improved estimate of the sub-object obtained by the generalized projections method

9.5 Summary

For an anisoplanatic imaging patch, the object can be partitioned into several sub-objects, each of which corresponds to one approximately isoplanatic patch. The sub-images from different isoplanatic patches are usually partly overlapped. However, as long as the stars belonging to different sub-object regions are "well separated", each sub-object can be faithfully reconstructed as in the case of one isoplanatic patch. The results of the investigations illustrate the effectiveness of our self cross correlation object reconstruction algorithm in the anisoplanatic case.

Chapter 10 CONCLUSIONS

10.1 Advantages of the Self Cross Correlation Image Reconstruction Approach

We have introduced the concept of the self cross correlation between the observed images and a truncated part of them.
In this thesis it has been shown that diffraction-limited reconstruction of stellar images can be performed effectively by using the self cross correlation. The results of the reconstruction algorithms show that they are more accurate and efficient than the other reconstruction methods.

The self cross spectrum phase contains some very important and explicit information about the object; in particular, it contains the full Fourier phase information of the object. Phase retrieval of the object from the self cross spectrum phase shows much more noise tolerance than other methods. This is because the self cross spectrum is: (1) a low-order (second-order) correlation, and (2) shift-invariant. Each of the KT and TC methods possesses only one of these two advantages. Besides, unlike the KT and TC methods, noise in the self cross spectrum phase does not result in noise propagation from one frequency to another in the retrieved phase of the object. Both the bispectrum phase and the KT spectrum phase give the phase difference between adjacent frequencies; in retrieving the phase from the phase differences, noise propagation from one frequency to another occurs. The self cross spectrum phase, however, has a direct relationship with the Fourier phase of the object. The phase of the object is extracted from the phasor-images of the measured self cross correlation, and noise propagation does not occur.

The other advantages of the self cross correlation method are that: (1) it does not show the convergence problems which may be present in the KT, the TC and the IFT methods; (2) it needs much less computation. The computing time saving is huge: due to the low-order correlation of the self cross spectrum, its formation needs less than 5% of the computations of one component of a sub-plane of the bispectrum in our examples.
Any two sub-planes of the bispectrum need four independent calculations for their components.

Another important advantage of the self cross correlation method is that it is not only applicable to the isoplanatic imaging case but can also be extended to the anisoplanatic case. All the previous reconstruction methods are limited to the isoplanatic case. When these methods are applied to the anisoplanatic imaging case, image truncation is often encountered. Unfortunately, even modest truncation can cause severe loss of resolution. Self cross correlations, on the other hand, allow us to reconstruct objects beyond the isoplanatic limit.

10.2 Contributions

The main contribution of this thesis is the introduction of a new image reconstruction approach which is based on the self cross correlation between an image and a part of it. The specific contributions are as follows: 1) the novel concept of the self cross spectrum has been introduced; 2) its mathematical model and its signal-to-noise ratio are derived; 3) the relationships of the self cross spectrum phase to the phase of the object are derived from a new theorem; 4) several effective reconstruction algorithms are developed; 5) the reconstruction algorithms based on the self cross spectrum are extended to the anisoplanatic case.

The theorem provides the tools to analyze the properties of the self cross spectrum phase and to retrieve the phase of the object from the self cross spectrum. From the theorem the relationships between the self cross spectrum phase and the object's Fourier phase are derived. Such relationships allow us to retrieve the object phase from the self cross spectrum phase easily.

It has been found that if the stars and sub-objects in the image have fairly sharp edges, then the phasor-image of the self cross correlation preserves the information about these edges of the object. The edge information can be used as a tight object support bounds constraint in the IFT method.
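For concreteness, the support-constrained iterative Fourier transform (IFT) loop referred to here is, in its simplest form, the Gerchberg-Saxton/Fienup error-reduction iteration. The sketch below is a generic illustration, not the thesis implementation; the toy object, support mask and iteration count are invented for the example. The estimate is alternately forced to match the measured Fourier modulus and to be non-negative and zero outside the support:

```python
import numpy as np

def error_reduction(modulus, support, n_iter=200, seed=0):
    """Generic error-reduction phase retrieval.

    modulus : measured Fourier modulus |F(u)| of the object
    support : boolean mask; the object is zero (and non-negative) outside it
    """
    rng = np.random.default_rng(seed)
    # Start from a random phase estimate with the measured modulus.
    F = modulus * np.exp(1j * rng.uniform(0, 2 * np.pi, modulus.shape))
    for _ in range(n_iter):
        g = np.fft.ifft2(F).real
        g = np.where(support & (g > 0), g, 0.0)   # object-domain constraints
        G = np.fft.fft2(g)
        F = modulus * np.exp(1j * np.angle(G))    # Fourier-domain constraint
    return g

# Toy object: two point sources inside a known support region.
obj = np.zeros((32, 32))
obj[10, 12] = 1.0
obj[14, 18] = 0.6
support = np.zeros_like(obj, dtype=bool)
support[8:17, 10:21] = True

est = error_reduction(np.abs(np.fft.fft2(obj)), support)
```

In practice error-reduction can stagnate, and Fienup's hybrid input-output update is often substituted; the point here is only that a tight support mask, such as one derived from the edge information in the phasor-image, is what drives the iteration toward a unique object.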
This constraint is crucial to the convergence of the IFT method. The self cross spectrum phase is an approximate phase estimate of the object if the "truncated object" possesses soft edges. The truncated object is the part of the object that contributes to the truncated image. This phase estimate is further improved by the IFT method. If the edges of the stars in the truncated object exist, then the phase of the object can be obtained by using a non-iterative method.

The statistical models of the self cross correlation (and the self cross spectrum) for the general anisoplanatic case are derived. The phase retrieval algorithms are successfully extended to the anisoplanatic case, to which other available methods do not apply.

We must point out that the main objective of this thesis is the retrieval of the diffraction-limited Fourier phase, which is assumed to be fully contained in a series of short-exposure images, by using the self cross correlation method. Thus our reconstruction is fundamentally different from traditional image restoration, which requires a loss of resolution in the data in order to suppress the noise.

10.3 Future work

In this thesis, object reconstruction from self cross correlations is developed mainly for stellar objects, which usually have simple image structures. It is especially used to reconstruct objects in the presence of random wavefront deformations. It is desirable to extend this methodology to form a general theory of self cross correlation reconstructions.

As the self cross correlation correlates the image with an arbitrarily truncated part of this image, the self cross correlation is not linearly dependent on any of the following parameters: the position, size and brightness distribution of the truncated image. For further applications it is very important to establish more rigorous relations between the self cross correlation and the truncated images.
Also, we need to establish more concrete mathematical relations between the self cross spectrum phase and the Fourier phase of the object, so that the Fourier phase of the object can be predicted directly from the self cross spectrum phase. In that way, the Fourier phase of the object could be iteratively decorrelated (equivalently, deconvolved) from the self cross spectrum in a manner much like blind deconvolution.

