Curvelet-domain preconditioned "wave-equation" depth-migration with sparseness and illumination constraints Herrmann, Felix J.; Moghaddam, Peyman P. 2004

Full Text

Curvelet-domain preconditioned "wave-equation" depth-migration with sparseness and illumination constraints

Felix Herrmann and Peyman Moghaddam, EOS, University of British Columbia

Abstract

A non-linear edge-preserving solution to the least-squares migration problem with sparseness and illumination constraints is proposed. The applied formalism explores Curvelets as basis functions. By virtue of their sparseness and locality, Curvelets not only reduce the dimensionality of the imaging problem but also naturally lead to a dense preconditioning that almost diagonalizes the normal/Hessian operator. This almost diagonalization allows us to recast the imaging problem into a "simple" denoising problem. As such, we are in the position to use non-linear estimators based on thresholding. These estimators exploit the sparseness and locality of Curvelets and allow us to compute a first estimate for the reflectivity, which approximates the least-squares solution of the seismic inverse scattering problem. Given this estimate, we impose sparseness and additional amplitude corrections by solving a constrained optimization problem. This optimization problem is initialized and constrained by the thresholded image and is designed to remove remaining imaging artifacts and imperfections in the estimation and reconstruction.

Introduction

Least-squares migration and migration deconvolution have received a recent flurry of interest [7, 8], and for good reason: inverting for the normal operator (the demigration-migration operator) removes many of the amplitude artifacts related to acquisition and illumination imprints. However, the downside to this approach is that (unconstrained) least-squares tends to fit noise and smear the energy. In addition, artifacts may be created by imperfections in the model and the possible null space of the normal operator [11]. Regularization can alleviate some of these problems, but may come at the expense of resolution, certainly when quadratic penalty functions are used. By imposing sparseness constraints and illumination normalization on the imaged reflectivity [see for instance 13, 8, 11], significant progress has been made to improve the signal-to-noise ratio (SNR) and frequency content of seismic images.

In this paper, we combine the above techniques with recent developments in Applied Computational Harmonic Analysis, non-linear estimation theory and global constrained optimization. Our approach is designed to (i) deal with substantial amounts of noise (SNR ≤ 0); (ii) use the optimal (sparse and local) representation properties of Curvelets; (iii) exploit their almost diagonalization of the normal/Gram/Hessian operator; (iv) use non-linear thresholding estimation techniques, supplemented by (v) constrained optimization on the estimated coefficients. This optimization not only imposes additional sparseness on the model but also restores the amplitudes within constraints given by the illumination [11].

The main motivation for this paper is to derive an imaging scheme that is robust in cases where there are substantial amounts of noise in the data space (SNR ≤ 0). These noise levels are intended to mimic situations where the high frequencies are severely deteriorated by the presence of noise. Without loss of generality, we assume the additive noise to be white Gaussian.
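To make this noise regime concrete, here is a minimal numpy sketch (our own illustration, not part of the original experiments) that scales white Gaussian noise so the data reach a prescribed SNR; at SNR = 0 dB the noise carries as much energy as the signal.

    import numpy as np

    def add_white_gaussian_noise(d, snr_db=0.0, seed=0):
        """Add white Gaussian noise scaled so that d + noise has the requested
        SNR in dB; SNR = 0 dB means equal signal and noise energy."""
        rng = np.random.default_rng(seed)
        noise = rng.standard_normal(d.shape)
        scale = np.sqrt(np.mean(d ** 2) / (np.mean(noise ** 2) * 10.0 ** (snr_db / 10.0)))
        return d + scale * noise, scale

    # A toy "shot record" contaminated at SNR = 0, mimicking data whose
    # high frequencies are buried in the noise.
    d = np.random.default_rng(1).standard_normal((256, 128))
    d_noisy, sigma = add_white_gaussian_noise(d, snr_db=0.0)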
For coherent noise removal (such as multiples) in the data space, we refer to other contributions by the first author to these Proceedings [see also 6].

As with adaptive subtraction [6], the key component of our algorithm is the use of basis functions that are local both in position and dip and that are well behaved under the operators. In that respect, we are in the business of finding the appropriate full preconditioners for the seismic imaging problem. If recently developed non-separable wavelets, such as Curvelets, can indeed be used as preconditioners for the seismic imaging problem, then we can use non-linear estimators to significantly improve the SNR and resolution of noisy seismic images.

The paper is organized as follows. First, we briefly discuss the imaging problem and motivate why preconditioning is important, in particular when the preconditioner is defined by a generic (non-problem-specific) transform that is close to an unconditional basis and that is local and sparse on the model space. Next, we argue that diagonally weighted Curvelets are close to the ideal preconditioners for seismic imaging. We proceed by introducing a non-linear thresholding estimator that acts on the preconditioned image space. In that space, the normal and covariance operators are close to being diagonalized by what one can call the "Curvelet symbol" of the normal operator. Because thresholding on the coefficients of local and sparse basis functions is asymptotically minimax (it minimizes the maximum error given the worst possible Gaussian prior [see e.g. 9]), we obtain an edge-preserved least-squares image with good SNR by simply thresholding and weighting the migrated image. This image is subsequently used as a starting point for a global optimization scheme designed to impose sparseness (via a global sparseness constraint), remove imaging artifacts related to the basis-function decomposition, and eliminate false thresholding and poor illumination. The method is illustrated with a synthetic example using a "wave-equation" depth-migration operator [11].

The seismic imaging problem preconditioned

In all generality, the seismic imaging problem can, after linearization, be cast into the following form for the forward model describing our data:

    d = K m + n,   (1)

where K is the wave-equation demigration operator based on the single-square-root equation [12], m the model containing the reflectivity, and n white Gaussian noise. The pertaining inverse problem has the following general form [see e.g. 9, 14]:

    m̃ :  min_m  (1/2) ||d − K m||₂² + λ J(m),   (2)

where J(m) is an additional penalty functional that contains prior information on the model, such as particular sparseness constraints, and the control parameter λ depends on the noise level.

The question now is how to find the appropriate domain in which to solve this inverse problem, whose formal solution in the noise-free case is found by taking the Moore-Penrose pseudo-inverse,

    m = (K^T K)^{-1} K^T d = K† d,   (3)

in which the normal operator K^T K acts as a pseudo-differential operator (PsiDO) and the migration operator K^T as a Fourier Integral Operator (FIO). Several approaches exist to compute the solution of this least-squares problem, ranging from explicit analytical constructions of the pseudo-inverse of the normal operator [see e.g. 1, and the references therein] to iterative solutions with Conjugate-gradient type algorithms [see e.g. 7, 8].

                FIO   PsiDO   d, m
    Fourier      √      √       ?
    Wavelets     ?      ?       ?
    Curvelets    √      √       √

Table 1: Properties of Curvelets with regard to seismic imaging. As one can see, Curvelets not only score on representing the data and model but also score on the operators. Fourier is good for the operators (as the name FIO suggests) but fails to represent the data/model sparsely.
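The iterative route translates directly into standard inverse-problem code. The sketch below is a simplification under stated assumptions: a hypothetical 1-D convolution stands in for the demigration operator K, and SciPy's LSQR computes a damped least-squares estimate of m, the damping playing the role of a quadratic penalty J(m) in Eq. (2).

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, lsqr

    # Hypothetical stand-in for the demigration operator K: a 1-D convolution
    # (blurring) acting on a spiky reflectivity model.
    kernel = np.array([0.25, 0.5, 1.0, 0.5, 0.25])

    def demigrate(m):  # K m
        return np.convolve(m, kernel, mode="same")

    def migrate(d):    # K^T d, the adjoint (migration)
        return np.convolve(d, kernel[::-1], mode="same")

    n_model = 200
    K = LinearOperator((n_model, n_model), matvec=demigrate, rmatvec=migrate)

    # Synthetic experiment following Eq. (1): d = K m + n.
    rng = np.random.default_rng(1)
    m_true = np.zeros(n_model)
    m_true[[40, 90, 150]] = [1.0, -0.7, 0.5]
    d = demigrate(m_true) + 0.1 * rng.standard_normal(n_model)

    # Damped least-squares solution, min_m ||d - K m||^2 + damp^2 ||m||^2,
    # i.e. Eq. (2) with a quadratic penalty, solved iteratively (cf. [7, 8]).
    m_ls = lsqr(K, d, damp=0.1, iter_lim=50)[0]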
The first of these approaches, the explicit construction, derives from the observation that the leading-order behavior of the (de)migration operators corresponds to that of certain FIOs, while the composition of the two corresponds, under certain conditions, to an invertible elliptic PsiDO [1]. The FIOs are responsible for moving the events from model to data space and vice versa, while the PsiDO can be interpreted as a generalized non-stationary filtering operator that does not (re)move or create events (read: singularities). While the explicit approach has the distinct advantage of being numerically fast, it requires an explicit construction of the operators and may suffer from the typically less-than-ideal data acquisition. The iterative approach is more flexible, since it only requires access to the migration and demigration operators, and it may be less sensitive to the acquisition. However, noise, computational cost and inaccuracies in the operators may still be limiting factors.

Therefore, we opt for a "combination" of the aforementioned methods by constructing explicit preconditioners that almost diagonalize the normal operator (as compared to FIOs and PsiDOs, which are purely diagonal in the Fourier domain), while also being sparse and local on the model space. In this way, we are not only well positioned to compute iterative solutions to the problem, but we can also use diagonal non-linear estimation techniques [9] in combination with global constrained optimization [10] in order to remove the noise, restore the amplitudes, impose sparseness constraints and incorporate the influence of illumination.

Our first priority is to find the appropriate preconditioners for the system in Eq. 3. To be more specific, we need to construct preconditioners built from generic local basis functions that allow us to write

    B^T d = B^T B x + B^T n.   (4)

In this expression, B = K P and B^T = P^T K^T are the preconditioned demigration and migration operators, and the model is related to the new model according to the linear functional x = P^T m. Successful preconditioning yields an almost diagonalization of the normal equations, corresponding to the property B^T B ≈ I.

The question now is whether we can find a generic (i.e. not depending on K and K^T), close to orthonormal "miracle" basis-function transform C that allows us to write the preconditioner in the form P = C^T G^{-1}, where G^{-1} is a diagonal matrix whose entries are the regularized reciprocals of diag{G}. Provided such a transform exists, we are not only in good shape to iteratively solve the least-squares problem, but, more importantly, we can also exploit the fact that C, and hence P, is close to an unconditional basis for the model space. Before going into detail on how to construct non-linear estimators, we first introduce Curvelet frames, which almost diagonalize the normal operator.
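To make Eq. (4) concrete, the following sketch wires a preconditioner of the form P = C^T G^{-1} around a toy operator. Everything here is hypothetical scaffolding: an orthonormal DCT stands in for the Curvelet transform C, the matrix K is a small random stand-in for demigration, and G² is read off the transform-domain diagonal of the normal operator, with a small regularization in the reciprocal as described above.

    import numpy as np
    from scipy.fft import dct, idct

    rng = np.random.default_rng(0)
    n = 64

    # Small explicit stand-in for the demigration operator K (hypothetical).
    K = np.tril(rng.standard_normal((n, n))) / np.sqrt(n)
    normal = K.T @ K                       # the normal/Hessian operator K^T K

    C = lambda v: dct(v, norm="ortho")     # stand-in for the Curvelet transform C
    Ct = lambda c: idct(c, norm="ortho")   # its inverse; equals the adjoint C^T here

    # Transform-domain diagonal of the normal operator: G^2 ~ diag{C K^T K C^T}.
    G2 = np.array([C(normal @ Ct(np.eye(n)[i]))[i] for i in range(n)])
    G = np.sqrt(np.maximum(G2, 0.0))

    eps = 1e-3 * G.max()
    G_inv = 1.0 / (G + eps)                # regularized reciprocal of diag{G}

    # Preconditioned migration P^T K^T d with P = C^T G^{-1}, i.e. P^T = G^{-1} C.
    def precond_migrate(d):
        return G_inv * C(K.T @ d)

    # B^T B = P^T (K^T K) P should sit closer to the identity than K^T K itself.
    BtB = np.column_stack([G_inv * C(normal @ Ct(G_inv * e)) for e in np.eye(n)])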
Curvelets and seismic imaging

Curvelets, as proposed by [3], are a relatively new family of non-separable Wavelets that effectively represent seismic images with reflectors on piecewise smooth curves (m ∈ C²). For this class of functions, Curvelets obtain near-optimal nonlinear approximation rates, i.e. the error decays as O(N⁻²), as opposed to O(N^{-1/2}) for Fourier, where N is the number of magnitude-sorted coefficients kept in the reconstruction. Besides this greatly improved nonlinear approximation rate, Curvelets are local both in position and dip. This locality, combined with the high approximation rate and the fact that Curvelets are close to an unconditional basis (for the above class of functions), allows the definition of diagonal estimators that are asymptotically minimax for denoising [see e.g. 9]. These non-linear estimators minimize the maximal error given the worst Bayesian prior [9] and are given by

    x̃ = C† Θ_λ(C y).   (5)

This basis-function decomposition, followed by thresholding and reconstruction, solves for x given noisy data y = x + n_λ with n_λ ~ N(0, λ²), by applying a hard thresholding Θ_λ(·) with a threshold level that depends on the standard deviation λ of the noise. Since Curvelets are local and sparse on the model, it is relatively easy to see why this thresholding procedure works: the edges in the model end up in the large coefficients, which survive the thresholding. The thresholding depends only on the magnitude of the coefficients and not on their sign or phase, which is a direct consequence of the (close to) unconditional-basis property.

How do Curvelets obtain such a high non-linear approximation rate? Without being all-inclusive [see 3 for details], Curvelets are

- multi-scale, i.e. they live in different dyadic coronae in the 2-D Fourier domain;
- multi-directional, i.e. they live on wedges within these coronae;
- anisotropic, i.e. they obey the scaling law width ∝ length²;
- directionally selective, with the number of orientations doubling every other scale (∝ 1/√scale);
- local both in position (x, z) and wavenumber (k_x, k_z);
- almost orthogonal: they form tight frames.

Besides their high non-linear approximation rate, Curvelets are well behaved under certain (homogeneous) operators [4, and the references therein]. This property has led to quasi-SVD techniques, in which the singular vectors of the operators are replaced by generic basis functions and their images under the operators. These methods work because the basis functions remain approximately invariant under the operators. As we can see from Fig. 1, this property also holds for imaging operators: not only do Curvelets remain Curvelet-like, but a rapid decay of the Curvelet coefficients is also observed. These observations are consistent with recent work by [2], where it is shown that FIOs are sparse in the Curvelet domain and, consequently, PsiDOs can be expected to be almost diagonal.

Non-linear diagonal estimation by thresholding

Obviously, the properties listed above make Curvelets ideal for seismic imaging. Unfortunately, the background-velocity dependence and sheer size of seismic data make it prohibitively expensive to use the quasi-SVD methods, because these techniques require a separate (de)migration for each Curvelet to compute the quasi-singular values (the G's, see below). Instead, we settle for a formulation in which the normal and covariance operators (for the noise) are approximately diagonalized. By using Curvelets in the definition of the preconditioners (cf. Eq. 4), with C K^T K C^T ≈ diag{C K^T K C^T} = diag{G²}, the preconditioned system can approximately be written in the form

    y = x + n.   (6)

Not only does the preconditioning take care of the operator, it also "whitens" the noise.
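The diagonal diag{G²} need not be formed by brute force. A common trick, shown below as a sketch (and not necessarily the Monte-Carlo scheme of [5] used for the example later on), estimates the diagonal of a symmetric operator from a few random probe vectors, using the fact that E[v ⊙ (C A C^T v)] = diag{C A C^T} when the entries of v are zero-mean with unit variance.

    import numpy as np

    def estimate_symbol(apply_normal, transform, inv_transform, n, n_probes=20, seed=0):
        """Estimate diag{C A C^T}, the transform-domain 'symbol' of a symmetric
        operator A, from Rademacher probes v: E[v * (C A C^T v)] = diag{C A C^T}."""
        rng = np.random.default_rng(seed)
        num = np.zeros(n)
        den = np.zeros(n)
        for _ in range(n_probes):
            v = rng.choice([-1.0, 1.0], size=n)               # probe in the transform domain
            Av = transform(apply_normal(inv_transform(v)))    # C A C^T v
            num += v * Av
            den += v * v
        return num / den

    # Hypothetical usage with the stand-ins of the previous sketch:
    #   G2_est = estimate_symbol(lambda m: normal @ m, C, Ct, n)
    # A handful of demigration-migration pairs then replaces the one pair per
    # Curvelet that the quasi-SVD route would require.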
We are thus set to use thresholding to obtain a minimax estimate for x, using

    x̂0 = Θ_λ(y).   (7)

This expression is equivalent to the approximate solution for the model presented in [5],

    x̃0 = C† (G²)† Θ_{λG}(C K^T d) = P Θ_λ(P^T K^T d),   (8)

in which the thresholded adjoint is corrected by the pseudo-inverse of the Curvelet "symbol" of the normal operator, (G²)†.

Fig. 1: Properties of the Curvelet transform under imaging. Top row: a Curvelet and its demigration for constant-velocity Kirchhoff. Second row: the same for the "wave-equation" migration Hessian in the Marmousi model. In both cases, Curvelets remain Curvelet-like. Third row: f-k spectrum and a plot of the rapid decay of the "off-diagonal" coefficients of the Curvelet images. Fourth row: A x_true, which should be close to x_true, and C† G, the inverse transform of the Curvelet "symbol".

The different steps that lead to the above diagonal estimate are summarized in Figs. 1 and 2. From Fig. 1 it is clear that Curvelets remain more or less Curvelet-like under the operators, while the preconditioned normal operator A = B^T B ≈ I.
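A minimal sketch of the estimator in Eqs. (7)-(8), again written against the hypothetical stand-ins of the earlier sketches (a generic transform pair C/Ct, a diagonal G and a migration routine) rather than a real Curvelet transform or wave-equation operator: migrate the noisy data, precondition with the symbol, hard-threshold, and undo the symbol on the way back.

    import numpy as np

    def hard_threshold(c, lam):
        """Hard thresholding Theta_lam: keep |c| >= lam, zero out the rest (Eq. 7)."""
        return np.where(np.abs(c) >= lam, c, 0.0)

    def threshold_estimate(d, migrate, C, Ct, G, lam=2.5, eps=1e-3):
        """First reflectivity estimate along the lines of Eq. (8):
        x = G^{-1} C K^T d (preconditioned, whitened image coefficients),
        x0 = Theta_lam(x), and the symbol-corrected image P x0 = C^T G^{-1} x0."""
        g = np.maximum(G, eps * np.max(G))   # regularized Curvelet 'symbol'
        x = C(migrate(d)) / g                # P^T K^T d
        x0 = hard_threshold(x, lam)          # Eq. (7)
        return Ct(x0 / g), x0                # Eq. (8) image and thresholded coefficients

    # Hypothetical usage with the earlier stand-ins:
    #   m0, x0 = threshold_estimate(noisy_data, migrate, C, Ct, G, lam=2.5)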
Amplitude restoration by global optimization

So far, we have argued that the bulk of the noise can be removed and the amplitudes partly restored. However, the approach is approximate, and one could expect improved results by computing the pseudo-inverse

    x̃ = A† x̂0,   (9)

where A = B^T B. Even though this approach may seem attractive, since the preconditioning improves the condition number of the matrix (see other contributions by the authors to the Proceedings of this Conference), there remain the problems of left-over noise, high computational cost, shortcomings in the operator, and the inclusion of sparseness constraints on the model x = P^T m. To accommodate these issues, we formulate the following approximate constrained-optimization problem (the operator A can be included; the approach was motivated by [3], who applies a similar technique to denoising):

    m̃ :  min_m J(m)  subject to  |x − x̂0|_μ ≤ e_μ  for all μ,   (10)

with the tolerances set according to

    e_μ = I_μ    if |x̂0|_μ ≥ |λ I|_μ,
    e_μ = λ I_μ  if |x̂0|_μ < |λ I|_μ.   (11)

In this formulation, the initial guess for the preconditioned model, x̂0, is updated such that the additional sparseness constraint is minimized. The optimization runs over the whole index set μ, and the tolerance differs by a factor λ between the coefficients that were initially thresholded away and those that survived the thresholding: the coefficients that were perceived to be noise are allowed to vary more during the optimization. This global optimization problem is solved with an Augmented-Lagrangian technique [10], with the Lagrange multipliers set by the thresholded result x̂0. The solver uses a steepest-descent technique and involves, in each iteration, an inverse Curvelet transform combined with a weighting that depends on the diagonalized normal operator.

The above approach depends heavily on how well the preconditioners diagonalize the normal operator. Since iterating with A is generally prohibitively expensive, one may consider (i) the Lanczos tri-diagonalization method (see other contributions to these Proceedings) or (ii) the illumination-based normalization method [11], which uses the following diagonal weighting to approximate the normal operator for a reference model that is close to the actual model:

    W^{-2} = diag{x̂0} [diag{A x̂0}]^{-1} ≈ A†.   (12)

We use this diagonal preconditioner in the constrained-optimization problem

    m̃ :  min_m J(m)  subject to  |W² x − x̃0|_μ ≤ e_μ  for all μ,   (13)

with x̃0 = A x̂0. In this formulation, the initial guess for the reference model, which we naturally take to be the result obtained by thresholding, is updated such that the additional constraint is minimized. The elegance of this formulation lies in the fact that (i) explicit use is made of the almost diagonalization of the operators, i.e. the weighted Curvelets act as non-diagonal preconditioners, and (ii) the algorithm is constrained so that it cannot wander off too far from the initial guess. Because of the minimax property of thresholding, we can be assured that the initial guess predominantly contains features with high SNR, which renders the illumination normalization effective.

Example

To illustrate the proposed method, we include a synthetic example of "wave-equation" depth migration for the Marmousi dataset. A Monte-Carlo sampling technique was used to compute G² [see 5 for detail]. Fig. 2 shows the results of our imaging algorithm. We applied the algorithm to data with SNR = 0; the noisy image is included in Fig. 2 (top). Next, we threshold the noisy image with Eq. 7 and apply the (G²)† correction (cf. Eq. 8), with threshold parameter λ = 2.5. As we can see from Fig. 2 (second and third panels), the thresholding and subsequent correction have a clear influence: the thresholding removes the noise and preserves most of the edges, while the correction restores part of the amplitudes and gives a first approximation to the least-squares imaging problem. This approximation is used as the starting point for the constrained-optimization problem of Eq. 10. The results of this optimization are plotted in Fig. 2 (bottom). The L1 constraint clearly removes more noise, which comes at the expense of some details.

Fig. 2: Synthetic "wave-equation" depth-migration example for the Marmousi dataset. Top: noisy image for SNR = 0 on the data space. Second: image after thresholding (cf. Eq. 7). Third: thresholded image after correction for the "symbol" of the normal operator (cf. Eq. 8). As we can see, the amplitudes are partly restored. Bottom: result after applying the constrained optimization (cf. Eq. 10) with λ = 2.5. The optimization clearly removes more noise while enhancing the sparseness. This does, however, come at the expense of some details, which is to be expected at this high noise level.
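To give a feel for the role of Eqs. (10)-(11), here is a deliberately stripped-down version: if the penalty J is taken to be the L1 norm of the Curvelet coefficients themselves and the coupling through the operators is ignored, the problem min ||x||_1 subject to |x − x̂0|_μ ≤ e_μ separates per coefficient, and its solution simply shrinks every coefficient of x̂0 toward zero by its tolerance e_μ. The actual scheme uses an Augmented-Lagrangian solver with A and W included; the sketch only illustrates the per-coefficient tolerances.

    import numpy as np

    def box_constrained_l1(x0, survived, lam=2.5, tol=1.0):
        """Simplified, separable version of Eqs. (10)-(11): minimize ||x||_1
        subject to |x - x0| <= e componentwise, where coefficients that were
        thresholded away (survived == False) get the looser tolerance lam*tol.
        The solution projects 0 onto each interval [x0 - e, x0 + e], i.e. it
        shrinks every coefficient of x0 toward zero by e."""
        e = np.where(survived, tol, lam * tol)
        return np.sign(x0) * np.maximum(np.abs(x0) - e, 0.0)

    # In this stripped-down version the coefficients that were thresholded away
    # simply stay at zero; in the full problem of Eq. (13), with A and W in the
    # constraint, the looser tolerance lam*tol is what lets them adjust.
    # Hypothetical usage with the previous sketch's output:
    #   x1 = box_constrained_l1(x0, survived=(x0 != 0.0), lam=2.5)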
Conclusions and discussion

The methodology presented is an example of a divide-and-conquer approach: first we obtain a reasonable estimate for the image by thresholding, then we continue by restoring the amplitudes and removing the artifacts. Our approach builds on the premise that one stands a much better chance of solving an inverse problem in the appropriate domain. The combination of migration with the Curvelet transform provides such a domain, in which the energy is collapsed onto a limited number of coefficients that correspond to the coherent features along which the Curvelets align. Since Curvelets are local in space and spatial frequency, thresholding can be used on the image. By using Curvelets to precondition the inverse scattering problem, we arrive at an elegant formulation that serves as a point of departure for our non-linear estimation procedure. Not only did we almost diagonalize the normal operator, turning the migration into an almost orthogonal transform, but we also included non-linear thresholding and global optimization in the formulation. Results so far for the Marmousi dataset are certainly promising but fall somewhat short of being conclusive. We hope to present compelling results at the Meeting that demonstrate the full potential of the presented method to deal with very noisy data (SNR ≤ 0).

Acknowledgments

The authors would like to thank Emmanuel Candès and David Donoho for making their Digital Curvelet Transforms via Unequally-spaced Fourier Transforms available (presented at the ONR Meeting, University of Minnesota, May 2003). We also would like to thank James Rickett for his migration code. This work was in part financially supported by an NSERC Discovery Grant.

References

[1] S. Brandsberg-Dahl and M. de Hoop. Focusing in dip and AVA compensation on scattering-angle/azimuth common image gathers. Geophysics, 68(1):232-254, 2003.
[2] E. J. Candès and L. Demanet. Curvelets and Fourier integral operators. 2002. URL http://www.acm.caltech.edu/~emmanuel/publications.html. To appear in Comptes Rendus de l'Académie des Sciences, Paris, Série I.
[3] E. J. Candès and F. Guo. New multiscale transforms, minimum total variation synthesis: applications to edge-preserving image reconstruction. Signal Processing, pages 1519-1543, 2002. URL http://www.acm.caltech.edu/~emmanuel/publications.html.
[4] D. Donoho. Nonlinear solution of linear inverse problems by wavelet-vaguelette decomposition. Applied and Computational Harmonic Analysis, 2, 1995.
[5] F. J. Herrmann. Multifractional splines: application to seismic imaging. In M. A. Unser, A. Aldroubi and A. F. Laine, editors, Proceedings of SPIE Technical Conference on Wavelets: Applications in Signal and Image Processing X, volume 5207, pages 240-258. SPIE, 2003. URL http://www.eos.ubc.ca/~felix/Preprint/SPIE03DEF.pdf.
[6] F. J. Herrmann and E. Verschuur. Separation of primaries and multiples by non-linear estimation in the curvelet domain. In Expanded Abstracts. EAGE, 2004.
[7] J. Hu, G. T. Schuster, and P. A. Valasek. Poststack migration deconvolution. Geophysics, 66(3):939-952, 2001.
[8] H. Kuhl and M. D. Sacchi. Least-squares wave-equation migration for AVP/AVA inversion. Geophysics, 68(1):262-273, 2003.
[9] S. G. Mallat. A Wavelet Tour of Signal Processing. Academic Press, 1997.
[10] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 1999.
[11] J. E. Rickett. Illumination-based normalization for wave-equation depth migration. Geophysics, 68, 2003. URL http://link.aip.org/link/?GPY/68/1371/1.
[12] J. E. Rickett and P. C. Sava. Offset and angle-domain common image-point gathers for shot-profile migration. Geophysics, 67, 2002. URL http://link.aip.org/link/?GPY/67/883/1.
[13] D. O. Trad. Interpolation and multiple attenuation with migration operators. Geophysics, 68(6):2043-2054, 2003.
[14] C. Vogel. Computational Methods for Inverse Problems. SIAM, 2002.
