Migration preconditioning with Curvelets

Peyman P. Moghaddam* and Felix J. Herrmann, University of British Columbia, Canada

Summary

In this paper, the ability of the Curvelet transform to precondition the migration and normal operators is investigated. These operators belong to the classes of Fourier integral operators and pseudo-differential operators, respectively. The effect of this preconditioner is shown in terms of improvements in sparsity, convergence rate, the number of iterations of the Krylov-subspace solver, and the clustering of singular (eigen) values. The migration operator employed in this work is the common-offset Kirchhoff-Born migration.

Introduction

The Curvelet transform, introduced by Candès [2002], has so far primarily been utilized for image processing and medical imaging. The ability of this non-separable Wavelet transform to preserve edges in a noisy image has proven superior to that of the separable 2-D Wavelet transform. Curvelets are multi-scale, multi-directional, anisotropic, local both in space and in the Fourier domain, and almost orthogonal (i.e., they form tight frames). Candès and Demanet [2003] explored another interesting property of Curvelets, which makes them quite useful for exploration seismology: they proved that a Curvelet remains Curvelet-like (i.e., similar to another Curvelet) under the action of Fourier Integral Operators (FIOs). This property may allow us to introduce a fast algorithm for the migration operator, which belongs to the class of FIOs.

To investigate this property numerically, we precondition the migration operator with the Curvelet transform and subsequently analyze the properties of the preconditioned and original operators. We also investigate the preconditioning property of the Curvelet transform for the normal operator (i.e., the migration/demigration operator), which is mathematically a pseudo-differential operator. For both the migration and the normal operator, we compare the preconditioned operator against the operator itself in terms of (i) sparsity, (ii) the number of iterations required by a conjugate-gradient based solver, and (iii) the convergence rate and the clustering of singular values.

A common-offset Kirchhoff-Born migration/scattering operator with a constant velocity background is utilized for this investigation.

Problem Formulation

In general form [Rickett, 2003], the linear forward scattering problem can be written as

    Km = d,    (1)

where d is the data, K the scattering operator, and m the model parameters. The solution to equation (1) is of the form

    \hat{m}_{\ell_2} = (K^*K)^{-1} K^* d,    (2)

where K^* is the adjoint (or Hermitian) operator of K. Both K and K^* are Fourier Integral Operators (FIOs) and represent scattering and migration, respectively. The product K^*K is the normal/Hessian/Gram operator and belongs to the class of operators known as pseudo-differential operators. The computationally expensive part of equation (2) is inverting the normal operator. This is practically impossible (except when an explicit closed-form expression for the inverse of the normal operator exists), so the solution of equation (1) is obtained through a limited number of iterations of a Krylov-subspace method [Saad, 2003].
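Concretely, these operators are never formed as explicit matrices but are available only through matrix-vector products. The sketch below sets up this matrix-free framing in SciPy; the random sparse matrix standing in for the Kirchhoff-Born kernel, as well as the names and sizes, are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import LinearOperator

rng = np.random.default_rng(0)
n_model, n_data = 512, 1024   # illustrative model/data sizes (assumption)

# Toy stand-in for the Kirchhoff-Born scattering kernel; a real code would
# apply the kernel on the fly instead of storing any matrix.
K_mat = sparse_random(n_data, n_model, density=0.05, random_state=0)

def apply_K(m):      # forward scattering (demigration), eq. (1)
    return K_mat @ m

def apply_Kadj(d):   # adjoint of K: migration
    return K_mat.T @ d

# K maps model space to data space; the adjoint K* is exposed via rmatvec
# so that Krylov-subspace solvers can use both actions.
K = LinearOperator((n_data, n_model), matvec=apply_K, rmatvec=apply_Kadj)

# Normal operator psi = K*K of eq. (2): square and symmetric positive
# (semi-)definite, again available only through its action on a vector.
psi = LinearOperator((n_model, n_model),
                     matvec=lambda m: apply_Kadj(apply_K(m)))
```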
The success of Krylov-subspace methods depends on two main characteristics of the operator: first, its sparsity and, second, the clustering of its singular values. Sparsity refers to the number of nonzero elements in the matrix representing the operator: the sparser the operator, the fewer calculations are needed to compute matrix-vector products, and hence the shorter the computing time and the smaller the memory requirements. Clustering of singular values means that almost all of the singular values concentrate at a few locations on the real axis, far away from zero. The denser the clouds of singular values (i.e., the more clustered they are), the more progress each iteration makes toward the exact solution (i.e., the faster the convergence).

Rickett [2003] proposed using a preconditioning matrix for K, which results in faster convergence and more robust recovery of the model parameters. This raises the question: what is a good choice for the preconditioning matrix (operator)? Candès and Demanet [2003] showed that a Curvelet remains Curvelet-like under a FIO. Since the migration operator is a FIO, we introduce the following preconditioned system of equations:

    CKC^{-1}Cm = Cd \;\Longrightarrow\; \tilde{K}\tilde{m} = \tilde{d},    (3)

in which K̃ = CKC^{-1} is the preconditioned K, m̃ = Cm the Curvelet transform of m, and d̃ = Cd the Curvelet transform of d. Throughout this paper, C stands for the Curvelet transform, C^{-1} for the inverse Curvelet transform, and C^T for the adjoint (or transpose) of the transform.

Our aim is now to show that K̃ improves on K in its main matrix properties: we compare the sparsity and the clustering of the singular values of the two operators. In addition, we study the number of iterations that Krylov-subspace methods require to solve the preconditioned system (equation (3)) compared to the original system (equation (1)).

In some applications, instead of solving equation (1) with a conjugate-gradient based method, an easy-to-invert approximation of the normal operator K^*K is plugged into equation (2). For example, Claerbout [1994] used the diagonal elements of the normal operator applied to the reference model and replaced the action of the normal operator by this diagonal weighting. Preconditioning of the normal operator will also be useful here if the preconditioned normal operator can be better diagonalized (i.e., is sparser). We applied this property of preconditioning to replace the preconditioned normal operator with its approximation [see also other contributions by the authors to the Proceedings of this Conference].

In this work, we also investigate the effect of Curvelet-transform preconditioning on the normal operator. In this regard, we define a new pair of operators, the normal operator and the weighted normal operator:

    \psi = K^*K \;\Longrightarrow\; \tilde{\psi} = CK^*KC^{-1},    (4)

in which ψ̃ stands for the weighted ψ.

Quantitatively, we define the sparsity of a matrix as the number of zero elements divided by the total number of elements, expressed as a percentage (so that a larger value means a sparser matrix, consistent with equation (5) and Table 1). Since K and K̃ (and obviously ψ and ψ̃) are extremely large matrices, defined only in the form of operators (matrix-vector products), measuring the exact sparsity is computationally prohibitive. Nevertheless, sampling a few columns of these operators yields an approximation of the sparsity. The strategy is as follows: to estimate the sparsity of a matrix A of size M × N, we repeatedly sample a random column of A by multiplying A with a vector e_i, where i is drawn at random from {1, 2, ..., N} and e_i is an all-zero vector except for a single one in its i-th location. An approximation of the sparsity is then obtained from

    \hat{s}_A = \sum_{i=1}^{T} \frac{M - Nz_i}{TM} \times 100\ \text{(percent)},    (5)

where T is the number of realizations and Nz_i the number of nonzero elements in the i-th sampled column.
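A minimal sketch of this column-sampling estimator, reusing the matrix-free operators from the previous sketch (the zero threshold tol is an assumption, needed because matrix-free products return floating-point values rather than exact zeros):

```python
import numpy as np

def estimate_sparsity(op, T=20, tol=1e-12, seed=1):
    """Approximate sparsity of an M-by-N operator per eq. (5): probe T
    randomly chosen columns with unit vectors e_i and count zero entries."""
    rng = np.random.default_rng(seed)
    M, N = op.shape
    s_hat = 0.0
    for _ in range(T):
        e = np.zeros(N)
        e[rng.integers(N)] = 1.0                   # random unit vector e_i
        col = op @ e                               # the i-th column of op
        Nz = np.count_nonzero(np.abs(col) > tol)   # Nz_i of eq. (5)
        s_hat += (M - Nz) / (T * M)
    return 100.0 * s_hat                           # percentage of zero entries

# Comparing, e.g., estimate_sparsity(K) against the same estimate for a
# curvelet-preconditioned operator K~ = C K C^{-1} (not constructed here)
# would reproduce the kind of comparison reported in Table 1.
```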
In addition to sparsity, clustering is an important property of the matrix structure. To measure how the singular values of a matrix A are clustered, we need to compute them. The method of Lanczos bidiagonalization [Golub, 1996] can be utilized to decompose a non-square matrix A, after k iterations, into the form

    A = U_{k+1} B_k V_k^T,    (6)

where U_{k+1} and V_k are orthogonal matrices, k is the number of iterations in the Lanczos method [Golub, 1996], and B_k is a lower bidiagonal matrix:

    B_k = \begin{bmatrix}
    \alpha_1 & 0        & \cdots & 0 \\
    \beta_2  & \alpha_2 & \cdots & 0 \\
    0        & \beta_3  & \ddots & \vdots \\
    0        & \cdots   & \ddots & \alpha_k \\
    0        & 0        & \cdots & \beta_{k+1}
    \end{bmatrix}.    (7)

It has been shown [Golub, 1996] that the singular values of B_k approximate the singular values of A. A disadvantage of the plain Lanczos method is that the estimated singular values need not be the most dominant singular values of A; in other words, they are chosen essentially at random among all singular values. This makes assessing the clustering difficult, since we are looking for the distances between adjacent singular values.

Larsen [1998] proposed a robust and efficient Lanczos bidiagonalization with partial reorthogonalization (LBDPRO) for estimating the most dominant adjacent singular values of a matrix A. Since the migration operator K and the preconditioned migration operator K̃ are both non-square, we use the LBDPRO method to estimate their singular values. Larsen [1998] also proposed the method of Lanczos tridiagonalization with partial reorthogonalization (LTDPRO) for the case where A is square and symmetric positive definite. In this method, A is decomposed into the matrix product

    A = Q_k T_k Q_k^T,    (8)

in which Q_k is an orthogonal matrix and T_k is a tridiagonal matrix:

    T_k = \begin{bmatrix}
    \alpha_1 & \beta_1  & 0       & \cdots & 0 \\
    \beta_1  & \alpha_2 & \beta_2 & \cdots & 0 \\
             & \ddots   & \ddots  & \ddots &   \\
    0        & \cdots   & \ddots  & \ddots & \beta_{k-1} \\
    0        & \cdots   &         & \beta_{k-1} & \alpha_k
    \end{bmatrix}.    (9)

The eigenvalues of T_k are Ritz values of the matrix A and are good approximations to its eigenvalues. The normal operator ψ of equation (4) is symmetric positive definite, and since the Curvelet transform is a tight frame with unit energy (i.e., \|Cx\|_2 = \|x\|_2 for all x ∈ R^n), we can reliably assume that it is close to orthonormal, so that C ≈ C^{-T} and C^T ≈ C^{-1}. Under this assumption, the preconditioned normal operator ψ̃ of equation (4) is symmetric positive definite too. Therefore, we can utilize the LTDPRO method to estimate the eigenvalues of the normal operator and the preconditioned normal operator.

A disadvantage of partial reorthogonalization (PRO) methods is that they need to reorthogonalize the matrices U_k and V_k (for LBDPRO) or Q_k (for LTDPRO) at each iteration k. Since these matrices are quite large, this operation consumes a lot of memory and limits our ability to estimate more than a few tens of singular values. Nevertheless, since these values belong to the most dominant singular values, they provide a good measure for comparison.
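The paper uses Larsen's PROPACK implementations of LBDPRO and LTDPRO; as a rough stand-in, SciPy's Lanczos-type routines svds (for the non-square K) and eigsh (for the symmetric positive definite ψ) recover dominant singular and eigen values in the same spirit. The sketch below reuses the operators K and psi from the earlier sketch; the choice of k = 30 values is an assumption for illustration, and these routines are substitutes, not LBDPRO/LTDPRO themselves.

```python
import numpy as np
from scipy.sparse.linalg import svds, eigsh

k = 30   # number of dominant values to estimate (assumption)

# Dominant singular values of the non-square migration operator K, via
# Lanczos-type bidiagonalization (in the spirit of LBDPRO).
_, s_K, _ = svds(K, k=k)   # uses the matvec and rmatvec of the LinearOperator

# Dominant eigenvalues of the symmetric positive definite normal operator,
# via Lanczos tridiagonalization (in the spirit of LTDPRO).
ev_psi = eigsh(psi, k=k, which='LM', return_eigenvectors=False)

# Clustering can then be judged from the gaps between adjacent values,
# e.g. on a normalized logarithmic axis as in Figures 1 and 2.
print(np.log10(np.sort(s_K)[::-1] / s_K.max()))
```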
We also investigated the difference in convergence rate between the preconditioned and original operators, and we show that the residual error decays faster for the preconditioned operator than for the operator itself. In this regard, we solve two pairs of systems of equations: for the migration and the preconditioned migration,

    Kx = y \;\overset{\text{preconditioned}}{\Longrightarrow}\; \tilde{K}\tilde{x} = \tilde{y},    (10)

and for the normal and the preconditioned normal operator,

    \psi x = y \;\overset{\text{preconditioned}}{\Longrightarrow}\; \tilde{\psi}\tilde{x} = \tilde{y}.    (11)

Since K and K̃ are not square, we use the GMRES method to solve the two systems in equation (10). Since ψ and ψ̃ are symmetric positive definite, we use the Conjugate-Gradient method for equation (11). We plot the residual error of each system versus the iteration number and compare them.

Results

Table 1 shows the sparsity measurements and the comparisons between K and K̃, and between ψ and ψ̃, for the common-offset Kirchhoff-Born migration operator with constant background velocity. The table shows that preconditioning significantly improves the sparsity of both the migration and the normal operator. This verifies the claim of Candès and Demanet [2003] that Curvelets remain Curvelet-like under the action of the migration and normal operators. The second row of the table gives the number of iterations needed to reduce the residual error below a specified tolerance.

Figure 1 shows the singular-value clustering for K and K̃. As can be seen in this figure, the singular values of K̃ concentrate in and around the abscissa −5, while the singular values of K tend to concentrate around zero. Figure 2 shows the corresponding eigenvalue clustering for ψ and ψ̃, with similar behavior. These figures show that the singular (eigen) values are more clustered for the preconditioned operators than for the unpreconditioned operators.

To measure the convergence rate, we compute the residual error, defined as

    r_k = \frac{\|y - Ax_k\|_2}{\|y\|_2} \;\overset{\text{preconditioned}}{\Longrightarrow}\; \tilde{r}_k = \frac{\|\tilde{y} - \tilde{A}\tilde{x}_k\|_2}{\|\tilde{y}\|_2},    (12)

where A stands for K in equation (10) and for ψ in equation (11). For both equations (10) and (11), we use this residual error to compare the original and preconditioned systems. Figure 3 shows the residual error versus iteration number for the GMRES method used to solve the systems in equation (10); Figure 4 shows the same for the Conjugate-Gradient method applied to equation (11). As can clearly be seen, preconditioning improves the convergence rate.

    Operator       K       K̃      ψ       ψ̃
    Sparsity (%)   52.5    78.2    62.3    91.7
    Iterations     >100    44      58      23

Table 1: Sparsity and number of iterations for the solution of equations (10) and (11). The first row is the sparsity as defined in equation (5); the second row is the number of iterations needed by the GMRES method (for K and K̃) and by the CG method (for ψ and ψ̃) to bring the residual error below 10^{-3}.
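As an illustration of the residual bookkeeping of equation (12), the sketch below tracks the relative residual of a Conjugate-Gradient solve on the toy normal operator from the earlier sketches, mirroring the comparison of Figure 4 only. One hedge is worth making explicit: SciPy's gmres accepts only square systems, so reproducing the non-square migration comparison of Figure 3 would require a least-squares Krylov solver such as lsqr instead.

```python
import numpy as np
from scipy.sparse.linalg import cg

# Right-hand side of the normal equations, b = K* y, for a toy data vector y
# (rng, n_data, apply_Kadj and psi come from the earlier sketches).
y = rng.standard_normal(n_data)
b = apply_Kadj(y)

residuals = []
def track(xk):
    # Relative residual r_k = |b - psi x_k|_2 / |b|_2, as in eq. (12).
    residuals.append(np.linalg.norm(b - psi @ xk) / np.linalg.norm(b))

x, info = cg(psi, b, maxiter=100, callback=track)

# Running the same loop with a curvelet-preconditioned operator
# psi~ = C psi C^{-1} (and b~ = C b) would give the faster-decaying curve
# of Figure 4; plotting `residuals` versus iteration gives the other curve.
```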
Conclusions

In this paper, the effect of the Curvelet transform as a preconditioner for the migration and normal operators was investigated. This preconditioning can:

• improve the sparsity of the operator: the sparsity of both the migration and the normal operator increases after preconditioning;

• improve the clustering of the singular (eigen) values of the operator: for the preconditioned operators, the singular values shift away from zero and tend to concentrate around a single point, in contrast to those of the original operators;

• improve the convergence rate: the convergence rate for the preconditioned normal operator is surprisingly faster than for the normal operator itself.

We demonstrated that, by employing the Curvelet transform as a preconditioner, we were able to enhance several important matrix characteristics of the migration and normal operators. These results can be exploited to develop faster and more robust solutions for existing migration operators.

Acknowledgements

The authors thank Emmanuel Candès for making the Curvelet transform code available and Mauricio Sacchi for his migration code. We are also grateful to Rasmus Munk Larsen and Chen Greif for making the PROPACK software available to us. This work was in part financially supported by an NSERC Discovery Grant.

Fig. 1: Singular-value trends for the migration and preconditioned migration operators (normalized singular values, logarithmic scale).

Fig. 2: Singular-value trends for the normal and preconditioned normal operators (normalized singular values, logarithmic scale).

Fig. 3: Residual error of the GMRES method versus the number of iterations for the solution of equation (10) (normalized residual error, logarithmic scale).

Fig. 4: Residual error of the Conjugate-Gradient method versus the number of iterations for the solution of equation (11) (normalized residual error, logarithmic scale).

References

[1] E. Candès and F. Guo, New multiscale transforms, minimum total variation synthesis: Application to edge-preserving image reconstruction, Signal Processing, pages 1519-1543, 2002.

[2] E. Candès and L. Demanet, Curvelets and Fourier Integral Operators, C. R. Acad. Sci. Paris, Ser. I, 336, 395-398, 2003.

[3] J. E. Rickett, Illumination-based normalization for wave-equation depth migration, Geophysics, 68(4), 1371-1379, 2003.

[4] Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, 2003.

[5] J. Claerbout and D. Nichols, Spectral preconditioning, Stanford Exploration Project Report 82, 183-186, 1994.

[6] G. H. Golub and C. F. Van Loan, Matrix Computations, third edition, The Johns Hopkins University Press, 1996.

[7] R. M. Larsen, Lanczos bidiagonalization with partial reorthogonalization, Technical Report DAIMI PB-357, Department of Computer Science, Aarhus University, 1998.
