UBC Faculty Research and Publications

Seismic imaging and processing with curvelets (2008)


Item Metadata

Download

Media
herrmann07EAGEIMPROC.pdf [10.8 MB]

Full Text

Seismic imaging and processing with curvelets

Felix J. Herrmann, joint work with Deli Wang

Combinations of parsimonious signal representations with nonlinear sparsity-promoting programs hold the key to the next generation of seismic data-processing algorithms, since they allow for formulations that are stable with respect to
- noise
- incomplete data
- moderate phase rotations and amplitude errors

Finding a sparse representation for seismic data and images is complicated because
- wavefronts and reflectors are multiscale and multi-directional
- caustics, faults and pinchouts are present

The curvelet transform

Properties of the curvelet transform:
- multiscale: tiling of the f-k domain into dyadic coronae
- multi-directional: coronae sub-partitioned into angular wedges; the number of angles doubles every other scale
- anisotropic: parabolic scaling principle
- rapid decay in space
- strictly localized in Fourier
- frame with moderate redundancy (8x in 2-D and 24x in 3-D)

Transform                           Underlying assumption
FK                                  plane waves
linear/parabolic Radon transform    linear/parabolic events
wavelet transform                   point-like events (1-D singularities)
curvelet transform                  curve-like events (2-D singularities)

[Figure: an angular wedge in the (k1, k2) frequency plane, of length 2^j and width 2^(j/2).]

Representations for seismic data

[Figure: fine-scale and coarse-scale data represented by 2-D curvelets.]

2-D curvelets
- are of rapid decay in space (x-t) and strictly localized in frequency (f-k)
- are oscillatory in one direction and smooth in the others
- obey the parabolic scaling relation length ≈ width^2

Curvelet tiling and seismic data
- curvelet tiling: the number of angles per scale doubles every other scale
- [Figure: real data decomposed into frequency bands 1-8: the data is multiscale.]
- [Figure: decomposition of a single frequency band (6th scale of the image) into angular wedges: the data is multidirectional.]
- [Figure: wavefront detection on a time-offset gather: a curvelet aligned with a wavefront yields a significant coefficient, a misaligned curvelet a coefficient close to zero.]

Extension to 3-D
- curvelets live in a wedge of the 3-D Cartesian Fourier space [courtesy Demanet '05, Ying '05]
- 3-D curvelets are oscillatory in one direction and smooth in the others

[Figure: decay of the magnitude-sorted coefficients in the Fourier, wavelet and curvelet domains; partial reconstruction from the 1% largest curvelet coefficients attains SNR = 6.0 dB.]

Curvelet sparsity promotion

Forward model
Linear model for the measurements of a function m0:

  y = K m0 + n,

with y the data, K the modeling matrix, m0 the model vector and n the noise. The inversion of K is either ill-posed or underdetermined, so we seek a prior on m.

Key idea
Solve the sparsity-promoting program

  x̃ = arg min_x ‖x‖_1 subject to ‖Ax − y‖_2 ≤ ε,

where the ℓ1 norm enhances sparsity and the constraint controls the data misfit.

"When a traveler reaches a fork in the road, the ℓ1-norm tells him to take either one way or the other, but the ℓ2-norm instructs him to head off into the bushes." (John F. Claerbout and Francis Muir, 1973)

This idea underlies the new field of "compressive sampling" (D. Donoho, E. Candès et al., M. Elad, etc.) and was preceded by work in geophysics (M. Sacchi & T. Ulrych and co-workers, etc.).

Linear, quadratic (LSQR), Gaussian model:

  x̃ = arg min_x ‖x‖_2 subject to ‖Ax − y‖_2 ≤ ε

Nonlinear, Cauchy (sparse) model:

  x̃ = arg min_x ‖x‖_1 subject to ‖Ax − y‖_2 ≤ ε

Problem: the data does not contain point scatterers and is therefore not sparse in the physical domain.

[Figures: time-inline panels comparing the linear (least-squares) and nonlinear (sparsity-promoting) estimates.]
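To make the contrast between the two programs above concrete, here is a minimal, self-contained sketch that is not part of the original material: it recovers a sparse spike train from an underdetermined system, comparing the minimum-ℓ2-norm solution with a basis-pursuit (ℓ1) solution cast as a linear program. The random matrix A and SciPy's linprog are illustrative stand-ins for the curvelet-based operators and solvers used in the talk; all sizes are arbitrary toy values.

```python
# Toy comparison of minimum-l2 and l1 (basis-pursuit) solutions of an
# underdetermined system y = A x0. Illustrative only: A is a random matrix
# standing in for the curvelet-based modeling operators.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k, spikes = 50, 120, 5            # measurements, unknowns, nonzeros
A = rng.standard_normal((n, k))
x0 = np.zeros(k)
x0[rng.choice(k, spikes, replace=False)] = rng.standard_normal(spikes)
y = A @ x0                            # noise-free measurements

# Minimum l2-norm solution: smears the energy over all entries.
x_l2 = np.linalg.pinv(A) @ y

# Basis pursuit: min ||x||_1 s.t. Ax = y, as an LP with x = u - v, u, v >= 0.
c = np.ones(2 * k)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * k))
x_l1 = res.x[:k] - res.x[k:]

print("l2 error:", np.linalg.norm(x_l2 - x0))   # large: energy spread everywhere
print("l1 error:", np.linalg.norm(x_l1 - x0))   # near zero: spikes recovered
```

The ℓ2 solution heads "off into the bushes" by spreading energy over all entries, while the ℓ1 solution takes one branch of the fork and recovers the spikes.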
Our contribution
Model the data as a superposition of little plane waves and compound the modeling operator with curvelet synthesis:

  K → K C^T,  m0 → x0,  m̃ = C^T x̃.

Exploit the parsimoniousness of curvelets on seismic data and images.

Sparsity-promoting program
The problem boils down to solving for x0 from

  y = A x0 + n,

where y is the signal (data), A x0 the curvelet representation of the ideal data, n the noise and A := K C^T:
- exploit sparsity in the curvelet domain as a prior
- find the sparsest set of curvelet coefficients that match the data, i.e., invert an underdetermined system

  P_ε:  x̃ = arg min_x ‖x‖_1 subject to ‖Ax − y‖_2 ≤ ε,  m̃ = C^T x̃,

so that y ≈ K C^T x̃.

Solver

  Initialize: i = 0; x^0 = 0
  Choose: L and ‖A^T y‖_∞ > λ_1 > λ_2 > ...
  while ‖y − A x^i‖_2 > ε do
    for l = 1 to L do
      x^{i+1} = T^s_{λ_i} (x^i + A^T (y − A x^i))
    end for
    i = i + 1
  end while
  f̃ = C^T x^i

Table 1: The cooling method with iterative thresholding, where T^s_λ denotes soft thresholding with threshold λ.

Applications
Problems in seismic processing can be cast into the program P_ε, which is
- stable under noise
- stable under missing data
We obtain a formulation that
- explicitly exploits compression by curvelets
- is stable with respect to noise
- exploits the "invariance" of curvelets under imaging
Applications include
- seismic data regularization
- primary-multiple separation
- seismic amplitude recovery

Seismic data regularization
Joint work with Gilles Hennenfent

Motivation
Irregular sub-sampling acts as incoherent noise: the data are noisy because of the irregular sampling.

Sparsity-promoting inversion
Curvelet Reconstruction with Sparsity-promoting Inversion (CRSI) reformulates the problem: look for the sparsest/most compressible, physical solution, which is the key point of the recovery. (Inspired by the Stable Signal Recovery (SSR) theory of E. Candès, J. Romberg and T. Tao, compressed sensing by D. Donoho, and Fourier Reconstruction with Sparse Inversion (FRSI) by P. Zwartjes.)

With the signal model

  y = R C^H x0 + n,

where y is the acquired signal, R the restriction (picking) operator, C^H x0 the curvelet representation of the ideal data and n the noise, the idealized program reads

  (P0):  x̃ = arg min_x ‖x‖_0 (sparsity) subject to ‖y − R C^H x‖_2 ≤ ε (data misfit),  f̃ = C^H x̃,

which is relaxed and solved in practice as

  P_ε:  x̃ = arg min_x ‖Wx‖_1 subject to ‖Ax − y‖_2 ≤ ε,  f̃ = C^T x̃.

[Figures: original data; the same data with 80% of the traces missing; CRSI recovery with 3-D curvelets.]
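The following is a minimal, runnable sketch of the cooling method of Table 1, applied to a toy CRSI-style problem: a signal that is sparse in a transform domain is recovered from randomly decimated samples. The 1-D discrete cosine transform stands in for the curvelet transform and a random mask for the restriction operator R; these substitutions and all parameter values are illustrative assumptions rather than the authors' implementation.

```python
# Cooling method with iterative soft thresholding (cf. Table 1) on a toy
# CRSI-style problem: recover a transform-domain-sparse signal from randomly
# decimated samples. The DCT stands in for the curvelet transform C and a
# random mask for the restriction operator R.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
N = 256
x_true = np.zeros(N)
x_true[[5, 23, 60]] = [2.0, -1.5, 1.0]          # sparse coefficient vector x0
f = idct(x_true, norm='ortho')                  # ideal, fully sampled data f = C^T x0

mask = rng.random(N) < 0.4                      # keep roughly 40% of the samples
y = mask * f                                    # observed, zero-filled data

C  = lambda u: dct(u, norm='ortho')             # analysis  (curvelet stand-in)
CT = lambda x: idct(x, norm='ortho')            # synthesis C^T
A  = lambda x: mask * CT(x)                     # A   = R C^T
AT = lambda r: C(mask * r)                      # A^T = C R^T

def soft(z, lam):                               # soft thresholding T^s_lambda
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Cooling schedule: ||A^T y||_inf > lambda_1 > lambda_2 > ...
lambdas = np.max(np.abs(AT(y))) * np.logspace(-0.1, -3.0, 30)
L, eps = 5, 1e-3
x = np.zeros(N)
for lam in lambdas:
    for _ in range(L):                          # L inner iterations per threshold
        x = soft(x + AT(y - A(x)), lam)
    if np.linalg.norm(y - A(x)) <= eps:         # stop once the data are fit
        break
f_rec = CT(x)                                   # recovered data
print("relative recovery error:", np.linalg.norm(f_rec - f) / np.linalg.norm(f))
```

The loop only needs A and A^T as black-box operators, so the same structure carries over to matrix-free curvelet and migration operators.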
Primary-multiple separation
Joint work with Eric Verschuur, Deli Wang, Rayan Saab and Ozgur Yilmaz

Motivation
The primary-multiple separation step is crucial in the presence of
- moderate prediction errors
- 3-D complexity and noise
Inadequate separation leads to
- remnant multiple energy
- deterioration of the primary energy
We introduce a transform-based technique that
- is stable
- is insensitive to moderate shifts and phase rotations (move-out errors)
- exploits sparsity and the parameterization of the transformed domain

The problem
Sparse signal model with augmented synthesis and sparsity vectors, where index 1 corresponds to the primaries and index 2 to the multiples. The ℓ2 norm penalizes the outliers, while the ℓ1 norm promotes the outliers, bringing out the primaries.

Sparsity-domain primary-multiple separation
Motivated by recent results on stable signal recovery from overcomplete representations (see e.g. Starck et al., 2004; Elad et al., 2005), the primary-multiple separation problem is formulated in terms of a nonlinear optimization problem. The solution of this problem provides simultaneous estimates for the multiples and primaries given predictions for the multiples. As in stable signal recovery, the method exploits sparsity in a transformed domain for both signal components. In that respect our method differs fundamentally from matched filtering (see e.g. Verschuur and Berkhout, 1997), since it exploits a representation that is sparse, i.e., a transform that leads to a rapid decay of the magnitude-sorted coefficients in the sparsity vectors for the two signal components.

Sparse signal model: Following the ideas of morphological component analysis (MCA, see e.g. Starck et al., 2004), an augmented sparsity synthesis matrix is defined, consisting of an inverse transform for the synthesis of each of the two signal components. Again the data is described as a sparse superposition, now of two sparsity matrices, one for each signal component,

  y = A x0 + n,   (13)

with

  A = [A1 A2]  and  x0 = [x01 x02]^T   (14)

the augmented sparsity synthesis matrix and sparsity vector, respectively. In this formulation, the subscripts 1 and 2 are reserved for primaries and multiples. The above signal model, with the coefficients of x0 sparse, forms the basis of MCA. Even though MCA was initially designed to separate signal components that are sparse in different sparsity representations, we show that this method can be extended to signal components with similar characteristics. Because primaries and multiples are both solutions of the wave equation, we cannot expect to find a generic overcomplete signal representation that separates these two components without providing prior information on the wave arrivals. We argue that these signal components can still be separated as long as there exist reasonable predictions for the signal components. These predictions are used as weights that allow us to recover the two signal components using the same signal representation for each component.

The solution
The weighted ℓ1-norm optimization problem: If the two signal components permit a sparse representation, then the predicted multiples can be used as weights in the sparsity-promoting ℓ1 norm. These weights drive the two signal components apart during the optimization, and x0 can be recovered to reasonable accuracy‡. The w-weighted optimization problem becomes

  P_w:  min_x ‖x‖_{w,1} subject to ‖y − Ax‖_2 ≤ ε,  ŝ1 = A1 x̂1 and ŝ2 = A2 x̂2,  given s̆2 and w(y, s̆2),   (15)

with w = [w1, w2]^T the weighting vectors with strictly positive weights defined in terms of the predicted multiples, A := [C^T, C^T], s̆2 the predicted multiples and s̆1 := s − s̆2 (the total data minus the predicted multiples). The estimates for the primaries and multiples are computed from the sparsity vector that minimizes P_w. During the optimization, the sparsity vector is recovered by minimizing the weighted ℓ1 norm subject to a recovery that is within the tolerance.

‡ For an orthonormal sparsity representation, this recovery can be expected to be within the noise level when the two sparsity vectors x01 and x02 are disjunct, i.e. x_{1,µ} = 0 when x_{2,µ} ≠ 0 or x_{2,µ} = 0 when x_{1,µ} ≠ 0 for µ ∈ M.

The weights
The weighting vectors are based on an a priori prediction for the multiples, obtained by SRME prediction (see e.g. Berkhout and Verschuur, 1997) or by other means. The corresponding prediction for the primaries is obtained through simple subtraction. During the minimization the signal components are driven apart; curvelet compression helps, and the separation acts on the basis of position, scale and direction. The weights are set to

  w1 := max(σ·√(2 log N), C1·|ŭ1|),  w2 := max(σ·√(2 log N), C2·|ŭ2|),  with ŭ1 ≈ C s̆1 and ŭ2 ≈ C s̆2.
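Below is a minimal sketch of the machinery behind P_w, not taken from the authors' code: the augmented synthesis operator A = [C^T, C^T], weight vectors built with the universal-threshold floor σ·√(2 log N) as on the slide, and weighted iterative soft thresholding. The DCT stands in for the curvelet transform C, the input data and predictions are random placeholders, and the constants C1, C2 and the iteration count are arbitrary toy choices.

```python
# Sketch of the weighted-l1 separation machinery of P_w: augmented synthesis
# A = [C^T, C^T], weights with the universal-threshold floor, and weighted
# iterative soft thresholding. DCT replaces the curvelet transform; the data
# and predictions below are random placeholders.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(2)
N, sigma, C1, C2 = 256, 0.01, 1.0, 1.0
C  = lambda u: dct(u, norm='ortho')             # analysis (curvelet stand-in)
CT = lambda x: idct(x, norm='ortho')            # synthesis

y       = rng.standard_normal(N)                # placeholder for the total data s
s2_pred = rng.standard_normal(N)                # placeholder for the SRME-predicted multiples
s1_pred = y - s2_pred                           # predicted primaries by subtraction

floor = sigma * np.sqrt(2.0 * np.log(N))        # universal-threshold floor
w = np.concatenate([np.maximum(floor, C1 * np.abs(C(s1_pred))),   # w1
                    np.maximum(floor, C2 * np.abs(C(s2_pred)))])  # w2

A  = lambda x: CT(x[:N]) + CT(x[N:])            # [C^T  C^T] [x1; x2]
AT = lambda r: np.concatenate([C(r), C(r)])     # adjoint

x, tau = np.zeros(2 * N), 0.5                   # step tau <= 1/||A||^2 = 1/2
for _ in range(200):                            # weighted iterative soft thresholding
    z = x + tau * AT(y - A(x))
    x = np.sign(z) * np.maximum(np.abs(z) - tau * w, 0.0)

s1_hat, s2_hat = CT(x[:N]), CT(x[N:])           # estimated primaries and multiples
print("data misfit:", np.linalg.norm(y - (s1_hat + s2_hat)))
```

Because both components share the same synthesis C^T, it is only the difference between the weight vectors w1 and w2 that decides how the fitted energy is distributed over the two component estimates.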
[Figures, synthetic example: total data; SRME-predicted multiples; SRME-predicted versus curvelet-thresholded primaries; SRME-predicted versus estimated primaries.]

[Figures, real example: total data; SRME-predicted multiples; curvelet-thresholded and curvelet-estimated results; curvelet-estimated primaries versus SRME-predicted primaries.]

Seismic amplitude recovery
Joint work with Chris Stolk and Peyman Moghaddam

Motivation
- Migration generally does not correctly recover the amplitudes.
- Least-squares migration is computationally infeasible.
- Amplitude recovery (e.g. AGC) lacks robustness with respect to noise.
Existing diagonal amplitude-recovery methods
- do not always correct for the order (1 - 2D) of the Hessian [see Symes '07]
- do not invert the scaling robustly
Moreover, these (scaling) methods assume that
- there are no conflicting dips (conormal model)
- the aperture is infinite
- the frequencies are infinitely high
- etc.

Existing scaling methods
Existing methods are based on a diagonal approximation of Ψ:
- illumination-based normalization (Rickett '02)
- amplitude-preserved migration (Plessix & Mulder '04)
- amplitude corrections (Guitton '04)
- amplitude scaling (Symes '07)
We are interested in an 'operator and image adaptive' scaling method which
- estimates the action of Ψ from a reference vector close to the actual image
- assumes a smooth symbol of Ψ in space and angle
- does not require the reflectors to be conormal, i.e., allows for conflicting dips
- stably inverts the diagonal

Our approach
"Forward" model:

  y = K^T K m + ε ≈ A x0 + ε,

with y the migrated data, K the demigration operator and ε the migrated noise. The operator A := C^T Γ is a diagonal approximation of the demigration-migration operator,

  A A^T r ≈ K^T K r,

and estimating the diagonal weighting Γ costs one demigration-migration applied to a reference vector r.

Solution
Solve

  P:  min_x J(x) subject to ‖y − Ax‖_2 ≤ ε,  m̃ = (A^H)† x̃,

with

  J(x) = α‖x‖_1 (sparsity) + β‖Λ^{1/2} (A^H)† x‖_p (continuity).
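To illustrate the diagonal-approximation idea, here is a small sketch that is my own construction rather than the authors' procedure: a diagonal transform-domain scaling is fitted from one application of the normal operator to a reference vector and then inverted in a damped, stabilized way. The DCT stands in for the curvelet transform, the stand-in for K^T K is built to be close to diagonal in that domain with a smooth symbol (mirroring the assumption that curvelets approximately diagonalize the demigration-migration operator), and all constants are arbitrary toy values; the actual method additionally enforces smoothness of the symbol over position, scale and angle and embeds the inverted scaling in the program P above.

```python
# Sketch (my construction) of the diagonal approximation: estimate a diagonal
# transform-domain scaling Gamma from one application of the normal operator
# to a reference vector, then invert it in a stabilized way. DCT stands in for
# the curvelet transform; the "normal operator" below is a toy stand-in.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(3)
N = 256
C  = lambda u: dct(u, norm='ortho')
CT = lambda x: idct(x, norm='ortho')

g   = 0.3 + 1.7 * np.linspace(0.0, 1.0, N) ** 2        # smooth "symbol" of the normal operator
psi = lambda m: CT(g * C(m)) + 0.05 * np.roll(m, 3)    # stand-in for K^T K (nearly diagonal in DCT)

m_true = rng.standard_normal(N)                        # unknown reflectivity (toy)
m_ref  = m_true + 0.1 * rng.standard_normal(N)         # reference vector close to the actual image
b = psi(m_ref)                                         # the one extra demigration-migration

# Damped per-coefficient fit of the diagonal: Gamma * (C m_ref) ~ C b.
u, v, damp = C(m_ref), C(b), 1e-3
gamma = (u * v + damp) / (u * u + damp)                # the real method also smooths the symbol
                                                       # over position, scale and angle

y = psi(m_true)                                        # "migrated data" y = K^T K m
eta = 1e-2                                             # damping for the diagonal inversion
m_corr = CT(C(y) * gamma / (gamma ** 2 + eta))         # stabilized inversion of the scaling
print("relative error before scaling:", np.linalg.norm(y - m_true) / np.linalg.norm(m_true))
print("relative error after scaling: ", np.linalg.norm(m_corr - m_true) / np.linalg.norm(m_true))
```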
Example
SEG AA' data:
- "broad-band" half-integrated wavelet [5-60 Hz]
- 324 shots, 176 receivers, shot spacing of 48 m
- 5 s of data
Modeling operator:
- reverse-time migration with optimal checkpointing (Symes '07)
- 8000 time steps
- modeling takes 64 and migration 294 minutes on 68 CPUs
- the scaling requires 1 extra migration-demigration

[Figures: migrated data versus amplitude-corrected and denoised migrated data, for noise-free and noisy (3 dB) input; data modeled from the migrated image versus data modeled from the amplitude-corrected and denoised migrated image; nonlinear data.]

Conclusions
The combination of the parsimonious curvelet transform with nonlinear sparsity- and continuity-promoting programs allowed us to
- recover seismic data from large percentages of missing traces
- separate primaries and multiples
- recover migration amplitudes
This success is due to the curvelets' ability to
- detect wavefronts (multi-dimensional geometry)
- differentiate with respect to position, angle(s) and scale
- diagonalize the demigration-migration operator
Because of their parsimoniousness on seismic data and images, curvelets open new perspectives on seismic processing.

Acknowledgments
The authors of CurveLab (Demanet, Ying, Candes, Donoho) and William Symes for the reverse-time migration code. These results were created with Madagascar, developed by Sergey Fomel. This work was in part financially supported by the Natural Sciences and Engineering Research Council of Canada Discovery Grant (22R81254) and the Collaborative Research and Development Grant DNOISE (334810-05) of F.J.H. This research was carried out as part of the SINBAD project with support, secured through ITF (the Industry Technology Facilitator), from the following organizations: BG Group, BP, Chevron, ExxonMobil and Shell.
