UBC Theses and Dissertations


Digital Processing Algorithms for Bistatic Synthetic Aperture Radar Data (Neo, Yew Lam, 2007)


Digital Processing Algorithms for Bistatic Synthetic Aperture Radar Data

by

Yew Lam Neo

B.Eng., National University of Singapore, Singapore, 1994

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (Electrical and Computer Engineering)

The University of British Columbia

May 2007

© Yew Lam Neo, 2007

Abstract

The motivations for this thesis are the investigation of bistatic Synthetic Aperture Radar (SAR) image formation and the development of bistatic SAR algorithms that accommodate various bistatic SAR geometries. Traditional monostatic SAR algorithms based on frequency-domain methods assume a single square root (hyperbolic) range equation. In bistatic SAR data, the range history of a target has a Double Square Root (DSR) in the range equation, as the transmitter and receiver can follow different motion trajectories. Thus, monostatic algorithms are not able to focus bistatic SAR data.

The key step in many frequency-based algorithms is to find an analytical solution for the spectrum of a reference point target. No simple analytical solution exists for the bistatic case because of the DSR in the range equation. Several algorithms have been developed to overcome this difficulty. These algorithms are reviewed and analyzed in this thesis to compare their processing accuracies and the types of operations they require.

A solution for the two-dimensional point target spectrum, based on the reversion of a power series, is presented in this thesis for the general bistatic case. The accuracy of this new point target spectrum is compared with that of existing analytical point target spectra. Using this spectrum result, a bistatic Range Doppler Algorithm (RDA) is developed to handle the azimuth-invariant bistatic case. In addition, the algorithm is used to focus real bistatic data acquired with two X-band SAR systems from Forschungsgesellschaft für Angewandte Naturwissenschaften (FGAN), namely the Airborne Experimental Radar II (AER-II) and the Phased Array Multifunctional Imaging Radar (PAMIR).

To handle azimuth-variant cases, the Non-Linear Chirp Scaling (NLCS) algorithm is used. The original NLCS algorithm was developed to focus only short-wavelength bistatic cases with one platform moving and imaging at broadside and the other stationary. It is found that the NLCS can cope with the general bistatic case, since it is able to handle range-variant and azimuth-variant signals. To exploit this processing capability, the algorithm is extended to accommodate squinted and long-wavelength bistatic cases where the two platforms have dissimilar velocities and slightly non-parallel flight paths.

Contents

Abstract
Contents
List of Tables
List of Figures
List of Acronyms
List of Symbols
Acknowledgements
Dedication

1 Introduction
  1.1 Background
    1.1.1 Bistatic Configuration
    1.1.2 The Two-Dimensional Signal
    1.1.3 Range and Range Resolution
    1.1.4 Azimuth and Azimuth Resolution
    1.1.5 Processing Algorithms
  1.2 Scope and Thesis Objectives
  1.3 Thesis Organisation

2 Properties of Bistatic SAR Data and Imagery
  2.1 Introduction
  2.2 Bistatic SAR Geometry
    2.2.1 Bistatic SAR Signal Model
    2.2.2 Demodulated Signal
  2.3 Pulse Compression
    2.3.1 Frequency Domain Matched Filter
    2.3.2 Principle of Stationary Phase
    2.3.3 Compression of a Linear FM Signal
  2.4 Impulse Response
    2.4.1 Quality Measurement for an Impulse Response Function
    2.4.2 Bistatic Resolution
    2.4.3 Range Resolution
    2.4.4 Doppler and Lateral Resolution
    2.4.5 Cross Range Resolution

3 Bistatic SAR Processing Algorithms
  3.1 Introduction
  3.2 Time Domain Matched Filtering Algorithms
    3.2.1 Time Domain Correlation Algorithm
    3.2.2 Back-Projection Algorithm
  3.3 Frequency Domain Algorithms
  3.4 Numerical Methods
    3.4.1 Omega-K Algorithm
    3.4.2 Numerical Method of Handling DSR
  3.5 Analytical Point Target Spectrum
    3.5.1 Point Target Spectrum - Loffeld Bistatic Formula
  3.6 Preprocessing Techniques - Converting Bistatic Data to Monostatic Data
    3.6.1 Dip MoveOut Algorithm
  3.7 The Non-Linear Chirp Scaling Algorithm
    3.7.1 Prelude to the NLCS Algorithm
    3.7.2 Limitations of the NLCS Algorithm

4 A New Point Target Spectrum
  4.1 Introduction
  4.2 Derivation of the Signal Spectrum
  4.3 Verification of the Spectrum Result
  4.4 The Link Between the Bistatic Spectra
    4.4.1 Analytical Development
    4.4.2 Accuracy and Limitations
    4.4.3 Bistatic Configurations
  4.5 Simulation - Part 1
    4.5.1 Simulation Parameters
    4.5.2 Simulation Results
    4.5.3 Discussion
  4.6 Bistatic Deformation Term
    4.6.1 Alternate Derivation of Rocca's Smile Operator
    4.6.2 Geometrical Proof for the LBF
  4.7 Simulation - Part 2
    4.7.1 Simulation Parameters
    4.7.2 Simulation Results
  4.8 Conclusions

5 Bistatic Range Doppler Algorithm
  5.1 Introduction
  5.2 Bistatic Range Doppler Algorithm
    5.2.1 Analytical Development
    5.2.2 Compression Example
    5.2.3 Application to the CSA
  5.3 Bistatic Simulation Example
    5.3.1 Simulation Parameters
    5.3.2 Simulation Results
    5.3.3 Implementation Issues
  5.4 Focusing a Real Bistatic Data Set
  5.5 Approximate Bistatic Range Doppler Algorithm
  5.6 Conclusions

6 NLCS Algorithm for the Parallel and Slightly Non-Parallel Cases
  6.1 Introduction
  6.2 Extended NLCS
    6.2.1 LRCMC and Linear Phase Removal
    6.2.2 Range Curvature Correction
    6.2.3 FM Rate Equalization
    6.2.4 Perturbation Coefficient for Parallel Case
    6.2.5 Azimuth Compression
    6.2.6 Secondary Range Compression
  6.3 Image Registration
    6.3.1 Registration Stages
    6.3.2 Registration of Target between Ground Plane and Image Plane
  6.4 Limitations of the Algorithm
    6.4.1 The Effects of Applying Perturbation before Residual RCMC
    6.4.2 Invariance Region Analysis - LRCM
    6.4.3 Invariance Region Analysis - Residual RCM
    6.4.4 Invariance Region Analysis - QPE
    6.4.5 Invariance Region Analysis - Higher-Order Phase Errors
  6.5 Simulation Example
    6.5.1 Simulation Parameters
    6.5.2 Analysis of Invariance Region and Limitations
    6.5.3 Simulation Results

7 NLCS Algorithm for the Stationary Receiver Case
  7.1 Introduction
  7.2 Improved NLCS for the Stationary Receiver Case
    7.2.1 SRC and RCMC
    7.2.2 FM Rate Equalization
    7.2.3 Perturbation Coefficient for the Stationary Receiver Case
    7.2.4 Azimuth Compression
  7.3 Registration to Ground Plane
  7.4 Limitations of the Algorithm
    7.4.1 The Effects of Applying Perturbation before Residual RCMC
    7.4.2 Invariance Region Analysis - SRC
    7.4.3 Invariance Region Analysis - Cubic Phase Error
    7.4.4 Invariance Region Analysis - Residual RCM
    7.4.5 Invariance Region Analysis - Quartic Phase Errors
  7.5 Simulation Example
    7.5.1 Simulation Parameters
    7.5.2 Analysis of Invariance Region and Limitations
    7.5.3 Simulation Results

8 Conclusions
  8.1 Summary
  8.2 Contributions of this Dissertation
  8.3 Further Work

A Resolution
  A.1 Vector Gradient of Bistatic Range
  A.2 Vector Gradient of Bistatic Doppler

B Loffeld's Bistatic Formulation
  B.1 A Geometric Result on Two Quadratic Functions
  B.2 Loffeld's Bistatic Formulation

C Reversion of Series Formulation
  C.1 Reversion of Series Formula

D Bistatic SAR Image Reconstruction Using the Omega-K Algorithm

E Proof for DMO

Bibliography

List of Tables

2.1 Broadening factors for various windowing functions
4.1 Simulation parameters for verification of point target spectrum
4.2 Simulation parameters for experiments to compare LBF and TSPP
4.3 Simulation parameters
5.1 Simulation parameters for an azimuth-invariant case
5.2 Point target quality measurements
5.3 Geometry and flight parameters for bistatic SAR system
6.1 Simulation parameters for a slightly non-parallel flight geometry
6.2 Azimuth impulse response for simulated non-parallel case
7.1 Azimuth impulse response for simulation on stationary case
7.2 Simulation results for point target azimuth impulse response

List of Figures

1.1 Basic SAR imaging modes
2.1 Imaging geometry of bistatic SAR
2.2 Practical implementation of bistatic imaging
2.3 A general bistatic configuration
2.4 Matched filtering of a linear FM signal
2.5 Impulse responses of broadside imaged targets
2.6 Defining the bistatic cross range resolution
3.1 Block diagram of BPA
3.2 Block diagram of ωKA
3.3 Signal model to derive the ωKA
3.4 Bistatic system geometry for computing the DMO operator
3.5 Original NLCS algorithm for the monostatic configuration
3.6 Phase coupling between range and azimuth
4.1 Point target spectrum and image before and after the shear operation
4.2 Point target focus using the new two-dimensional point target spectrum
4.3 Case I: Focusing using LBF
4.4 Case I: Focusing using TSPP (expanded up to second order)
4.5 Case I: Focusing using TSPP (expanded up to third order)
4.6 Case II: Focusing using LBF
4.7 Case II: Focusing using TSPP (expanded up to second order)
4.8 Case II: Focusing using TSPP (expanded up to third order)
4.9 Case III: Focusing using LBF
4.10 Case III: Focusing using TSPP (expanded up to sixth order)
4.11 Case III: Focusing using the MSR directly
4.12 Comparison of the solutions to stationary phase
4.13 Bistatic geometry for the constant offset case
4.14 Illustration of squint-dependent delay
4.15 Case IV: Focusing using DMO
4.16 Case IV: Focusing using LBF
4.17 Case IV: Focusing using MSR
4.18 Case V: Focusing using DMO
4.19 Case V: Focusing using LBF
4.20 Case V: Focusing using MSR
4.21 Case VI: Focusing using DMO
4.22 Case VI: Focusing using LBF
4.23 Case VI: Focusing using MSR
4.24 Case VII: Focusing using DMO
4.25 Case VII: Focusing using LBF
4.26 Case VII: Focusing using MSR
5.1 Functional block diagram of bistatic RDA
5.2 How the target trajectories are changed by SRC and RCMC
5.3 Geometry of bistatic simulation example
5.4 Point targets focused using the bistatic RDA
5.5 Measurement of point target focus for point target A
5.6 Measurement of point target focus for point target D
5.7 A simple illustration of shearing in the focused map
5.8 The bistatic configuration used in real bistatic data capture
5.9 A bistatic SAR image of Oberndorf (Lech), Germany
5.10 Functional block diagram of approximate RDA
6.1 Functional block diagram of extended NLCS algorithm (without SRC)
6.2 Illustration of LRCMC in NLCS
6.3 Illustration of the effects of perturbation in parallel case
6.4 Impulse response using matched filter with second-order phase
6.5 Impulse response using matched filter with third-order phase
6.6 Illustration of the effects of a SRC filter on range IRW
6.7 Illustration of effects of range varying perturbation function
6.8 Flight geometry for a slightly non-parallel flight path
6.9 Focused map for the simulated non-parallel case
6.10 Point targets focused using NLCS and registered to ground plane
6.11 Impulse response of corner Target 5
6.12 Impulse response of corner Target 21
7.1 Functional block diagram of NLCS algorithm (stationary receiver case)
7.2 Geometry to illustrate FM rate equalization for stationary case
7.3 Illustration of the effects of perturbation for the stationary receiver case
7.4 Flight geometry for moving transmitter with stationary receiver
7.5 Focused map for the stationary receiver case
7.6 Focused map registered to ground plane
7.7 Impulse response of corner point target 1
7.8 Impulse response of corner point target 21
D.1 Bistatic geometry for azimuth-invariant case
E.1 Bistatic system geometry to find the relation between θ, t_b and t_m

List of Acronyms

AER    Airborne Experimental Radar.
BPA    Back-Projection Algorithm.
CAT    Computer Aided Tomography.
CSA    Chirp Scaling Algorithm.
DEM    Digital Elevation Model.
DMO    Dip and Move Out.
DSR    Double Square Root.
FGAN   Forschungsgesellschaft für Angewandte Naturwissenschaften (German Research Establishment for Applied Natural Sciences).
FM     Frequency Modulated (or Modulation).
FT     Fourier Transform.
GPS    Global Positioning System.
IFT    Inverse Fourier Transform.
INS    Inertial Navigation System.
IRW    Impulse Response Width (resolution).
ISLR   Integrated Side Lobe Ratio.
LBF    Loffeld's Bistatic Formulation.
LHS    Left Hand Side.
LRCM   Linear Range Cell Migration.
LRCMC  Linear Range Cell Migration Correction.
MSR    Method of Series Reversion.
NLCS   Non-Linear Chirp Scaling.
PAMIR  Phased Array Multifunctional Imaging Radar.
POSP   Principle of Stationary Phase.
PRI    Pulse Repetition Interval.
PSLR   Peak Side Lobe Ratio (ratio to main lobe).
QPE    Quadratic Phase Error.
QRCM   Quadratic Range Cell Migration.
QRCMC  Quadratic Range Cell Migration Correction.
RDA    Range Doppler Algorithm.
RCM    Range Cell Migration.
RCS    Radar Cross Section.
RFM    Reference Function Multiply.
RHS    Right Hand Side.
SAR    Synthetic Aperture Radar.
SNR    Signal-to-Noise Ratio.
SRC    Secondary Range Compression.
TBP    Time Bandwidth Product.
TDC    Time Domain Correlation.
TSPP   Two Stationary Phase Points.
ZESS   Zentrum für Sensorsysteme (Center for Sensor Systems, University of Siegen, Germany).

List of Symbols

α  Perturbation coefficient for the parallel case.
α_1, α_2  Arbitrary perturbation coefficients.
α_d  Non-linear perturbation coefficient.
α_m  Perturbation coefficient for the monostatic case.
α_ref  The perturbation coefficient at the reference range (parallel case).
α_ref,st  The perturbation coefficient at the reference range (stationary case).
α_st  Perturbation coefficient for the stationary receiver case.
∇  Vector gradient operator.
γ_az  Broadening factor due to weighting in Doppler.
γ_1  Half the difference between R_ToP and R_RoP.
γ_n  Semi-bistatic range of an arbitrary point Target.
γ_0  Semi-bistatic range of reference point Target P_0.
γ_r  Broadening factor due to weighting in range frequency.
Γ_xy  Operator to project a vector to the ground plane.
δρ_l  Lateral resolution.
δρ_c  Cross range resolution.
δρ_r  Range resolution.
δ_lrcm  The amount of LRCM to compensate in the range time domain.
Δα  The change in magnitude of the perturbation coefficient between two consecutive range cells.
Δη_R  Two-dimensional spectrum representing the difference between η_b and η_R.
Δη_T  Two-dimensional spectrum representing the difference between η_b and η_T.
Δw  The size of the invariance region in azimuth time.
Δφ_3  Third-order phase component of φ_2df.
Δφ_3rd  Third-order phase error for the parallel case.
Δφ_4  Fourth-order phase component of φ_2df.
Δφ_4th  Fourth-order phase error for the parallel case.
Δφ_a  The upper bound of the phase error resulting from applying FM rate equalization before residual RCMC (parallel case).
Δφ_a,err  The phase error from applying several discrete perturbation coefficients over the same azimuth aperture (parallel case).
Δφ_a,st  The upper bound of the phase error resulting from applying FM rate equalization before residual RCMC (stationary receiver case).
Δφ_s  Higher-order phase component in the approximate smile operator.
Δφ_src  Phase error resulting from the use of a bulk SRC filter for the processing block.
Δr  A small change in bistatic range.
ΔR  Sum of ΔR_T and ΔR_R.
ΔR_R  Difference in range between R_RcenA and R_RcenB.
ΔR_RxP1  Receiver component of ΔR_x.
ΔR_T  Difference in range between R_TcenA and R_TcenB.
ΔR_TxP1  Transmitter component of ΔR_x.
ΔR  The size of the invariance region in slant range.
η  Azimuth time or slow time.
η_d  The time delay of the beam crossing time of Target C relative to Target B.
η_1  Dummy azimuth time variable.
η̃  Solution to the bistatic stationary point.
η̂  Approximate solution to the bistatic stationary point (LBF).
η_R  Solution to the stationary point of the receiver.
η_R0  The azimuth time when the receiver reaches the closest range of approach.
η_T  Solution to the stationary point of the transmitter.
η_T0  The azimuth time when the transmitter reaches the closest range of approach.
θ_d  Dipping angle.
θ_g  Angle between u_g and v_g.
θ_i  Incidence angle.
θ_n  The squint angle of an equivalent monostatic system to point Target P_n for the azimuth-invariant case.
θ_0  The squint angle of an equivalent monostatic system to reference point Target P_0 for the azimuth-invariant case.
θ_v  Time domain phase of a wideband signal (before applying POSP).
θ_r  Frequency domain phase of a wideband signal (after applying POSP).
θ_sq  Squint angle of a monostatic system.
θ_sqR  Squint angle of the receiver.
θ_sqT  Squint angle of the transmitter.
θ_sqR1  Squint angle of the receiver to point Target P_1.
θ_sqT1  Squint angle of the transmitter to point Target P_1.
θ_sqR2  Squint angle of the receiver to point Target P_2.
θ_sqT2  Squint angle of the transmitter to point Target P_2.
κ  A numerical constant.
λ  Wavelength.
ρ_r  Sinc-like pulse envelope in range.
ρ_az  Sinc-like pulse envelope in azimuth.
ρ_a  A sinc-like pulse in the range Doppler domain.
σ  Reflectivity of an arbitrary point target.
σ̂  Estimate of reflectivity σ.
τ_1  Dummy time variable.
τ  Range time or fast time.
r  Coordinate of an arbitrary point in the image plane.
φ_2df  Phase of the two-dimensional spectrum of the demodulated SAR signal.
φ_3rd  The peak-to-peak third-order phase of a point Target (parallel case).
φ_4th  The quartic phase of a point Target (parallel case).
φ_4th,st  The quartic phase of a point Target (stationary case).
φ_a  Phase of the smile operator in the two-dimensional frequency domain.
φ_az  Azimuth modulation phase in the azimuth frequency domain.
φ_azA  Azimuth modulation phase for point Target A in the azimuth frequency domain (parallel case).
φ_azC  Azimuth modulation phase for point Target C in the azimuth frequency domain (parallel case).
φ_azE'  Azimuth modulation phase for point Target E' in the azimuth frequency domain (stationary receiver case).
φ_np  Phase of h_np.
φ_p  Phase of an arbitrary point target after linear phase correction.
φ_v  Phase of a wideband signal.
φ_rcm  Phase for RCM in the two-dimensional frequency domain.
φ_res  Residual phase.
φ_gr  Phase for range modulation in the range frequency domain.
φ_R  Receiver component of the phase of the two-dimensional demodulated SAR signal.
φ_st  Phase of h_st.
φ_src  Phase for SRC in the two-dimensional frequency domain.
φ_srcA  Phase for SRC in the two-dimensional frequency domain for point Target A.
φ_T  Transmitter component of the phase of the two-dimensional demodulated SAR signal.
φ_rcm,az  The azimuth-frequency-dependent term in φ_rcm.
ψ  The phase of the Fourier integrand in the K_γ–K_v domain.
ψ_v  A phase component of the Fourier integrand in the K_γ–K_v domain after expansion about v_0.
Ψ_1  Quasi-monostatic phase term.
Ψ_2  Bistatic deformation term.
w_az  Time domain azimuth envelope.
w_r  Time domain range envelope.
A_0  Models the backscattering coefficient, range attenuation and antenna pattern in elevation.
a, b, c, d, x'  Constants (used in Appendix B.1).
a'_0, a'_2  Coefficients used by the LBF to determine the bistatic grade.
a_0, a_1, a_2, a_3  Coefficients of the forward function in the reversion of a power series.
A_1, A_2, A_3  Coefficients of the inverse function in the reversion of a power series.
B_1, B_2, B_3  Coefficients of the power series representation of azimuth time η in terms of azimuth frequency f_η.
B_T  Bandwidth of a linear FM signal.
c  Speed of light.
c_e  Speed of seismic wave in a homogeneous medium.
f_κ  A constant in frequency.
f_η  Azimuth frequency.
f_ηc  Mean azimuth frequency (Doppler Centroid).
f_τ  Range frequency.
f_d  Doppler frequency.
f_0  Carrier frequency.
f_shift  A small shift of the signal in azimuth frequency caused by the linear phase term in the NLCS algorithm (parallel case).
f_shift,st  A small shift of the signal in azimuth frequency caused by the linear phase term in the NLCS algorithm (stationary receiver case).
F  FT operation.
F⁻¹  Inverse FT operation.
F_k  A scaled value of cos²(θ_sq).
F_R  A scaled value of cos²(θ_sqR).
F_T  A scaled value of cos²(θ_sqT).
g  A two-dimensional differentiable time function.
G  Two-dimensional FT of g.
h  Half the baseline of the constant offset case.
h_np  Frequency-domain matched filter of S_azA.
h  Time-reversed conjugate of p.
h_st  Frequency-domain matched filter of S_azD.
H  FT of signal h.
H_a  Smile operator in the two-dimensional frequency domain.
H_s  Smile operator in terms of range frequency and cos(θ_sq).
I_1  Focused image in the two-dimensional time domain.
I_2  Image in the flat-Earth plane.
k_1…k_4  Derivatives of the slant range; the number in the subscript gives the order of the derivative.
k_A1…k_A4  Derivatives of the slant range for point Target A.
k_B1…k_B4  Derivatives of the slant range for point Target B.
k_D2, k_D4  Derivatives of the slant range for point Target D.
k_E2, k_E4  Derivatives of the slant range for point Target E.
k_R1…k_R4  Derivatives of the receiver slant range.
k_T1…k_T4  Derivatives of the transmitter slant range.
K_τ  Wavenumber associated with range frequency f_τ.
K_γ  Spatial frequency of γ.
K_m  A modified FM chirp rate that accounts for bulk SRC.
K_mA  A modified FM chirp rate that accounts for bulk SRC for point Target A.
K_r  FM chirp rate.
K_src  FM rate that accounts for bulk SRC.
K_srcA  FM rate that accounts for bulk SRC for reference point Target A (parallel case).
K_srcD  FM rate that accounts for bulk SRC for reference point Target D (stationary receiver case).
K_srcQ  FM rate that accounts for bulk SRC for edge point Target Q (stationary receiver case).
K_u  Spatial frequency of u.
K_v  Spatial frequency of v.
M_a  The number of range cells caused by RCM (parallel case).
M_a,st  The number of range cells caused by RCM (stationary receiver case).
p  Wide-bandwidth signal (typically the linear Frequency Modulated signal).
P  FT of the wideband signal p.
P_1  FT of a linear FM signal.
R  Instantaneous range of an arbitrary point target.
R_m  Instantaneous range of the reference point (Target D) in the stationary receiver case.
R_l  Instantaneous range after removing the linear range term.
R_cen  The sum of R_Tcen and R_Rcen.
R_cenA  The sum of R_TcenA and R_RcenA.
R_cenB  The sum of R_TcenB and R_RcenB.
R_cenC  Bistatic range of point Target C at η = η_d; approximately equal to R_cenB.
R_curv  Range curvature in the range Doppler domain.
R_lrcmA  The instantaneous slant range of Target A after removing the linear term.
r_0  The one-way slant range from the equivalent monostatic system to reference point Target P_0 for the azimuth-invariant case.
r_n  The one-way slant range from the equivalent monostatic system to point Target P_n for the azimuth-invariant case.
R_0  The common closest range of approach of the transmitter and receiver in the constant offset case.
R_R  Instantaneous range from the receiver to an arbitrary point target.
R_R0  The receiver closest range of approach.
R_RoP  The receiver closest range of approach to point Target P for the azimuth-invariant case.
R_Rcen  The range from the receiver to an arbitrary point target at azimuth time η = 0.
R_RcenA  The range from the receiver to point Target A at azimuth time η = 0.
R_RcenB  The range from the receiver to point Target B at azimuth time η = 0.
R_RcenP2  The range from the receiver to point Target P_2 at beam center crossing time.
R_s  One-way slant range for the monostatic case.
R_st  The slant range a point target will be focused to (stationary receiver case).
R_T  Instantaneous range from the transmitter to an arbitrary point target.
R_Tcen  The range from the transmitter to an arbitrary point target at azimuth time η = 0.
R_TcenA  The range from the transmitter to point Target A at azimuth time η = 0.
R_TcenB  The range from the transmitter to point Target B at azimuth time η = 0.
R_TcenP2  The range from the transmitter to point Target P_2 at beam center crossing time.
R_T0  The transmitter closest range of approach.
R_To  The transmitter closest range of approach to a target at the edge of the range swath for the stationary case.
R_ToP  The transmitter closest range of approach to edge Target P for the azimuth-invariant case.
R_ToQ  The transmitter closest range of approach to edge Target Q for the stationary case.
R_v  The instantaneous range expressed in terms of γ and y_n.
s_1  Range compressed demodulated signal after LRCMC and linear phase correction.
S_1′  Signal after applying a range FT to s_1.
S_1  Two-dimensional FT of s_1.
S_2df  FT of s_rc.
s  Baseband demodulated signal.
s  Focused point target signal.
s_A  Signal of point Target A after LRCMC and linear phase correction.
s_Apert  s_A after applying perturbation.
S_azA  FT of signal s_Apert.
S_azD  FT of signal s_Dpert.
s_C  Signal of point Target C after LRCMC and linear phase correction.
s_Cpert  s_C after applying perturbation.
S  Two-dimensional spatial frequency signal after applying RFM.
s_Dpert  The signal for reference point Target D after RCMC and FM rate equalization.
s_Epert  The signal for edge point Target E' after RCMC and FM rate equalization.
S_a  Range compressed signal in the range Doppler domain.
S_lrcm  Signal used for compensating linear phase.
s_in  Input to a matched filter.
S_in  FT of s_in.
s_out  Output of a matched filter.
s_r  Received signal.
s_rc  Range compressed demodulated signal.
s_t  Transmitted signal.
s_u  Range compressed signal in range time and azimuth spatial units.
S_u  Two-dimensional FT of s_u.
S_x  Two-dimensional spatial frequency signal after applying the "change of variable".
T_1, T_2, T_3  Azimuth time boundaries between different perturbation coefficients.
T_a  Integration time or exposure time.
t_b  Total time for a pulse to travel from a source/transmitter to a receiver in a bistatic setup.
T_d  Azimuth time interval.
t_DMO  Time delay between t_b and t_m.
t_m  Total time for a pulse to travel from a source/transmitter back to a receiver collocated with the source/transmitter.
T_p  Pulse width of a linear FM signal.
u  Arbitrary unit vector in the ground plane.
u_g  Unit vector in the ground plane that points in the direction of the largest rate of change of bistatic range.
u_r  Unit vector from the point target to the receiver.
u_t  Unit vector from the point target to the transmitter.
u  Spatial unit.
v  Spatial unit with an offset of y_n from u.
v*  Numerical solution to stationary phase.
v_0  Numerical solution to stationary phase with γ = γ_0 and y_n = y_0.
v_g  Unit vector in the ground plane that points in the direction of the largest rate of change of Doppler.
V  Speed of the monostatic system/equivalent monostatic system.
V_R  Speed of the receiver.
V_Rx, V_Ry, V_Rz  Velocity vector components of the receiver.
V_T  Speed of the transmitter.
V_Tx, V_Ty, V_Tz  Velocity vector components of the transmitter.
V_R  Instantaneous velocity vector of the receiver.
V_T  Instantaneous velocity vector of the transmitter.
W_az  Frequency domain azimuth envelope.
W_r  Frequency domain range envelope.
x_1, y_1, z_1  Defines the offset of the equivalent monostatic system for the azimuth-invariant case.
X_0, Y_0, Z_0  Cartesian coordinates of the reference point.
X_R, Y_R, Z_R  Cartesian coordinates of the antenna phase center of the receiver.
X_n, Y_n, Z_n  Cartesian coordinates of an arbitrary point in the ground plane.
γ_n, y_n  Coordinates of an arbitrary point in the γ–y plane.
X_T, Y_T, Z_T  Cartesian coordinates of the antenna phase center of the transmitter.
X_c, Y_c  Cartesian coordinates of a reference point in the ground plane.

Acknowledgements

I would like to thank my supervisors, Prof. I. G. Cumming and Dr. F. H. Wong, for their help and guidance. Prof. Cumming introduced me to the concepts of SAR data processing. Dr. Wong suggested the topic of bistatic SAR data processing, the Non-Linear Chirp Scaling algorithm and the series reversion method, which form the basis of the research work in this thesis. I would also like to express my appreciation for the scholarship and support from my company, DSO National Labs.

I am very grateful to Prof. Loffeld from the Center for Sensor Systems (ZESS), University of Siegen, Germany, for giving me the opportunity to work on real bistatic SAR data with his wonderful team at ZESS.

I would like to extend a heartfelt, sincere expression of gratitude to the following colleagues and friends, the foremost of countless people who have helped me along the way, for their encouragement and unwavering support. First of all, Catriona Runice, who patiently proofread countless versions of my thesis; Kaan Erashin, for being such a warm, caring friend and helping out on innumerable occasions; and Flavio Wasniewski and Ali Bashashati, who kept my spirits high throughout the difficult times. Then there are my friends at ZESS, Holger Nies, Koba Natroshvili and Marc Kalkuhl, all of whom toiled with me on the real data set. Thanks also go to my colleagues Yeo Siew Yam, Tong Cherng Huei, Leonard Tan and Lau Wing Tai for helping me in invaluable ways.

I would also like to thank Carollyne Sinclaire for always being there when my family needed help. Her sincerity and steadfast support have allowed me to concentrate on my work. A special thanks also goes to Sim Sok Hian for being such a supportive friend. Her sincerity, love and kind words of wisdom go a long way.

Last but not least, I would like to thank my wife, Susan, for her countless sacrifices and endless patience and understanding during this challenging period.

YEW LAM NEO

To my wonderful wife, Susan, and two lovely kids - Keane and Jazelle.

Chapter 1

Introduction

1.1 Background

A radar is an active imaging sensor that emits microwave radiation and uses an antenna to measure the reflected energy in order to recreate an image of the object that the radiation impinges on. It takes advantage of the long-range and cloud-penetration capabilities of radar signals to provide imagery under all weather conditions, day or night. An important application of radar is remote sensing. Remote sensing can be broadly defined as the collection of information about an object or phenomenon by a recording device that is not in physical or intimate contact with that object.

Just as in an optical system, where a wider lens or aperture yields a sharper beam and a finer imaging resolution, the resolution capability of a radar is limited by the physical size of the emitting antenna aperture. Generally, the larger the antenna aperture, the sharper the beam and hence the finer the resolution. However, because a radar operates at a much longer wavelength than optical systems, even a moderate resolution would require an antenna several hundred meters long. An antenna of such a size presents an impractical payload for an airborne platform.

Synthetic aperture radar (SAR) is a form of radar in which sophisticated postprocessing of radar data is used to produce a very narrow effective beam.
A SAR improves the resolution by synthetically creating a large aperture with the help of a spaceborne or airborne platform. The SAR system emits a stream of microwave radiation pulses at a series of points along a flight path. Each emitted pulse is carefully controlled so that the radiation is coherent and always in phase upon transmission. Each echo is then collected, digitized, phase adjusted and coherently added in a digital processor. Essentially, the processor generates a synthetic aperture much larger than the physical antenna length and hence creates a high-resolution SAR image. For this reason, the length of this flight path is called the synthetic aperture length.

The quality of modern-day, fine-resolution SAR imagery approaches that of the optical imagery to which we are naturally accustomed. Such SAR imagery may complement or even exceed optical imagery capabilities because of the inherent differences in the backscatter characteristics of the imaged object at radar frequencies. The processing speed of modern digital electronics and relatively inexpensive digital memories enable the synthesis of high-resolution imagery in, or close to, real time.

1.1.1 Bistatic Configuration

SAR has several imaging modes. The two basic SAR data-collection modes are stripmap mode and spotlight mode, as shown in Figure 1.1. In the stripmap mode, the antenna footprint sweeps along a strip of terrain parallel to the sensor's flight path. In the conventional stripmap mode, the antenna is pointed perpendicular to the radar flight path. This is known as the broadside stripmap imaging mode. However, in some situations, the antenna is pointed either forward or backward. This is the squinted stripmap imaging mode. In the spotlight mode, the antenna footprint is continuously steered to illuminate an area of the terrain.

Figure 1.1: Basic SAR imaging modes. (a) stripmap mode, (b) spotlight mode.

The antenna platform can be operated in different configurations.
A monostatic system is one where the transmitter and receiver are collocated. A bistatic SAR has separate transmitter and receiver sites. A multistatic SAR has more than two platforms, each serving as a transmitter, a receiver or both. A multistatic SAR can often be analyzed as a collection of bistatic systems.

The bistatic setup provides many advantages. One of the most important is the cost reduction achieved by allowing several simple and cheap passive receivers to share the more expensive active transmitting component located on one platform [1]. By using this configuration, the observation geometries are multiplied without incurring the cost associated with using several monostatic systems. The bistatic configuration is also advantageous in remote sensing, as more information on the ground scatterers can be collected by using different incidence and scattering angles. This gives the bistatic configuration an anti-stealth capability, since target shaping to reduce the monostatic Radar Cross Section (RCS) generally does not reduce the bistatic RCS. In a hostile environment, a high-powered transmitter can stay at a distance, out of reach of enemy fire, while a covert passive receiver can be located close to the scene and yet remain virtually undetectable by enemy radar.

A bistatic system has considerably more versatility than a monostatic system, since each platform can assume different velocities and different flight paths. Furthermore, a bistatic system may involve a combination of spaceborne, airborne and stationary ground-based platforms [1-3]. These systems may involve the teaming up of several bistatic receivers with existing monostatic platforms in order to save developmental costs [1,4,5]. Another interesting configuration is known as passive coherent location [6-9] or hitchhiking [10] mode. It makes use of broadcast or communications signals as illuminators of opportunity.
Despite all these advantages, and the fact that bistatic radar has been around longer than monostatic radar [10], operating in a bistatic SAR configuration presents many technical challenges that are either not present in, or are more serious than in, a monostatic SAR. Major technical challenges like time synchronization of oscillators, flight coordination, motion compensation, the complexity of adjusting receiver gate timing, antenna pointing and phase stability have traditionally been stumbling blocks for developing practical bistatic SAR systems [3,10-12].

Recent advances in navigation hardware, timekeeping, communication and digital computing speed have resulted in a resurgence of research and development in bistatic SAR [3,4,10,13]. Advances in the last two decades have made it possible to address some of these age-old issues and make bistatic SAR a viable option. Many European radar research institutes, like the DLR, ONERA [14], QinetiQ [15] and FGAN [16], have embarked on bistatic airborne experiments. Most of these experiments use two existing monostatic sensors to synthesize bistatic images.

An important area of research is the focusing of high-resolution bistatic SAR images, i.e., converting raw radar signals into focused images. Although there are many different approaches to bistatic SAR processing, the processing of bistatic radar data has still not been sufficiently solved [16]. The next section gives a brief description of how a collection of raw radar signals can be processed into a SAR image, and of the problems in processing a bistatic image.

1.1.2 The Two-Dimensional Signal

A SAR system emits pulses of radio waves into the imaged scene and collects echoes along the flight path. The imaged scene can be imagined as consisting of many infinitesimally small points, or point targets, each with its own complex scattering reflectivity. Each point target reflects the signal back to the SAR system.
The strength of the reflected signal, or backscatter, is dependent on the reflectivity. The delay time of each echo is dependent on the roundtrip distance, or slant range, from the transmitter to the point target and back to the receiver of the SAR system. The echo signal consists of a superposition of all the reflected signals from the illuminated scene. At the receiver antenna, this echo signal is demodulated from its carrier signal and downconverted to baseband in the receiver chain. A two-dimensional raw radar echo signal is created by stacking each demodulated echo, one after another, in digital memory [17].

The dimension in which the echo signal is digitized and recorded is called slant range, or simply range. The roundtrip range of each point target is determined by precisely measuring the time from the transmission of a pulse to receiving the echo from a target. Range resolution is determined by the transmitted pulse bandwidth, i.e. narrow pulses yield fine range resolutions. The digitized echo of each pulse is placed consecutively as it arrives along the other dimension, called azimuth. Without further processing, the azimuth resolution is simply the angular beamwidth of the antenna. By carefully controlling each emitted signal along the path, such that the radiation is coherent and always in phase upon transmission, we can demodulate and coherently sum the return signals as if they were emitted from a physically long antenna. Thus, a very narrow beam is synthesized, resulting in a finer azimuth resolution.

The two-dimensional raw radar signal consists of echoes from many point targets. In fact, the received data look very much like random noise. Focusing is the processing step that transforms the two-dimensional raw signal data from a point target into a point target impulse response. The impulse response is a sinc-like response in both range and azimuth [17]. The focused image consists of a superposition of all the impulse responses.
A final step, called registration, interpolates the focused image from slant range and azimuth coordinates to spatial coordinates in the ground plane. After image registration, the SAR image will resemble a map-like optical image, except for terrain height effects.

1.1.3 Range and Range Resolution

The ability of a SAR system to achieve high resolution in range and azimuth lies in the phase modulation in range and azimuth. In the case of range, the phase modulation is achieved through deliberate phase coding of the transmitted pulse. In azimuth, the phase modulation is a result of the motion of the platforms. The phase modulation allows a processing technique called pulse compression to form a narrow pulse in both range and azimuth. Pulse compression is further described in Section 2.3.

To achieve high resolution in range, a short pulse is required. At the same time, a high Signal-to-Noise Ratio (SNR) is required to achieve a good quality image. Thus, a high-power short pulse is needed to meet both requirements. However, such requirements put stringent demands on the peak power of the transmitter. The solution is to transmit a long, phase-encoded pulse with a large bandwidth [17,18]. This phase-modulated pulse can be pulse compressed to produce a narrow pulse with high power.

1.1.4 Azimuth and Azimuth Resolution

In azimuth, the phase modulation is dependent on the variation of slant range with azimuth time. Assuming a linear flight path, the slant range from a platform to an arbitrary point target is a hyperbolic function of azimuth time. In addition to this phase modulation, the range position of the echo signal varies with azimuth time and, if the integration time is long enough, it will migrate over several range samples. This phenomenon is called Range Cell Migration (RCM).
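As a rough numerical sketch of the size of the RCM effect (the parameter values below are illustrative, not taken from this thesis), the migration of a broadside hyperbolic range history R(η) = sqrt(R0² + V²η²) over the synthetic aperture can be compared with the range cell size:

```python
import math

# Illustrative (hypothetical) airborne SAR parameters, not taken from the thesis
R0 = 10e3          # closest slant range (m)
V = 150.0          # platform speed (m/s)
T_ap = 10.0        # synthetic aperture (integration) time (s)
range_cell = 1.5   # range sample spacing (m), roughly a 100 MHz bandwidth system

def slant_range(eta):
    """Hyperbolic range history for a broadside monostatic geometry."""
    return math.sqrt(R0**2 + (V * eta)**2)

# Range cell migration: change in slant range from aperture centre to edge
migration = slant_range(T_ap / 2) - slant_range(0.0)
cells = migration / range_cell
print(f"RCM over the aperture: {migration:.2f} m, about {cells:.1f} range cells")
```

For this hypothetical geometry, the energy of a single target drifts through well over ten range cells, which is why RCM correction is a central step in the focusing algorithms discussed later.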
To achieve high resolution in azimuth, the SAR processor must also deal with this RCM effect, as it spreads the point target energy over several range cells and degrades the point target response. The RCM effect becomes even more severe in squinted SAR, where the antenna points at an angle away from broadside. As this angle increases, the RCM spreads over even more range gates and may eventually cause the range and azimuth modulations to become coupled (dependent on each other). This coupling effect greatly increases the complexity of the focusing [19].

1.1.5 Processing Algorithms

The most exact and most direct solution for focusing a SAR image is to use a two-dimensional replica for each point in the imaged area [20]. This replica takes care of the range migration and has an accurate phase for each point target. A two-dimensional correlation of this replica with the collected SAR data will focus each point target accurately. However, this two-dimensional replica must be recreated for each point in the imaged region, since each point has its own unique range history with the platforms. Performing this two-dimensional correlation for each point would be computationally intensive. Thus, the goal of all SAR processing algorithms is to make suitable approximations and perform this focusing task in a more efficient way, without causing too much degradation in the image.

One way to achieve efficiency is to operate in the frequency domain. For monostatic configurations, operating in the Doppler domain allows the algorithms to achieve efficiency in matched filtering using fast convolution techniques. Also, for a monostatic configuration, point targets with the same closest range of approach collapse to the same trajectory in the Doppler domain. This is known as the azimuth-invariant property.
This stationarity property is important to many popular and efficient monostatic algorithms, such as the Range Doppler Algorithm (RDA) [21-23], the Chirp Scaling Algorithm (CSA) [24] and the ω-k Algorithm [25-27]. These algorithms operate mainly in the range Doppler domain or the two-dimensional frequency domain. The analytical solution for a point target spectrum is the starting point of many of these frequency based algorithms [17]. While the point target spectrum for the monostatic case has been derived [28], a simple analytical solution for the signal in the azimuth frequency domain does not exist for the general bistatic case [29-31].

Very often, the azimuth phase modulation for the monostatic configuration assumes a hyperbolic range equation. In the case of a bistatic configuration, the slant range history is the sum of two independent hyperbolic range functions, giving a so-called flat-top hyperbola [29,30,32]. It is the DSR function that makes it hard to invert the phase function [30]. This inversion is required in order to derive an analytical function for the bistatic point target spectrum [33]. Nevertheless, several bistatic algorithms have been developed to overcome this difficulty.

The first approach is to solve the problem numerically [34-37]. These algorithms make use of numerical methods to calculate the double square root phase term. Bamler and Boerner [31] proposed a focusing algorithm that replaces the analytical SAR transfer functions with numerical equivalents.

The second approach is to transform the bistatic data to a monostatic equivalent. In [32], a convolution phase term called the Rocca smile operator is used to perform this step. It is based on Dip MoveOut (DMO) [38], used in seismic data processing. This method was limited to processing the bistatic case where receiver and transmitter have identical velocities and flight paths.
A recent extension to this article [30] was able to reduce a general bistatic configuration to a monostatic configuration by using a space-varying transfer function. However, such a reduction to a monostatic configuration may not be applicable for the more extreme bistatic cases [30]. An alternative method to transform an azimuth-invariant bistatic configuration to a monostatic equivalent is to approximate the bistatic range equation by a hyperbolic range function with a modified velocity parameter. This solution is well known from the accommodation of curved orbits in the monostatic case [39]. However, this equivalent velocity approach becomes increasingly inaccurate with increasing separation between the transmitter and receiver [31].

The third approach is to solve for the two-dimensional spectrum directly using the method of stationary phase [17,40]. An approximate analytical solution for the general bistatic two-dimensional frequency spectrum has been proposed in [29]. This analytical formulation has two phase components in the spectrum: a quasi-monostatic phase term and a bistatic deformation term. Such a formulation calls for a step to remove the bistatic deformation term, followed by a quasi-monostatic focusing step. This is similar to the method of using Rocca's smile operator to transform data from bistatic to monostatic. In [41], it was also shown that the DMO method for the stationary case [32] is a special case of the more general approach derived in [29].

Most of the bistatic algorithms have a common drawback: they restrict their focusing to the azimuth-invariant case [31,32,34,35,42]. In a bistatic configuration, the system can remain azimuth-invariant by restricting the transmitter and receiver platform motions to parallel tracks with identical velocities. This would place stringent requirements on the flight paths of the bistatic platforms [43].
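The flat-top behaviour of the DSR, and the limits of the equivalent-velocity idea mentioned above, can be illustrated numerically. The sketch below (a hypothetical azimuth-invariant geometry: parallel tracks with a common speed, values chosen only for illustration) matches a single hyperbola to the DSR value and curvature at the apex and shows the two curves drifting apart toward the aperture edges:

```python
import math

# Hypothetical azimuth-invariant bistatic geometry: parallel tracks, common speed
V = 150.0              # common platform speed (m/s)
RT0, RR0 = 12e3, 8e3   # closest ranges of transmitter and receiver (m)

def dsr(eta):
    """Double Square Root range: R(eta) = R_T(eta) + R_R(eta)."""
    return math.sqrt(RT0**2 + (V * eta)**2) + math.sqrt(RR0**2 + (V * eta)**2)

# Single "equivalent monostatic" hyperbola 2*sqrt(Re^2 + Ve^2*eta^2), matched
# to the DSR value and curvature at the apex (eta = 0)
Re = dsr(0.0) / 2.0
curv = (dsr(1e-3) - 2.0 * dsr(0.0) + dsr(-1e-3)) / 1e-6   # numerical 2nd derivative
Ve = math.sqrt(curv * Re / 2.0)

def hyperbola(eta):
    return 2.0 * math.sqrt(Re**2 + (Ve * eta)**2)

# Exact at the apex, but the flat-top DSR pulls away toward the aperture edge
err_edge = abs(dsr(5.0) - hyperbola(5.0))
print(f"apex error {abs(dsr(0.0) - hyperbola(0.0)):.2e} m, edge error {err_edge:.4f} m")
```

Even a centimetre-level range mismatch is a sizeable fraction of a typical SAR wavelength, which is consistent with the observation that the equivalent-velocity fit degrades as the transmitter-receiver separation grows.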
The Non-Linear Chirp Scaling (NLCS) algorithm [44] was shown to be an innovative way to focus SAR images. It is able to focus monostatic images and has been demonstrated to focus bistatic configurations where the transmitter is imaging at broadside and the receiver is stationary. For monostatic cases, the NLCS uses a linear range cell migration correction (LRCMC) step to align trajectories with different Frequency Modulation (FM) rates along the same range gates. This makes the signal both range-variant and azimuth-variant, as in the case of bistatic SAR signals. The algorithm uses chirp scaling [17,24] to make the azimuth phase history azimuth-invariant during the processing stage.

The NLCS is inherently suited to the image formation of bistatic signals, since it is able to handle range- and azimuth-variant signals. Furthermore, this algorithm has the potential to process high-squint SAR data, as it also eliminates most of the range-Doppler coupling. Clearly, developing this algorithm further to handle other bistatic configurations would be advantageous. At the moment, the properties and limitations of this relatively new algorithm are not well understood.

1.2 Scope and Thesis Objectives

The problem addressed in this thesis is the processing of bistatic stripmap SAR data acquired in squint mode. The objectives of this thesis are as follows:

• Review bistatic SAR imaging and bistatic SAR processing algorithms, and describe the NLCS algorithm.
• Derive an accurate analytical solution for the two-dimensional frequency signal and compare it with some existing analytical point target spectra.
• Develop a bistatic processing algorithm based on the two-dimensional frequency signal.
• Investigate the NLCS algorithm and extend it to handle other bistatic geometry configurations.
• Find the limitations of the extended NLCS algorithm and investigate how registration can be done.
The flight geometries investigated include the following:

• Stationary receiver with moving transmitter.
• Both platforms moving along parallel tracks with the same velocity.
• Platforms moving along non-parallel tracks with different velocities.

The results of this thesis will be useful to a number of agencies working on bistatic SAR development, including the author's sponsoring company, DSO National Labs in Singapore.

1.3 Thesis Organisation

The organization of the thesis is as follows:

• Chapter 1 — Introduction
The thesis begins with an overview of bistatic SAR concepts and bistatic SAR image formation. It gives an overview of the advantages of the bistatic configuration and also of the problems facing bistatic SAR image formation. These problems provide the motivations for this thesis.

• Chapter 2 — Properties of Bistatic SAR Data and Imagery
This chapter provides a theoretical basis for understanding bistatic SAR processing. It discusses the bistatic signal model, pulse compression, the point target impulse response and the point target quality measurements used to characterize the bistatic SAR system.

• Chapter 3 — Bistatic SAR Processing Algorithms
This chapter gives a short overview of the existing bistatic SAR processing algorithms and describes the strengths and weaknesses of each algorithm.

• Chapter 4 — A New Point Target Spectrum
The material presented from Chapter 4 to Chapter 7 is new, and constitutes the contributions of this thesis. Chapter 4 discusses an accurate point target spectrum based on the method of series reversion (MSR) and compares its accuracy with existing point target spectra.

• Chapter 5 — Bistatic Range Doppler Algorithm
A new bistatic Range Doppler Algorithm, based on the MSR, is derived for the fixed-baseline, azimuth-invariant case. This method is applied to focus real bistatic data.
• Chapter 6 — NLCS Algorithm for the Parallel and Slightly Non-Parallel Cases
The extension of the NLCS algorithm to the parallel and slightly non-parallel track cases is described here.

• Chapter 7 — NLCS Algorithm for the Stationary Receiver Case
The improvements of the NLCS algorithm for the stationary receiver and moving transmitter case are described here.

• Chapter 8 — Conclusions
The thesis concludes by giving a summary of the contributions of this thesis and suggesting a few areas for future work.

Chapter 2

Properties of Bistatic SAR Data and Imagery

2.1 Introduction

This chapter provides a theoretical basis for understanding bistatic SAR image formation and introduces the notation used in this thesis. First, the bistatic SAR imaging geometry is described. Using this geometry model, the demodulated two-dimensional bistatic SAR signal for an arbitrary point target is formulated. The entire illuminated scene can be modeled as a superposition of many point target signals, and the imaged scene can be reconstructed using a matched filter approach. Next, a brief review of matched filtering shows how each point target in the image is reconstructed as a two-dimensional impulse response. The reconstructed impulse response is a sinc-like response in both range and azimuth, since the received signals are bandlimited in the range and azimuth domains. Many important SAR image quality parameters can be estimated from this impulse response, and the SAR system can be characterized using these measured parameters. These quality measures are used to determine the accuracy of a processing algorithm. For a bistatic setup, the resolution is dependent on the geometry of the bistatic configuration as well.

2.2 Bistatic SAR Geometry

A bistatic system consists of separate transmitter and receiver sites, whereby each platform can assume different velocities and different flight paths [Figure 2.1].
The angle between the line of sight of the transmitter and the line of sight of the receiver forms the so-called bistatic angle β. The baseline is the line joining the transmitter and the receiver. Generally, the baseline is continuously changing when the velocity vectors of the platforms are different. In the configuration illustrated in Figure 2.1, one platform works in stripmap mode while the other platform steers its antenna footprint to match the antenna footprint of the former. This mode of operation has been described as pulse chasing [45] or footprint chasing [46,47]. In practice, it may be expensive to employ a system where accurate steering of the antenna is required during the integration time [47]. This problem could be addressed with a configuration wherein both transmitter and receiver work in stripmap mode, with both antennas using a fixed squint angle. However, one of the platforms should have a wider beam footprint, as shown in Figure 2.2. Steering of the antenna would still have to be done to make the beam footprints coincide, but at a much less stringent update rate than with two platforms with equally small beam footprints. The illuminated footprint is the product of the antenna footprints of the receiver and the transmitter; this is also known as the composite antenna footprint [43].

Figure 2.1: Imaging geometry of bistatic SAR.

2.2.1 Bistatic SAR Signal Model

The area being imaged can be modeled as a collection of point targets with different reflectivities. It is sufficient to analyze the scene using the signal response of an arbitrary point target, since the raw signal of the scene is a two-dimensional signal that consists of a superposition of the echo signals from each point target in the imaged patch. In the SAR signal model in Figure 2.3, a flat Earth model is assumed, with the area to be imaged in the xy plane. The transmitter has a velocity of V_T and the receiver a velocity of V_R.
The axes x, y and z make a right-handed Cartesian coordinate system, with the y direction parallel to the velocity vector of the transmitter and the z axis pointing away from the Earth. Accurate position measurements of SAR platforms during flight are essential to avoid significant deterioration in image quality. Accuracy in estimating the differential range position should be of the order of a fraction of the wavelength [48-50]. This places great constraints on the measurement units, especially for short-wavelength systems with long apertures. Platforms usually adopt a linear flight path with constant velocity, as it is most convenient and it relaxes the requirements on the motion measurement units [51]. Autofocus and motion compensation are techniques that can help to estimate the phase errors and refocus the image [48,51].

Figure 2.2: Practical implementation of bistatic imaging.

Assuming linear flight paths, the instantaneous range equation, R(η), consists of the sum of two hyperbolic range functions, R_T(η) and R_R(η):

    R(η) = R_T(η) + R_R(η)    (2.1)

where η is azimuth time, V is the scalar velocity of the platform, R is the instantaneous range to the point target, and the subscripts T and R refer to the transmitter and receiver, respectively. The subscript cen refers to the geometry at time η = 0, that is, when the ranges to the target are R_Tcen and R_Rcen. The sum of R_Tcen and R_Rcen is given by R_cen.

The geometry of the bistatic SAR data collection is illustrated in Figure 2.3. Equation (2.1) expresses how the two-way range to the point target is given by the sum of R_T and R_R as a function of the azimuth time, η. θ_sqT is the squint angle of the transmitter, and θ_sqR is the squint angle of the receiver at this time. The receiver clock is synchronized with the transmitter clock.

Figure 2.3: A general bistatic configuration of transmitter and receiver at η = 0.
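As a concrete sketch of the range equation (2.1) and of the bistatic angle β, the snippet below propagates two linear flight paths and evaluates the DSR range history; all positions, velocities and the target location are hypothetical values chosen only for illustration:

```python
import numpy as np

# Hypothetical linear trajectories over a flat Earth (metres, m/s, seconds)
pT0 = np.array([0.0, -300.0, 6000.0]);    vT = np.array([0.0, 150.0, 0.0])
pR0 = np.array([5000.0, -500.0, 3000.0]); vR = np.array([0.0, 130.0, 0.0])
target = np.array([2000.0, 0.0, 0.0])

def bistatic_range(eta):
    """R(eta) = R_T(eta) + R_R(eta), the Double Square Root range of (2.1)."""
    R_T = np.linalg.norm(pT0 + vT * eta - target)
    R_R = np.linalg.norm(pR0 + vR * eta - target)
    return R_T + R_R

for eta in (-1.0, 0.0, 1.0):
    print(f"eta = {eta:+.1f} s  ->  R = {bistatic_range(eta):.2f} m")

# Bistatic angle between the two lines of sight at eta = 0
los_T = target - pT0
los_R = target - pR0
cos_b = los_T @ los_R / (np.linalg.norm(los_T) * np.linalg.norm(los_R))
beta = np.degrees(np.arccos(np.clip(cos_b, -1.0, 1.0)))
print(f"bistatic angle at eta = 0: {beta:.1f} degrees")
```

Because the two platforms have different velocities here, R(η) is not symmetric about η = 0; this azimuth-variance is exactly what makes the general bistatic case harder to focus than the monostatic one.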
Synchronizing the timing of the transmitter and receiver is not a trivial task, especially for a system with a long and varying baseline [13,16]. Poor time synchronization leads to image blurring. Phase stability of the local oscillators is also an important criterion for good image quality [11].

As the transmitter travels, it emits pulses of radio waves at regular intervals called the Pulse Repetitive Interval (PRI). Each echo is downconverted and digitized in the receiver within the PRI interval. The digitization takes place in the range dimension, and the sampling rate of the digitized signal must be greater than the signal bandwidth, B_T, to prevent aliasing [52]. Each echo is stacked, one after another, in memory at the PRI interval as the antenna sweeps over the imaged region.

The radar pulse travels at the speed of light, c, which is much faster than the platform velocity. Therefore, the platform can be assumed to be stationary during transmission and reception. This is also known as the "start-stop" approximation. Thus, a two-dimensional signal s(τ, η) is recorded in memory, where the echo is digitized in range time, or fast time, τ at the sampling rate, and the echoes are recorded at PRI intervals in azimuth time, denoted by η. The azimuth time is also known as slow time because of the low platform speed compared to the speed of light.

2.2.2 Demodulated Signal

At each PRI, the SAR system creates a wide-bandwidth signal p(τ). This signal is then upconverted by the transmitter to the carrier frequency f_0. The transmitted signal is given by

    s_t(τ) = Re{ p(τ) exp(j2πf_0τ) }    (2.2)

A complex wide-bandwidth signal can be written as

    p(τ) = w_r(τ) exp( jφ_r(τ) )    (2.3)

where φ_r is the phase of the signal. An example of a wideband signal is a linear FM signal, where the signal's instantaneous frequency is a linear function of time.
This achieves a uniformly filled bandwidth, giving a rectangular function in the range frequency domain. An FM signal with a chirp rate, K_r, has a phase given by

    φ_r(τ) = π K_r τ²    (2.4)

and the envelope w_r is given by

    w_r(τ) = rect( τ / T_p )    (2.5)

where the width of the FM signal is given by T_p.

The echo signal is obtained by convolving the transmitted signal with a point target that has a bistatic slant range of R(η) and a beam center crossing time of η = 0. The received signal from a single point target can be represented by the complex signal

    s_r(τ, η) = w_az(η) p( τ − R(η)/c )    (2.6)

The envelope w_az(η) is the composite antenna pattern in azimuth [14,53,54]. The antenna pattern determines the strength of the returns at each azimuth interval as the antenna footprint sweeps across each point target. It also determines the integration time, or exposure time, of each individual target. The echo signal is demodulated to baseband to reduce the demands on the digitizer and memory. After downconversion, the demodulated signal becomes

    s(τ, η) = A_0 w_r[ τ − R(η)/c ] w_az(η) exp{ −j2πf_0R(η)/c } exp{ jπK_r( τ − R(η)/c )² }    (2.7)

where A_0 models the complex backscatter coefficient σ, including the range attenuation and the antenna pattern in elevation. This demodulated signal is also known as the SAR raw signal or SAR signal data.

Note that this signal is only baseband in the range direction but not in the azimuth
The analysis can be simplified by using an "effective radar velocity", V , that varies with range and slowly varies with azimuth as r  the satellite orbit and Earth rotation change with latitude [57]. The important thing to note is that the effective radar velocity for each point target is approximately a constant for the target's exposure time [17,39]. The magnitude of V lies in between the satellite tangential velocity, V , and the r  s  velocity of the beam footprint, V . For SAR processing, the effective radar velocity g  must be calculated from the satellite/Earth geometry [55]. A simplified approach to determining the velocity is found in [39]. Typical effective radar velocity varies about a few percent for the entire range swath. For instance for the case of RADARSAT, the effective velocity varies by about 1% for a range swath of 300 km [17,58]. Before continuing, we would like to look into the topic of pulse compression to appreciate the image reconstruction that follows.  2.3  Pulse Compression  A SAR system requires the use of a narrow pulse to have good resolution capability and the confliction requirement of a high transmit peak power to produce good ranging capabilities. Pulse compression is a solution to minimizing transmit peak power while  21  achieving good resolving capability and a high SNR. Pulse Compression is a matched filtering technique that compresses a long, phase-encoded, wide-bandwidth pulse into a narrow pulse. A phase encoded signal such as the popular linear F M signal is transmitted in the range domain.  2.3.1  Frequency Domain Matched Filter  Pulse compression is achieved through a matched filtering operation. If a desired signal, is buried in a noisy signal, such as the transmitted chirp pulse in the echo signal, it can be found by cross-correlating this signal with a conjugate replica of the desired signal. That is +oo / -co  where s  out  Sn„(?)P*(?-T)c??  
is the matched filter output signal and s  (2.8)  is the input signal to the matched  nn  filter and it represents a desired signal p, corrupted by noise, q is a dummy time variable, p* denotes the complex conjugate of the complex variable p. The matched filter can also be viewed as a convolution filter, by time-reversal of the filter kernel where the filter is given by  h (r) p  =  p*(—r)  +CO  SnnMM " 7  / •oo  _  S)dc.  (2.9)  Matched filtering can be implemented in time domain using convolution or it can be implemented efficiently using frequency fast convolution as shown in (2.10), Sout(r) = F - ( S ( / ) • 1  nn  where  S (f ) nn  T  and  H(f ) T  r  (2.10)  H(f )) T  are the Fourier Transform (FT) of the signal s  nn  and the  convolution filter h respectively. f is the range frequency. The matched filter is T  22  designed in the frequency domain using the Principle of Stationary Phase (POSP) [40] and [59], which is described in the next section.  2.3.2  Principle of Stationary Phase  The analytical form of the spectrum of a wide-bandwidth signal can be derived using the POSP. The FT of a wideband signal is given by P(fr)  (2.11)  = Jp(T)exp{-j2nf T)dT T  The analytical form of the integral is difficult to derive. However, the approximate FT may be obtained by using the POSP. It is based on the fact that the main component of the integral comes from around the stationary point of the wide-bandwidth phase signal. The rest of the components oscillate rapidly so their contributions cancel out. A stationary point is defined as the point where the gradient of a function is zero [60]. In the POSP context, the function is the phase on the Right Hand Side (RHS) of (2.11), 6>(r)), and the stationary point can be found by setting the derivative of this r  phase to zero, <W (T) r  d(Mr)-2nf r)  =  T  dr  =  '  Q  dr  From this equation, the relation between frequency, f , and time, r can be deterT  mined. 
This equation has to be inverted to get an analytical function for τ expressed in terms of f_τ, denoted by τ(f_τ). Stating the result of the derivation, which is detailed in [40,55,59], the spectrum of the signal is given by

    P(f_τ) = C₁ W_τ(f_τ) exp{ jΘ(f_τ) ± jπ/4 }                          (2.13)

• C₁ is a constant and can usually be ignored.

[Figure 2.4: Matched filtering of a linear FM signal with a signal bandwidth of 100 MHz, T_p = 1.28 μs, giving a TBP of 128. Panels: (a) real part of linear FM signal; (b) phase of linear FM signal and matched filter; (c) compressed signal; (d) expanded compressed signal.]

• W_τ is the frequency domain envelope and it is a scaled version of the time domain envelope w_τ:

    W_τ(f_τ) = w_τ[ τ(f_τ) ]                                            (2.14)

• Θ(f_τ) is the frequency domain phase, which is also a scaled version of the time domain phase, θ_t(τ):

    Θ(f_τ) = θ_t[ τ(f_τ) ]                                              (2.15)

The POSP is an approximation. However, it is accurate if the signal has a "time bandwidth product" (TBP) of around 100 or more [17].¹

¹TBP is an important parameter of a signal; it is simply the product of the pulse width and the FM signal bandwidth. It is also proportional to the compression ratio.

2.3.3  Compression of a Linear FM Signal

It is instructive to derive the spectrum of an FM chirp signal and apply matched filtering to the signal to see how a long, wide-bandwidth FM signal is pulse compressed to produce a narrow impulse response. Applying the POSP to a baseband FM signal [see Figure 2.4(a)], the spectrum of the signal is given by (ignoring the effects of amplitude modulation)

    P_lfm(f_τ) = rect( f_τ / (K_r T_p) ) exp( −jπ f_τ² / K_r )          (2.16)

The spectrum of the signal is also a complex FM signal in the frequency domain, and the envelope is preserved between the two domains.
The phase is approximately quadratic in the frequency domain as well [see Figure 2.4(b)]. The matched filtering operation essentially cancels the phases between the spectrum of the original signal and the spectrum of the conjugate signal:

    P_lfm(f_τ) P*_lfm(f_τ) = rect( f_τ / (K_r T_p) )                    (2.17)

After the IFT, a narrow compressed sinc pulse results, as shown in Figure 2.4(c). The signal is given by

    s_lfm(τ) = sinc( K_r T_p τ )                                        (2.18)

Figure 2.4(d) shows the expanded compressed pulse. The width of the pulse, or resolution, is measured between the 3 dB points. The sinc pulse has a resolution, δp_r, which is inversely proportional to the bandwidth, B_r, of the transmitted pulse:

    δp_r = 0.886 / B_r = 0.886 / (K_r T_p)                              (2.19)

Thus, matched filtering has compressed a signal with a bandwidth of K_r T_p and a pulse width of T_p into a narrow pulse of width approximately 1/B_r. The compression ratio is the ratio of the width of the original pulse to that of the compressed pulse, and it is approximately the TBP, K_r T_p². The peak sidelobe ratio (PSLR) is the smallest difference between the main lobe and the largest sidelobe. For a sinc pulse the PSLR is −13.3 dB. This sidelobe ratio is usually too high for most applications, where a nominal ratio of −20 dB or less is often desired. Windowing is applied to the frequency domain matched filter to improve the sidelobe ratio; the tradeoff is a broadening of the resolution cell. The resolution of the pulse is then given by

    δp_r = γ_rg / B_r                                                   (2.20)

where γ_rg is the amount of broadening due to window weighting. Table 2.1 gives the broadening factors of some commonly used windowing functions and their corresponding peak sidelobe ratios [52].

Table 2.1: Broadening factors for various windowing functions.

    Window        broadening γ_rg   PSLR (dB)   Comments
    Rectangular   1.00              -13.3
    Hamming       1.33              -43
    Hanning       1.30              -40
    Kaiser        1.18              -25         weighting parameter = 2.5

The demodulated signal (2.7) can now be compressed in the range direction.
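The compression described in Section 2.3.3 can be checked numerically. The sketch below (illustrative code, not from the thesis) uses the Figure 2.4 parameters, B_r = 100 MHz and T_p = 1.28 μs (TBP = 128), performs the matched filtering by frequency-domain fast convolution as in (2.10), and measures the 3 dB width and PSLR of the compressed pulse; the names and sampling rate are assumptions for this example.

```python
import numpy as np

# Pulse parameters from Figure 2.4: B_r = 100 MHz, T_p = 1.28 us, TBP = 128
B = 100e6; Tp = 1.28e-6; Kr = B / Tp; fs = 32 * B
t = np.arange(-Tp/2, Tp/2, 1.0/fs)
pulse = np.exp(1j*np.pi*Kr*t**2)                 # baseband linear FM replica

# matched filtering by frequency-domain fast convolution, as in (2.10)
N = 8192
P = np.fft.fft(pulse, N)
out = np.abs(np.fft.fftshift(np.fft.ifft(P*np.conj(P))))

peak_idx = int(np.argmax(out)); peak = out[peak_idx]
above = np.where(out >= peak/np.sqrt(2))[0]      # -3 dB extent of the mainlobe
irw_times_B = (above[-1] - above[0] + 1)/fs * B  # IRW in units of 1/B_r

null = int(round(fs/B))                          # first null ~ 1/B_r from peak
side = np.r_[out[:peak_idx-null], out[peak_idx+null+1:]]
pslr_db = 20*np.log10(side.max()/peak)           # near -13.3 dB for a sinc
print(round(irw_times_B, 2), round(pslr_db, 1))
```

The measured width comes out close to the 0.886/B_r rule of (2.19), and the sidelobe level close to the −13.3 dB of an unweighted sinc.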
If the range window w_r(·) is compressed to a sinc-like window p_r(·), the range compressed demodulated signal can be written as

    s_rc(τ, η) = p_r[ τ − R(η)/c ] w_az(η) exp{ −j2π R(η)/λ }           (2.21)

after ignoring the effects of amplitude modulation and the backscattering coefficient. In azimuth, a similar synthetic phase-encoded signal exists because of the way the slant range changes with azimuth time in the azimuth phase [see (2.7)]. Azimuth compression is more complicated, as the slant range trajectory causes RCM, as discussed in Section 1.1.4. Azimuth compression is performed only after RCMC. It is desirable to perform azimuth compression in the azimuth frequency domain because of the efficiency of the processing [see Section 1.1.5]. However, it is difficult to apply the POSP in the azimuth direction owing to the existence of the DSR in the range equation. There are several ways to overcome this difficulty, and they are discussed further in Section 3.3. Nevertheless, focusing the data in either the range or azimuth direction produces a sinc-like function, since the raw data is bandlimited in range by the FM signal range bandwidth and bandlimited in Doppler by the Doppler bandwidth of the point target. The product of sinc functions in range and azimuth produces a two-dimensional sinc-like pulse [17] called the impulse response.

2.4  Impulse Response

A SAR system is a linear system that can be characterized by its impulse response. The impulse response is the output of a system when an impulse is supplied at the input. For a SAR system, the ground can be considered to consist of an infinite number of infinitesimally small point targets, each with a different amplitude and phase. The acquired data are the sum of the signals from all of the targets. Each infinitesimally small point target can be thought of as an impulse. SAR processing essentially produces a two-dimensional, sinc-like pulse that is an estimate of each point target.
The SAR system is characterized by the quality of the impulse response. In this section, some important quality measurements of the point target response for the bistatic configuration are discussed.

2.4.1  Quality Measurement for an Impulse Response Function

It is informative to examine a two-dimensional representation of the FT of the raw signal, S_2df(f_τ, f_η), and the point target response in the time domain. Figure 2.5(a) shows the two-dimensional frequency response of a point target imaged at broadside and Figure 2.5(b) shows its focused impulse response. Figure 2.5(c) shows the two-dimensional frequency response of a point target imaged at a squint angle and Figure 2.5(d) its focused impulse response. In both cases, the region of support in the frequency domain is bandlimited by the range bandwidth of the range pulse and by the Doppler bandwidth. Focusing by matched filtering produces an impulse response with a two-dimensional, sinc-like shape. For configurations where the antennas point at broadside, the region of support of the image spectrum is approximately rectangular and the sidelobes of the impulse response are parallel to the range and azimuth directions. For bistatic configurations where the antennas are squinted, the region of support of the image spectrum is approximately a rotated rectangle. This means the range and azimuth sidelobes are at an angle, and the pulse quality parameters, or quality metrics, are measured along different directions from the broadside case. Figure 2.5 shows a typical impulse response for both cases. The following are some important quality metrics that can be measured from an impulse response [17]:

• Impulse Response Width (IRW) - The impulse response width defines the width of the main lobe of the impulse response. The width is measured 3 dB below the
The width is measured 3 dB below the 28  broadside target point spread function  2D spectrum of broadside target p  120 100  range frequency samples  100 f  20 40 60 80 100 120 range frequency samples t  120  squinted target point spread function  2D spectrum of broadside target  20 40 60 80 100 range frequency samples f  20 40 60 80 100 range frequency samples T  120  (au  120  Figure 2.5: Impulse responses of broadside imaged targets. peak value. In SAR processing, this is referred to as image resolution. The units for image resolution are samples although it can also be expressed in spatial measurement. Section 2.4.2 discusses this in more detail. i Peak Sidelobe Ratio (PSLR) - The peak sidelobe ratio dB is the difference between the main lobe and largest sidelobe. A high PSLR will contribute false targets and sidelobes of a target with stronger returns may mask the returns of weaker targets. Without weighting, a uniform spectrum will produce a PSLR of -13dB. This could be too high in practice. Generally a PSLR of -20dB would be required. A tapered window can be applied on the processed spectrum in exchange for a lower resolution [61]. 29  • Integrated Sidelobe Ratio (ISLR) - The integrated sidelobe ratio is the energy in the sidelobes of the point spread function to the energy in the main lobe. The ISLR is often measured as two one-dimensional parameters in range or azimuth direction. ISLR is an important metric in a low contrast scene. Typical ISLR is about -17dB with the main lobe limits defined as between the null-to-null region. ISLR should be kept low to prevent the sidelobe energy from the stronger target from spilling over and masking out weaker targets.  2.4.2  Bistatic Resolution  Resolution is defined as the 3dB IRW of the impulse response of the system, measured in a physical dimension, such as angle, time or spatial units. 
In this thesis, the resolution is defined in spatial units in the range direction or in the azimuth direction, projected onto the ground plane. In monostatic SAR, the azimuth direction is aligned with the relative platform velocity vector. Thus a focused SAR image can be converted to a two-dimensional image of the target by a trivial rescaling of the Doppler coordinates to cross range coordinates [5]. Resolutions in slant range and azimuth are stated in spatial units. Because there are two platform velocities in the bistatic case, the azimuth direction becomes a more ambiguous term, and resolution in spatial dimensions becomes difficult to define. A simple way to resolve this problem is to measure the resolution in slant range time and azimuth time. This gives a straightforward way to measure the quality of the bistatic point target response. Nevertheless, the definitions of bistatic range resolution and bistatic azimuth resolution in spatial units are still important from a user's point of view. Several papers have been written about bistatic resolution. Earlier works by various authors dealt with the traditional bistatic radar resolution defined in range, Doppler and angle using geometrical methods [5,10,62,63]. A vectorial gradient method to define bistatic SAR range resolution and bistatic SAR Doppler resolution is given in [64]. In [65], a similar approach was used and similar results were derived. The vectorial gradient method provides a more consistent approach to deriving the resolution, without the need for the approximations used in earlier works. A more generalized approach to resolution analysis using the ambiguity function is discussed in [66]. Nevertheless, the gradient analysis approach is sufficient for the general bistatic SAR geometry and is used in our discussion in Section 2.4.3 to Section 2.4.5.

2.4.3  Range Resolution

Resolution is highly dependent on the geometry of a bistatic configuration.
The range resolution for a bistatic configuration can be defined by using the vector gradient of differential calculus [60]. The instantaneous slant range is the sum of the range from the transmitter to an arbitrary target and from the target back to the receiver. Iso-range contours are the loci of point targets with the same range. For a bistatic configuration, the iso-ranges are ellipsoids with the transmitter and receiver as foci. These contours satisfy the bistatic range equation

    R(η) = R_T(η) + R_R(η) = constant                                   (2.22)

where R_T(η) and R_R(η) are the ranges from the point target to the transmitter and receiver positions respectively. The slant range can be treated as a scalar and a level surface. The vector gradient ∇R(η) is defined as

    ∇R(η) = (∂R/∂x) x̂ + (∂R/∂y) ŷ + (∂R/∂z) ẑ                           (2.23)

Geometrically, ∇R(η) is a vector passing through the angular bisector of the bistatic angle defined in Figure 2.1. It can be shown [see Appendix A.1] that the vector gradient ∇R(η) is given by

    ∇R(η) = −( u_t + u_r )                                              (2.24)

where u_t is the unit vector from the point target to the transmitter and u_r is the unit vector from the point target to the receiver. The vector gradient of the slant range gives the direction of the maximum change in slant range, and thus ∇R(η) defines the direction of the range resolution. The equivalent range resolution in the ground plane is given by the projection of this vector onto the ground plane (xy plane). The unit vector is given by

    u_g = Γ_xy ∇R(η) / | Γ_xy ∇R(η) |                                   (2.25)

where Γ_xy = I − ẑ ẑᵀ is the operator for the projection of a vector onto a plane and I is the identity matrix. As shown in (2.20), a signal with bandwidth B_r can be compressed into a pulse inversely proportional to the signal bandwidth. The width of this pulse in time units is

    δτ = γ_rg / B_r                                                     (2.26)

where γ_rg is the broadening due to spectrum weighting. The ground range resolution δp_rg is the distance between two point targets in the direction of u_g that causes a difference of δτ in the arrival time and a slant range difference of δR = c δτ.
The ground range resolution  5p is the distance between two point targets in the direction of u that causes difference r  g  in the arrival time of 8T and the slant range difference of SR = C5T. 32  Assume an arbitrary unit vector in the ground plane u. The rate of change of the bistatic range at any point along the unit vector u is given by the directional derivative [60] and is defined by A/?  ^  = u-VR(r,)  (2.27)  where the rate of change is given by the change in the bistatic range 5R over the change in distance in the direction of the unit vector u. The rate of change is greatest when u = u . 9  The 3dB range resolution is given by 5R = C&T. Rearranging, we get  5 p r  u •  =  g  Vi?(n)  =  B \T yVR(r,)\ T  ( 2  X  '  2 8 )  The nominal monostatic range resolution in the slant range plane can be derived from the bistatic range resolution formulation given in (2.28) by setting u = u in t  r  (2.24). The range resolution for the monostatic case is given by 5o  =  = 7r (  C7rg  g  B \VR( )\ T  V  77=0  2J3,  )  (2.29)  The monostatic range resolution is consistent with that derived in [17]. An important observation to make is that range resolution depends only on directions from the scatterer to the platforms and not the slant range distances. This also implies that the range resolution is dependent on the bistatic angle, /?. A larger bistatic angle would mean the denominator is smaller in (2.28) and hence a poorer resolution. The best range resolution is obtained in the monostatic case, where (5 = 0.  33  2.4.4  Doppler and Lateral Resolution  The Doppler resolution can be arrived at using a similar method as the derivation for range resolution. The Doppler contours satisfy the following equation /d(") = ~\ (v (n) • u + V (n) • u ) = constant T  t  R  (2.30)  r  where f_ is the Doppler frequency, A is the wavelength, VT(T?) and VR(T?) are instantaneous velocity vectors of the transmitter and receiver respectively. 
The gradient of this scalar is given by

    ∇f_d = (∂f_d/∂x) x̂ + (∂f_d/∂y) ŷ + (∂f_d/∂z) ẑ                      (2.31)

As shown in Appendix A.2, the vector gradient can be written as

    ∇f_d = −(1/λ) { [ V_T − (V_T·u_t) u_t ] / R_T + [ V_R − (V_R·u_r) u_r ] / R_R }    (2.32)

The Doppler resolving capability depends on the integration time (exposure time) T_a. The width of the compressed pulse in frequency units is

    δf_d = γ_az / T_a                                                   (2.33)

where γ_az is the broadening due to Doppler spectrum weighting. The Doppler resolution is measured along the vector gradient ∇f_d, and the equivalent direction in the ground plane is given by the projection of this vector onto the ground plane (xy plane). The unit vector is given by

    v_g = Γ_xy ∇f_d(η) / | Γ_xy ∇f_d(η) |                               (2.34)

The Doppler resolution can be defined using the directional derivative along the unit vector v_g. The directional derivative is given by

    δf_d / δp = v_g · ∇f_d(η)                                           (2.35)

For two point targets separated in Doppler by δf_d, the ground separation δp_az along the Doppler direction is measured along v_g. This ground separation is referred to as lateral resolution in [65] and Doppler resolution in [64]. Generally, Doppler resolution is referred to in frequency units, therefore lateral resolution seems to be the more appropriate term in the spatial domain. Combining (2.35) and (2.33), the lateral resolution is defined as

    δp_az = γ_az / ( T_a | Γ_xy ∇f_d(η) | )                             (2.36)

The nominal monostatic azimuth resolution found in [48] can be derived from the bistatic azimuth resolution formulation given in (2.36) by setting u_t = u_r and V_T = V_R in (2.32). The azimuth resolution in the slant range plane is given by

    δp_az = γ_az / ( T_a | ∇f_d(η) | ) |_{η=0} = λ γ_az R_s / ( 2 V T_a sin(θ_sq) )    (2.37)

where V is the velocity of the monostatic platform such that V = V_T = V_R, R_s is the monostatic slant range such that R_s = R_Tcen = R_Rcen, and θ_sq is the squint angle of the monostatic system.
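The gradient formulations above lend themselves directly to numerical evaluation. The sketch below (illustrative code, not from the thesis; all geometry, bandwidth and timing values are invented assumptions) evaluates the ground range resolution via (2.24) and (2.28) and the lateral resolution via (2.32) and (2.36) for one bistatic snapshot.

```python
import numpy as np

c = 3.0e8; lam = 0.03
B_r = 100e6; gamma_rg = 1.0     # range bandwidth, rectangular weighting
T_a = 1.0;  gamma_az = 1.0      # exposure time, rectangular weighting

def unit(v):
    return v / np.linalg.norm(v)

def resolutions(tx, vt, rx, vr, tgt):
    u_t = unit(tx - tgt); R_T = np.linalg.norm(tx - tgt)
    u_r = unit(rx - tgt); R_R = np.linalg.norm(rx - tgt)
    grad_R = -(u_t + u_r)                                       # (2.24)
    grad_fd = -((vt - np.dot(vt, u_t)*u_t)/R_T +
                (vr - np.dot(vr, u_r)*u_r)/R_R) / lam           # (2.32)
    proj = np.array([1.0, 1.0, 0.0])                            # Gamma_xy: zero z
    dp_rg = c*gamma_rg / (B_r * np.linalg.norm(proj*grad_R))    # (2.28)
    dp_az = gamma_az / (T_a * np.linalg.norm(proj*grad_fd))     # (2.36)
    return dp_rg, dp_az

# illustrative geometry: target at the origin, both platforms moving in x
tgt = np.array([0.0, 0.0, 0.0])
tx = np.array([0.0, -5e3, 10e3]); vt = np.array([200.0, 0.0, 0.0])
rx = np.array([0.0, -8e3,  9e3]); vr = np.array([200.0, 0.0, 0.0])
dp_rg, dp_az = resolutions(tx, vt, rx, vr, tgt)
print(dp_rg, dp_az)
```

The bistatic ground range resolution comes out coarser than the monostatic slant-range bound c·γ_rg/(2B_r) = 1.5 m, as expected from the nonzero bistatic angle and the ground-plane projection.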
2.4.5  Cross Range Resolution

When a SAR image is registered to the ground plane, normally a rectangular grid is used and the ground resolution is described using orthogonal axes. The problem with using lateral or Doppler resolution in the bistatic case is that the ground range resolution direction u_g and the ground lateral resolution direction v_g are not always orthogonal. (They are orthogonal for the special case of a monostatic configuration in the SAR image plane.) Cardillo [64] got around this problem by introducing the bistatic cross range resolution. Its direction is orthogonal to the bistatic range direction and it produces a cell of area equivalent to that described by the slant range resolution and lateral resolution vectors, as shown in Figure 2.6.

[Figure 2.6: Defining the bistatic cross range resolution, showing iso-range lines and iso-Doppler lines.]

The unit cell formed by the ground range resolution vector δp_r u_g and the ground lateral resolution vector δp_az v_g is shown in Figure 2.6. The cross range vector δp_cr w_g is orthogonal to the ground range vector and forms a rectangular unit cell with the same area (shaded area). The unit area is given by

    δA = δp_r δp_az / sin(θ_g)                                          (2.38)

where θ_g is the angle between u_g and v_g, given by θ_g = cos⁻¹( u_g · v_g ). The cross range resolution is given by

    δp_cr = δp_az / sin(θ_g)                                            (2.39)

For the monostatic case, where u_t = u_r and V_T = V_R, the dot product of the gradients ∇R(η) and ∇f_d is null. Thus, the gradients of range and range rate are perpendicular to each other, and the lateral resolution and the cross range resolution are identical, since sin(θ_g) = 1.

Chapter 3

Bistatic SAR Processing Algorithms

3.1  Introduction

In this chapter, a review of several existing bistatic SAR processing algorithms is presented. The first few algorithms are time-domain-based algorithms. They have excellent phase preservation and can be used for any bistatic geometry.
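As a concrete (toy) illustration of such time-domain processing, the sketch below simulates range-compressed echoes of a single point target for a bistatic geometry and focuses one output point at a time: each pulse is interpolated at the bistatic delay, the carrier phase is removed, and the result is summed over the aperture. The detailed formulation appears in Section 3.2; every number, track and name here is an invented assumption, and the phase-compensation sign is chosen so that the simulated phases cancel.

```python
import numpy as np

c = 3.0e8; f0 = 9.65e9; B = 100e6; fs = 4*B    # toy carrier and bandwidth

tx = lambda eta: np.array([200.0*eta, -3000.0, 3000.0])   # transmitter track
rx = lambda eta: np.array([150.0*eta,  3000.0, 2000.0])   # receiver track
R = lambda eta, p: np.linalg.norm(tx(eta) - p) + np.linalg.norm(rx(eta) - p)

target = np.array([0.0, 0.0, 0.0])
etas = np.linspace(-0.5, 0.5, 128)              # slow time of each pulse
taus = np.arange(7000.0/c, 9000.0/c, 1.0/fs)    # fast-time window

# raw data simulated directly in range-compressed form:
# s_rc = sinc(B (tau - R/c)) exp(-j 2 pi f0 R / c)
src = np.array([np.sinc(B*(taus - R(e, target)/c)) *
                np.exp(-2j*np.pi*f0*R(e, target)/c) for e in etas])

def focus_point(point):
    acc = 0.0
    for i, e in enumerate(etas):
        td = R(e, point) / c
        # interpolate each pulse at the bistatic delay of this output point
        val = np.interp(td, taus, src[i].real) + 1j*np.interp(td, taus, src[i].imag)
        acc += val * np.exp(2j*np.pi*f0*td)     # remove carrier phase, sum
    return abs(acc)

on = focus_point(target)
off = focus_point(target + np.array([5.0, 5.0, 0.0]))
print(on, off)   # the on-target response is much stronger than an offset point
```

Because each output point repeats the full interpolate-and-sum loop over the aperture, the cost per pixel is proportional to the number of pulses, which is exactly the computational burden discussed next.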
However, these algorithms have high computational loads, as they form the image by processing one point at a time. Efficiency can be improved by processing in the frequency domain. Many accurate monostatic SAR algorithms achieve block efficiency by processing in the frequency domain. Such algorithms are usually derived from the point target spectrum [21-27]. It has been shown by several authors that bistatic data cannot be focused by simply assuming a monostatic equivalent in the middle of the baseline, especially for bistatic configurations with appreciable baseline separation [16,31]. Thus, new bistatic algorithms have to be derived in order to focus these bistatic configurations. There are three groups of frequency based algorithms available to focus bistatic configurations. The first group uses numerical methods to solve for the double square root function. The second group has a pre-processing stage that transforms bistatic data into equivalent monostatic data, so that the image can then be processed using traditional monostatic methods. The third group formulates an approximate bistatic point target spectrum and uses this spectral result to derive an algorithm to focus the data. Most available algorithms are limited to focusing data that are azimuth-invariant. This restricts the platforms to flight paths with equal velocity vectors. The NLCS algorithm distinguishes itself as an algorithm that is able to handle azimuth-variant cases. The algorithm was developed initially to handle highly squinted, short-wavelength monostatic cases, and bistatic cases where a transmitter imaging at broadside is paired with a stationary receiving platform [44]. The NLCS algorithm is investigated in this chapter, and the concepts introduced here provide the framework for discussion in later chapters.
3.2  Time Domain Matched Filtering Algorithms

The most direct way of forming the image from the raw signal is to make use of a two-dimensional replica of the echo signal and correlate it with the point target return for every point in the imaged scene [20]. Alternatively, one can use the Back-Projection Algorithm (BPA), a faster method of implementing the two-dimensional correlation. The key advantages of these methods are their accuracy and their ability to produce an accurate image for any bistatic configuration. Each point is formed individually and maps directly to the ground coordinates without the need for an additional registration step. Furthermore, computation speed can be improved by parallelizing the process without any loss in phase accuracy or resolution. However, such methods suffer from being computationally intensive. These methods do not have any stage at which an azimuth signal is available, since each point target is formed directly. This makes it difficult to incorporate an autofocus stage into these algorithms for real-time operation, since autofocus algorithms often apply phase error compensation in the azimuth direction [48]. Without this autofocus stage, the motion measurements of the navigation unit need to be very accurate, making the system expensive. One way to apply autofocus is to perform an inverse azimuth FT, estimate the phase error, apply the phase correction, and reapply the azimuth FT. However, this further increases the computational burden of the algorithm. Nevertheless, these algorithms are often very useful, as they can be used as benchmarks to compare the accuracies of processing algorithms. They are also used for offline data products that do not need real-time processing.

3.2.1  Time Domain Correlation Algorithm

The mathematically ideal solution for bistatic image formation is a two-dimensional matched filtering process.
The Time Domain Correlation Algorithm (TDC), [20] and [67], is a direct matched filtering of the baseband signal. The matched filter is the conjugate of the exact replica of the echo signal, and therefore the algorithm gives the optimum reconstruction [51]. For each point (x_n, y_n) the estimated reflectivity is given by

    σ̂(x_n, y_n) = ∫∫ s(τ, η) p*( τ − R(η; x_n, y_n)/c ) exp{ −j2π f₀ R(η; x_n, y_n)/c } dτ dη    (3.1)

where σ̂ is an estimate of the reflectivity of the point target at (x_n, y_n), ignoring amplitude effects. This process scales with an order of O(N⁴), where N × N is the number of pixels in the image. The Back-Projection Algorithm (BPA) is often used in place of the TDC, as it is a more efficient implementation of the matched filtering.

3.2.2  Back-Projection Algorithm

The BPA is derived from a computer-aided tomography (CAT) technique used in medical imaging [68]. Earlier works on the BPA were used to focus data in monostatic or bistatic spotlight mode [68-72]. The BPA has also been applied successfully to bistatic stripmap SAR data [14,73]. The BPA has many of the attributes of the TDC, as it can be derived directly from the TDC algorithm [48,51]. To show how the BPA can be derived from the time domain correlation operation, a range compression is first performed on the signal to give

    s_rc(τ, η) = s(τ, η) ⊗ p*(−τ)                                       (3.2)

Expanding this convolution gives

    s_rc(τ, η) = ∫ s(ς, η) p*(ς − τ) dς                                 (3.3)

Replacing τ with R(η; x_n, y_n)/c in (3.3), the integral becomes

    s_rc( R(η; x_n, y_n)/c , η ) = ∫ s(ς, η) p*( ς − R(η; x_n, y_n)/c ) dς    (3.4)
Note n  n  that R(rj;x ,y )/c n  changes for each pulse, therefore  n  obtain the expression s (R(rj; x ,y )/c,rj) TC  n  n  s (T,r]) RC  must be interpolated to  at each slow time rj. Substituting (3.4) into  the time domain algorithm, (3.1) gives the reconstructed reflectivity. -/ N f o-{x ,y ) = J s n  n  (R(v>*n,yn) \ |\0 R{r];x ,y ) ^ ,ryjexp -J2TT/ °-- t f  rc  n  (3.5)  n  0  The diagram below shows the steps in the BPA  s(x,n)-  s (i:,rO  Select point target ( x , y ) to focus, interpolate to get  rc  Convolve with  P*K)  n  Range Compression  n  sJR(H)'c,n]  Repeat for next  - exp{-j2nf R(n )/c}  point  ^( n.Vn) X  0  Integrate overn  Figure 3.1: Block diagram of BPA. The BPA is able to maintain a high accuracy and phase coherency because it is derived directly from time domain matched filtering method. In terms of computational load, the BPA requires ./V operations to do each interpolation for each pixel. Therefore, for N x N pixels, the computational load has an order of 0(JV ). Although 3  it is faster than the time domain method, an 0(N ) algorithm may not be practical 3  for real-time implementation for many applications.  42  3.3  Frequency Domain Algorithms  The existence of a DSR function in the range equation of general bistatic case makes it difficult to find a simple analytical solution that expresses the azimuth time as a function of azimuth frequency. Hence, it is difficult to derive an analytical solution to the bistatic point target spectrum. Without an analytical point target spectrum, it is difficult to derive the bistatic equivalent for some of the more popular accurate focusing algorithms found in monostatic SAR. Despite this difficulty, various techniques have been developed within the last few years to focus bistatic data, each with their own merits and limitations. 
Most of the frequency based algorithms can be classified under a few general methods, based on how they handle the DSR term:

• Numerical methods, where the DSR is solved numerically;

• Analytical point target spectrum methods, wherein the DSR is solved directly to give an analytical point target spectrum, usually with some approximations; and

• Pre-processing techniques, wherein a pre-processing procedure is used to convert bistatic data with a DSR range history into data with a single hyperbolic range history. The signal can then be focused using a traditional monostatic algorithm.

In the next few sections, each of these approaches is described and some of the papers most relevant to this thesis are discussed further.

3.4  Numerical Methods

The first approach solves the DSR numerically. A number of papers are based on the popular omega-K algorithm (ωKA) [16,34-37]. This algorithm processes the raw signal in the two-dimensional frequency domain. Bamler and Boerner [31] proposed a focusing algorithm that replaces the analytical SAR transfer functions with numerical equivalents. However, their algorithm is restricted to handling the azimuth-invariant case and becomes fairly computationally intensive when there is a higher bistatic degree in the data [31]. Most efficient is the ωKA, which is excellent for processing wide-aperture and highly squinted monostatic cases. In the next section, the key ideas of the ωKA are discussed.

3.4.1  Omega-K Algorithm

In order to review the ωKA, both the monostatic ωKA and the bistatic ωKA will be discussed. The block diagram in Figure 3.2 shows the processing steps in the ωKA. The ωKA first performs a two-dimensional FT to transform the SAR signal data into the two-dimensional frequency domain. This is followed by the two key steps of the algorithm: a reference function multiply and the Stolt interpolation [25]. Finally, a two-dimensional IFT is performed to transform the data back to the time domain, i.e., the SAR image domain.
[Figure 3.2: Block diagram of the ωKA: raw signal data → range and azimuth FT → reference function multiply → Stolt interpolation → range and azimuth IFT → compressed image data.]

The ωKA is often described in the spatial frequency domain because of its origin in seismic image reconstruction [26,27,48,74,75] and because this is a more compact way of representing the processing analytically [it can also be explained using signal processing principles [17]]. To derive the analytical monostatic spatial spectrum, consider the signal model in Figure 3.3.

[Figure 3.3: Signal model used to derive the ωKA: a point target at (x_n, y_n), the platform flight path and the integration time.]

The two-dimensional range compressed spatial domain signal for an arbitrary point target located at (x_n, y_n) in the slant range plane can be written as

    s_u(τ, u) = p_r[ τ − 2R_u(u)/c ] w_az(u) exp{ −j 4π f₀ R_u(u) / c }    (3.6)

where the range equation (one-way) is given by

    R_u(u) = √( x_n² + (y_n − u)² )                                     (3.7)

This equation is similar to (2.7), except that the slant range is expressed in azimuth spatial units, u. This spatial unit has a corresponding spatial frequency, K_u. Performing a range FT and an azimuth FT, it can be shown that the monostatic spectrum can be written in the "ω−k domain" as [67]

    S_u(K, K_u) = ∫ exp[ −j 2K √( x_n² + (y_n − u)² ) − j K_u u ] du    (3.8)

ignoring the effects of the range and azimuth envelopes and defining the wavenumbers K and K_u as

    K = 2π ( f₀ + f_τ ) / c                                             (3.9)

    K_u = 2π f_u                                                        (3.10)

where f_u is the azimuth spatial frequency, the transform pair of u. Applying the POSP, the spatial variable u can be expressed as

    u = y_n − x_n K_u / √( 4K² − K_u² )                                 (3.11)

Using this relation, the spectrum in the ω−k domain can be derived,

    S_u(K, K_u) = exp[ −j ( √( 4K² − K_u² ) x_n + K_u y_n ) ]           (3.12)

A reference signal for a point target at (X_c, Y_c) is then applied to the spectrum. This operation can be viewed as a shift of the origin [37,76], as a reference function multiply (RFM) or bulk focusing step [17], or as a "matched filtering" term [35].
This RFM step reduces the bandwidth requirement of the signal. The step also performs a bulk compression, i.e., any point target with the same closest range of approach as the reference point is correctly focused; other points are partially focused. The signal after the RFM is given by

    S_c(K, K_u) = exp{ −j [ √( 4K² − K_u² ) (x_n − X_c) + K_u (y_n − Y_c) ] }    (3.13)

The next operation is a frequency mapping step called the Stolt interpolation [25] or a change of variables [77]. The main idea of this step is to linearize the range and azimuth spatial frequencies using a "change of variables" [78]. The spectrum becomes

    S_x(K_x, K_y) = exp{ −j [ K_x (x_n − X_c) + K_y (y_n − Y_c) ] }     (3.14)

with the change of variables given by

    K_x = √( 4K² − K_u² )                                               (3.15)

    K_y = K_u                                                           (3.16)

In the ω−k domain, the ω and K_u terms are equally spaced samples, as they are the FT pairs of the equally sampled spatial domain fast time τ and the spatial unit u respectively. The spatial frequency K_u is equally sampled, but the square root term in (3.13), √(4K² − K_u²), is not equally sampled. The Stolt interpolation step creates a linear frequency map in K_x and K_y and fills the map by interpolating the phase signal from the ω−k domain. This linearization of the phase is the key step of the algorithm, and it effectively focuses the remainder of the targets. In other words, it performs the differential focusing step. A geometrical interpretation of the phase mapping is given in [79]. Once the phase is linearized, the focused point target in the spatial domain can be obtained by performing a two-dimensional IFT of (3.14), giving

    ε(x_n, y_n) = p_x(X_c − x_n) p_y(Y_c − y_n)                         (3.17)

where the effects of the finite region of support give the sinc-like envelopes p_x(·) and p_y(·). The shape and orientation of the two-dimensional sidelobes are dictated by the region of support in the K_x K_y domain [67].
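The Stolt mapping of (3.15)-(3.16) can be sketched numerically for a single azimuth wavenumber bin. In the sketch below (illustrative code with toy numbers, not thesis parameters), the RFM'd phase of (3.13), which is curved as a function of K, is resampled onto a uniform K_x grid and becomes exactly linear, with slope −(x_n − X_c):

```python
import numpy as np

xn, Xc = 5010.0, 5000.0              # target and reference range positions (toy)
K = np.linspace(195.0, 205.0, 401)   # uniformly sampled radial wavenumber
Ku = 40.0                            # one azimuth wavenumber bin

# the (3.13) phase, sampled on the nonuniform grid Kx = sqrt(4K^2 - Ku^2)
Kx_nonuniform = np.sqrt(4.0*K**2 - Ku**2)
phase = -Kx_nonuniform * (xn - Xc)

# Stolt interpolation: resample the phase onto a uniform Kx grid
Kx_uniform = np.linspace(Kx_nonuniform[0], Kx_nonuniform[-1], 401)
phase_mapped = np.interp(Kx_uniform, Kx_nonuniform, phase)

# residual after removing the best linear fit: curved in K, linear in Kx
res_before = np.abs(phase - np.polyval(np.polyfit(K, phase, 1), K)).max()
res_after = np.abs(phase_mapped -
                   np.polyval(np.polyfit(Kx_uniform, phase_mapped, 1),
                              Kx_uniform)).max()
slope = np.polyfit(Kx_uniform, phase_mapped, 1)[0]   # ~ -(xn - Xc)
print(res_before, res_after, slope)
```

The nonzero residual before mapping is the phase curvature that would defocus targets away from the reference range; after the change of variables, only the linear term remains and the IFT yields a focused response.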
3.4.2 Numerical Method of Handling DSR

The main difference between the bistatic ωKA and the monostatic ωKA is the difficulty of solving the double square root equation. In other words, for the general bistatic geometry the inversion step in (3.11) cannot be performed, since an analytical closed-form solution for $u(K, K_u)$ cannot be found. Existing bistatic ωKA implementations rely on numerical methods [80] to resolve this inversion step; the solution for each $u$ is found numerically. In the bistatic configuration, there are two sets of slant range and azimuth parameters. These have to be transformed into one set of spatial parameters so that each point target can be represented by one pair of spatial parameters. A fixed baseline ensures that the azimuth parameters can be represented by a single azimuth parameter. Indeed, the fixed-baseline restriction is adopted in the existing ωKAs of [35] and [81]. Both methods use fairly similar processing, except for how the range parameters are represented. In Appendix D, a more detailed treatment of the bistatic ωKA of [35] is given.

The ωKA requires interpolation and numerical iteration, and both of these operations tend to be fairly time-consuming [81]. Since the algorithm operates in the two-dimensional frequency domain, it is not able to cope with rapid changes in Doppler centroid as easily as range-Doppler-based algorithms such as the RDA. The existing bistatic ωKAs, [35] and [81], are only able to handle the azimuth-invariant case. In [36], a method that can focus the general bistatic case was presented. It relies on two sets of mappings instead of the usual one, which makes the algorithm even more computationally intensive. Finally, it is often necessary to evaluate and analyze the performance, accuracy and limitations of an algorithm under different geometries. Analytical methods can determine these requirements more readily than numerical methods. In the next two sections, analytically based methods are investigated.
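The numerical inversion described above can be illustrated directly: for each wavenumber pair, the stationary azimuth position of the double-square-root phase is found by a one-dimensional root search. The sketch below assumes a flat geometry with per-platform closest-range/azimuth coordinates $(x_T, y_T)$ and $(x_R, y_R)$; the bisection solver and all names are illustrative, not the implementation of [80]:

```python
import math

def bistatic_stationary_u(K, Ku, xT, yT, xR, yR, lo=-1e4, hi=1e4, tol=1e-9):
    """Numerically invert the bistatic stationary-phase condition:
    find u where the derivative of the double-square-root phase
    -K*(R_T(u) + R_R(u)) - Ku*u vanishes (bisection sketch)."""
    def dphi(u):
        dT = (u - yT) / math.hypot(xT, u - yT)   # d R_T / du
        dR = (u - yR) / math.hypot(xR, u - yR)   # d R_R / du
        return K * (dT + dR) + Ku                # zero at the stationary point
    a, b = lo, hi
    while b - a > tol:
        m = 0.5 * (a + b)
        if dphi(a) * dphi(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# Illustrative bistatic geometry (meters); one root search per (K, Ku) pair
u = bistatic_stationary_u(100.0, 40.0, 5000.0, 1000.0, 6000.0, 1200.0)
```

In the monostatic special case ($x_T = x_R$, $y_T = y_R$) the numerical solution reduces to the closed form (3.11), which makes a convenient sanity check; in the general case, repeating this search for every frequency sample is what makes the numerical ωKA slow.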
3.5 Analytical Point Target Spectrum

The second approach is to solve for the point target spectrum analytically. The point target spectrum is an important result, as it is the precursor to deriving other image-focusing algorithms [17]. Few papers discuss the point target spectrum for the general bistatic case [29,30,33]. An approximate point target spectrum is derived in [29]. This spectrum, also known as the Loffeld Bistatic Formula (LBF), has been used to derive a few bistatic algorithms [42,82-84]. Evidently, the accuracy of these algorithms is limited by the accuracy of the analytical spectrum. The derivation of this point target spectrum is discussed in the next section. In [33], a power series method for the general bistatic case was formulated. The spectrum is derived using the method of series reversion [85], and it gives the exact formulation of the stationary point in the form of a power series. The accuracy of this method is "scalable", as it depends on the number of terms used in the power series. This point target spectrum is discussed in Chapter 4.

3.5.1 Point Target Spectrum - Loffeld Bistatic Formula

The LBF is derived by applying a Taylor series [86] expansion to the phase function. The phase terms are expanded around the individual stationary points of the transmitter phase and the receiver phase. An approximate solution to the bistatic stationary point can be determined by considering the expansion up to the second-order phase term.
Starting with the signal after range compression, $s_{rc}(\tau,\eta)$, in (2.21), a range FT of the signal gives

  $S_{rc}(f_\tau, \eta) = W_\tau(f_\tau)\, w_{az}(\eta)\, \exp\!\left\{-j\,2\pi\,\dfrac{(f_0+f_\tau)}{c}\left[R_T(\eta) + R_R(\eta)\right]\right\}$   (3.18)

Next, an azimuth FT is performed to get the two-dimensional bistatic spectrum,

  $S_{2df}(f_\tau, f_\eta) = W_\tau(f_\tau) \displaystyle\int w_{az}(\eta)\, \exp\!\left\{-j\left[\phi_T(\eta) + \phi_R(\eta)\right]\right\} d\eta$   (3.19)

where

  $\phi_T(\eta) = 2\pi\,\dfrac{(f_0+f_\tau)}{c}\, R_T(\eta) + \pi f_\eta\, \eta$   (3.20)

  $\phi_R(\eta) = 2\pi\,\dfrac{(f_0+f_\tau)}{c}\, R_R(\eta) + \pi f_\eta\, \eta$   (3.21)

The monostatic phase functions are expanded about their individual monostatic stationary points up to the second-order phase term, and the integral in (3.19) becomes

  $S_{2df}(f_\tau, f_\eta) \approx W_\tau(f_\tau)\, W_{az}(f_\eta)\, \exp\!\left\{-j\left[\phi_T(\eta_T) + \phi_R(\eta_R)\right]\right\} \displaystyle\int \exp\!\left\{-j\,\dfrac{\ddot\phi_T(\eta_T)(\eta-\eta_T)^2 + \ddot\phi_R(\eta_R)(\eta-\eta_R)^2}{2}\right\} d\eta$   (3.22)

where $\eta_T$ and $\eta_R$ are the solutions of the individual stationary phases of the transmitter and receiver phase histories, respectively. The first-derivative terms $\dot\phi_T(\eta_T)$ and $\dot\phi_R(\eta_R)$ are both zero, since $\eta_T$ and $\eta_R$ are stationary points of the transmitter and receiver phase histories; thus, the integral containing the linear phase terms vanishes. Using the fact that the sum of two quadratic functions is a shifted and scaled quadratic function [Section B.1], the remaining integral in (3.22), denoted by $I(f_\eta)$, can be shown to be

  $I(f_\eta) = \displaystyle\int \exp\!\left\{-j\left[\dfrac{\ddot\phi_T(\eta_T) + \ddot\phi_R(\eta_R)}{2}\,(\eta - \eta_b)^2 + \dfrac{\ddot\phi_T(\eta_T)\,\ddot\phi_R(\eta_R)}{2\left[\ddot\phi_T(\eta_T) + \ddot\phi_R(\eta_R)\right]}\,(\eta_T - \eta_R)^2\right]\right\} d\eta$   (3.23)

where

  $\eta_b = \dfrac{\ddot\phi_T(\eta_T)\,\eta_T + \ddot\phi_R(\eta_R)\,\eta_R}{\ddot\phi_T(\eta_T) + \ddot\phi_R(\eta_R)}$   (3.24)

Applying the POSP to (3.23), the approximate stationary phase is given by $\eta_b$ and the spectrum is given by (ignoring any amplitude variations)

  $S_{2df}(f_\tau, f_\eta) \approx W_\tau(f_\tau)\, W_{az}(f_\eta)\, \exp\{-j\,\Psi_1(f_\tau, f_\eta)\}\, \exp\{-j\,\Psi_2(f_\tau, f_\eta)\}$   (3.25)

where

  $\Psi_1(f_\tau, f_\eta) = \phi_T(\eta_T) + \phi_R(\eta_R)$   (3.26)

  $\Psi_2(f_\tau, f_\eta) = \dfrac{\ddot\phi_T(\eta_T)\,\ddot\phi_R(\eta_R)}{2\left[\ddot\phi_T(\eta_T) + \ddot\phi_R(\eta_R)\right]}\,(\eta_T - \eta_R)^2$   (3.27)

The first exponential term, $\Psi_1$, is known as the quasi-monostatic term, while the second exponential term, $\Psi_2$, is known as the bistatic deformation term.
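Given numerical values for the two monostatic stationary points and the second phase derivatives, the weighted-average stationary point (3.24) and the bistatic deformation term (3.27) are direct to evaluate. A small sketch with illustrative numbers (all values are made up for the example):

```python
def lbf_terms(eta_T, eta_R, ddphi_T, ddphi_R):
    """Weighted-average stationary point (3.24) and bistatic
    deformation phase (3.27) from the two monostatic stationary
    points and the second phase derivatives."""
    eta_b = (ddphi_T * eta_T + ddphi_R * eta_R) / (ddphi_T + ddphi_R)
    psi2 = ddphi_T * ddphi_R / (2.0 * (ddphi_T + ddphi_R)) * (eta_T - eta_R)**2
    return eta_b, psi2

# Equal curvatures: eta_b falls midway between the two stationary points
eta_b, psi2 = lbf_terms(eta_T=1.0, eta_R=-1.0, ddphi_T=4.0, ddphi_R=4.0)
```

Note that as $\eta_T \to \eta_R$ the deformation term vanishes, recovering the monostatic limit.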
The expanded forms of these results are given in Appendix B.2. The existence of the quasi-monostatic and bistatic phase terms in (3.26) and (3.27) suggests a possible two-step focusing approach: the removal of the bistatic deformation followed by the application of a quasi-monostatic focusing step [29]. Incidentally, this method overlaps with the pre-processing technique discussed in the next section.

3.6 Preprocessing Techniques - Converting Bistatic Data to Monostatic Data

This approach converts bistatic data to monostatic data using some pre-processing. The simplest approach to converting a bistatic configuration to a monostatic equivalent is to replace the two platforms with a monostatic platform right in the middle of the baseline. However, such a method has been shown to be inadequate in scenarios where there is an appreciable baseline separation [31,87]. Another technique involves using hyperbolic range functions with a modified velocity [31,88]. Such techniques have been used to accommodate orbital curvature in monostatic configurations through the use of an effective radar velocity [39,89]. The bistatic range equation is first expanded up to the second derivative, and a hyperbolic range equation is then fitted to it. These methods are valid for satellite-based cases where there is usually no appreciable squint; they do not work well for higher-squint cases. So far, these techniques have only been shown to be applicable to bistatic configurations where the baseline is fixed, so that the azimuth-invariance property holds. In the next section, an interesting pre-processing method using the so-called Rocca's "smile operator" is introduced. It is applicable in cases where both platforms are traveling on the same flight path with a constant baseline.

3.6.1 Dip MoveOut Algorithm

The Dip MoveOut (DMO) method introduced in [32] transforms bistatic acquisitions into monostatic ones.
This processing technique is a well-known method in the field of seismic reconstruction [38], where a typical source is accompanied by thousands of geophone receivers. Essentially, the elliptical locus of the reflectors is transformed into a monostatic one using the so-called Rocca's "smile operator". To derive the smile operator, consider the bistatic survey given in Figure 3.4. It consists of a source (transmitter) at point S that sends a seismic pulse, a geophone (receiver) at point R, and a dipping (sloping) layer represented by XY [90]. The total time required for the pulse to travel from the transmitter to the dipping layer and back to the geophone receiver is $t_b(\theta_d)$, with travel path given by line SY followed by line YR. If an equivalent monostatic source and receiver were located at the middle of the baseline, at point M, the time required for the pulse to travel to the dipping layer and back would be $t_m(\theta_d)$, with travel path given by line MX plus line XM.

Figure 3.4: Bistatic system geometry for computing the DMO operator: transmitter S, receiver R, equivalent monostatic platform M at the middle of the baseline, and dipping layer XY.

Using the geometry given in Figure 3.4, it can be shown that

  $t_b^2(\theta_d) = t_m^2(\theta_d) + \dfrac{4h^2}{c_e^2}\cos^2(\theta_d)$   (3.28)

where $2h$ is the baseline and $c_e$ is the speed of the pulse in the homogeneous medium. For this setup, the bistatic survey has a longer time delay than the monostatic survey, i.e., $t_b(\theta_d) > t_m(\theta_d)$. The small difference in time, or negative delay, between $t_b(\theta_d)$ and $t_m(\theta_d)$ is known as the DMO in seismic terminology,

  $t_{DMO}(\theta_d) = t_b\left\{1 - \sqrt{1 - \dfrac{4h^2\cos^2(\theta_d)}{c_e^2\, t_b^2}}\right\}$   (3.29)

Thus, by applying different negative delays as a function of dip, the bistatic survey can be transformed into a monostatic survey. To transform the seismic representation to a bistatic SAR model, the dipping layer is removed from the diagram and replaced with a point target at position Y.
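Equations (3.28) and (3.29) translate directly into a small numerical routine; the function below is a sketch (argument names are illustrative), where `h` is the half-baseline and `c_e` the propagation speed:

```python
import math

def dmo_delay(t_b, h, theta_d, c_e):
    """DMO time shift (3.29): the delay removed from the bistatic
    two-way time t_b to obtain its monostatic equivalent, for
    half-baseline h and dip angle theta_d (radians)."""
    ratio = 4.0 * h**2 * math.cos(theta_d)**2 / (c_e**2 * t_b**2)
    return t_b * (1.0 - math.sqrt(1.0 - ratio))

# Example: 2 s bistatic two-way time, 100 m half-baseline, seismic medium
shift = dmo_delay(2.0, 100.0, 0.2, 1500.0)
```

Subtracting this delay from $t_b$ recovers the monostatic time $t_m$ consistent with (3.28), and the delay goes to zero as the baseline shrinks.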
It can easily be observed that the bistatic survey is similar to a bistatic SAR system with a fixed baseline of $2h$, with a transmitter at point S and a receiver at point R. While the bistatic path is the same in both cases, the monostatic path is now the travel path given by line MY plus line YM. If the baseline is short compared to the bistatic range, the lengths of MY and MX will be quite close. Thus, we can approximate

  $t_b^2(\theta_{sq}) \approx t_m^2(\theta_{sq}) + \dfrac{4h^2}{c^2}\cos^2(\theta_{sq})$   (3.30)

Naturally, $c_e$ is now replaced by the speed of light, $c$, and the angle of the dip is replaced by the squint angle of the bistatic SAR system, $\theta_{sq}$. The smile operator can be decomposed into two terms: a migration operator and a phase operator [32]. Both are slowly range-varying operators, which depend on the bistatic range time. The migration operation is performed in the two-dimensional frequency domain or in the range Doppler domain. It shifts the range cell of the point target from the bistatic trajectory to the monostatic trajectory, while the phase operator compensates for the difference in phase between the monostatic and bistatic acquisitions. While this method is innovative in the sense that it allows monostatic algorithms to be used, it has a restrictive geometric configuration: it can only focus the bistatic configuration where both transmitter and receiver fly along the same track with the same velocity, which limits its usage.

3.7 The Non-Linear Chirp Scaling Algorithm

The NLCS algorithm has been shown to be an innovative way to focus monostatic and bistatic SAR images [44]. It is able to focus highly squinted monostatic images and has been demonstrated to work on the bistatic configuration where the transmitter images at broadside and the receiver is stationary. However, the NLCS algorithm is only effective for shorter-wavelength systems, where the effects of quadratic RCM (QRCM) on impulse response broadening can be ignored.
The initial step of the algorithm involves a linear range cell migration correction (LRCMC) process. The LRCMC causes point targets with different FM rates to be aligned in the same range gate; this makes the signal both range-variant and azimuth-variant, as in the case of bistatic SAR signals. An FM rate equalization step is then performed using non-linear chirp scaling to make the FM rate (the dominant phase component) azimuth-invariant during the processing stage. A matched filter is then applied to focus the data. Clearly, the NLCS is able to cope with image formation of bistatic signals, since it is able to handle range-variant and azimuth-variant signals. The next section explains the principles behind the NLCS and the limitations of the original algorithm.

3.7.1 Prelude to the NLCS Algorithm

Figure 3.5 illustrates the main steps taken in the monostatic NLCS algorithm. The first step of the algorithm is range compression. After range compression, the RCM of the target trajectories can be observed. In squinted geometries, the dominant term of the RCM is the linear component; the quadratic and higher-order components are usually negligible, especially for short-wavelength systems. LRCMC using linear interpolation is applied after range compression. The LRCMC step eliminates most of the RCM and the range/azimuth phase coupling, which facilitates the processing of high-squint cases. After LRCMC, targets with different FM rates (since they have different ranges of closest approach) fall into the same range gate. NLCS is applied to equalize the FM rates along each range cell by using a perturbation function. The perturbation function for a monostatic configuration is found to be a cubic function of azimuth time [44]. Once the azimuth FM rate is equalized for all range gates, azimuth compression can be carried out to focus the image.
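The effect of a cubic perturbation can be illustrated numerically: multiplying the azimuth signal by a phase $\exp(j\pi\alpha\eta^3)$ shifts the local FM rate of a target centered at $\eta_c$ by $3\alpha\eta_c$, which is the mechanism used to equalize FM rates along a range gate. A toy sketch — the quadratic-plus-cubic phase model and all numbers are illustrative, not the thesis's derivation of the perturbation coefficient:

```python
import numpy as np

def local_fm_rate(phase, eta, eta_c):
    """Estimate the local FM rate (second phase derivative / 2*pi)
    at azimuth time eta_c by finite differences."""
    d2 = np.gradient(np.gradient(phase, eta), eta)
    return d2[np.argmin(np.abs(eta - eta_c))] / (2.0 * np.pi)

eta = np.linspace(-1.0, 1.0, 2001)
Ka, alpha, eta_c = 100.0, 20.0, 0.5        # nominal FM rate, perturbation, target center
target_phase = np.pi * Ka * (eta - eta_c)**2        # quadratic phase about eta_c
perturbed = target_phase + np.pi * alpha * eta**3   # cubic perturbation

# The perturbation shifts the FM rate at eta_c by 3*alpha*eta_c
```

Because the shift grows linearly with the target's azimuth position, a single perturbation coefficient can cancel a linear variation of FM rate along the range gate.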
Figure 3.5: Original NLCS algorithm for the monostatic configuration: range compression, linear RCMC, non-linear chirp scaling, and azimuth compression transform the baseband SAR data into a focused image.

The NLCS is also able to focus the bistatic configuration where one platform is stationary and the other platform is moving in a straight path and imaging at broadside. The bistatic data for this case are both range-variant and azimuth-variant. Processing is similar to the monostatic NLCS, except that no LRCMC step is required, since there is no LRCM in this case of broadside imaging. A quartic perturbation function of azimuth time is used to equalize the FM rate for each target lying in the same range gate.

3.7.2 Limitations of the NLCS Algorithm

The existing NLCS assumes negligible residual RCM, the most dominant component of which is the QRCM. For X-band and shorter-wavelength systems, the typical QRCM is less than one range cell and the point target broadening is less than 10% [48]. However, at C-band and longer wavelengths, the QRCM spans several range cells and becomes significant, leading to degradation of the point target impulse response. To process higher-resolution and longer-wavelength cases, the NLCS must be able to handle the residual RCM, especially the QRCM.

In the NLCS, only the FM chirp rate is equalized. For squinted geometries, the third-order phase term becomes significant. This phase term does not broaden the resolution, but it produces asymmetrical sidelobes [48], which may lead to the detection of false point targets.

A complicated form of range Doppler coupling occurs at higher squints or wider apertures. This cross-coupling can be illustrated with the raw signal of a point target, as shown in Figure 3.6. RCM causes the range chirp to be shifted along each azimuth time. Inspecting the phase change along any particular range line, there is an azimuth phase modulation.
This azimuth phase modulation is a result of the RCM, and it is on top of the azimuth modulation from the demodulation process. Thus, the range Doppler coupling affects the azimuth FM rate. This effect is more pronounced when there is more RCM.

Figure 3.6: Phase coupling between range and azimuth, illustrated by the range and azimuth phase histories of a point target. Illustration adapted from [17].

This range Doppler coupling can be handled with a processing method called Secondary Range Compression (SRC) [19,55]. For the NLCS, SRC can be ignored, as LRCMC eliminates most of the effects of range Doppler coupling. This assumption is valid for shorter-wavelength systems, where the residual RCM is only a few range resolution cells. However, in higher-resolution and longer-wavelength (L-band) cases, the residual RCM can stretch over several range resolution cells and the range/Doppler coupling becomes significant. To focus longer-wavelength and high-resolution cases, this extra range Doppler coupling term must be accounted for.

The NLCS performs the matched filtering by creating the azimuth matched filter as a function of azimuth time and performing the azimuth compression using fast correlation techniques. The efficiency of the algorithm can be improved by using a frequency-domain matched filter instead, which eliminates the extra FT needed for the time-domain matched filter. This frequency-domain matched filter can be derived from an analytical bistatic point target spectrum. In the next chapter, an accurate point target spectrum is derived. This new analytical result can be used to find the frequency-domain matched filter for the NLCS algorithm.

Chapter 4

A New Point Target Spectrum

4.1 Introduction

Chapter 4 to Chapter 7 present new material and are the main contributions of this thesis. In this chapter, a new two-dimensional point target spectrum is derived for the general bistatic case, based on the reversion of a power series.
In addition, the relationship between this power series method, the LBF and the DMO method is shown. The spectral results here will be useful for developing efficient bistatic algorithms operating in the two-dimensional frequency domain or the range Doppler domain. The discussion begins with a derivation of the new point target spectrum, and a simulation is performed to verify its accuracy. Following that, the relationship between this spectrum and the LBF is shown algebraically. Simulations are performed to compare the focusing accuracies of each spectrum. Finally, a geometrical proof is used to show how the bistatic deformation term in the LBF is related to the DMO smile operator.

4.2 Derivation of the Signal Spectrum

The derivation of the signal spectrum begins with the range compressed signal $s_{rc}(\tau,\eta)$ in (3.6). The first step is to remove the linear phase and the LRCM. The reason for this step will become apparent when the series reversion is applied at a later step. After these terms are removed, the point target signal in the time domain is

  $s_1(\tau, \eta) = p_r\!\left(\tau - \dfrac{R_1(\eta)}{c}\right) w_{az}(\eta)\, \exp\!\left\{-j\,2\pi\,\dfrac{f_0\, R_1(\eta)}{c}\right\}$   (4.1)

where

  $R_1(\eta) = R_{cen} + k_2\,\eta^2 + k_3\,\eta^3 + k_4\,\eta^4 + \ldots$   (4.2)

is the range after removing the linear term, $R_{cen}$ is the sum of $R_{Tcen}$ and $R_{Rcen}$, and the coefficients

  $k_2 = \dfrac{1}{2!}\left(\dfrac{d^2 R_T(\eta)}{d\eta^2} + \dfrac{d^2 R_R(\eta)}{d\eta^2}\right)\bigg|_{\eta=0}$   (4.3)

  $k_3 = \dfrac{1}{3!}\left(\dfrac{d^3 R_T(\eta)}{d\eta^3} + \dfrac{d^3 R_R(\eta)}{d\eta^3}\right)\bigg|_{\eta=0}$   (4.4)

  $k_4 = \dfrac{1}{4!}\left(\dfrac{d^4 R_T(\eta)}{d\eta^4} + \dfrac{d^4 R_R(\eta)}{d\eta^4}\right)\bigg|_{\eta=0}$   (4.5)

are evaluated at the aperture center. The derivatives of the transmitter range are given by

  $\dfrac{d^2 R_T(\eta)}{d\eta^2}\bigg|_{\eta=0} = \dfrac{V_T^2 \cos^2\theta_{sqT}}{R_{Tcen}}$   (4.6)

  $\dfrac{d^3 R_T(\eta)}{d\eta^3}\bigg|_{\eta=0} = \dfrac{3\, V_T^3 \cos^2\theta_{sqT}\,\sin\theta_{sqT}}{R_{Tcen}^2}$   (4.7)

  $\dfrac{d^4 R_T(\eta)}{d\eta^4}\bigg|_{\eta=0} = \dfrac{3\, V_T^4 \cos^2\theta_{sqT}\left(4\sin^2\theta_{sqT} - \cos^2\theta_{sqT}\right)}{R_{Tcen}^3}$   (4.8)

Similar equations can be written for the derivatives of the receiver range, $R_R(\eta)$.
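The coefficients in (4.3)–(4.8) are straightforward to compute for a given geometry. The sketch below assumes the straight-line, constant-velocity geometry underlying these derivatives, with a hyperbolic slant range per platform; the helper names and the numerical values are illustrative:

```python
import math

def range_derivs(V, theta_sq, R_cen):
    """Closed-form slant-range derivatives at the aperture center for
    one platform: first derivative as in (4.16), then (4.6)-(4.8)."""
    s, c_ = math.sin(theta_sq), math.cos(theta_sq)
    d1 = -V * s
    d2 = V**2 * c_**2 / R_cen
    d3 = 3.0 * V**3 * c_**2 * s / R_cen**2
    d4 = 3.0 * V**4 * c_**2 * (4.0 * s**2 - c_**2) / R_cen**3
    return d1, d2, d3, d4

def bistatic_k(tx, rx):
    """Series coefficients k1..k4 of (4.2)/(4.15): sums of the
    transmitter and receiver derivatives with 1/n! factors."""
    dT, dR = range_derivs(*tx), range_derivs(*rx)
    return tuple((a + b) / math.factorial(n + 1)
                 for n, (a, b) in enumerate(zip(dT, dR)))

# Illustrative geometry: (velocity m/s, squint rad, center range m)
k1, k2, k3, k4 = bistatic_k((180.0, 0.0873, 16532.0), (220.0, 1.0507, 10444.0))
```

A useful check is that the closed forms agree with finite differences of the hyperbolic range history $R(\eta) = \sqrt{R_{cen}^2 - 2R_{cen}V\eta\sin\theta_{sq} + V^2\eta^2}$, the model from which (4.6)–(4.8) follow.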
Applying a range FT to (4.1), the spectrum is given by

  $S_1'(f_\tau, \eta) = W_\tau(f_\tau)\, w_{az}(\eta)\, \exp\!\left\{-j\,2\pi\,\dfrac{(f_0+f_\tau)}{c}\, R_1(\eta)\right\}$   (4.9)

where $W_\tau(\cdot)$ represents the spectral shape (i.e., bandwidth) of the transmitted pulse, $f_0$ corresponds to the center frequency and $f_\tau$ is the range frequency. Next, an azimuth FT is applied to get the signal in the two-dimensional frequency domain. Using the method of stationary phase [40], azimuth frequency is related to azimuth time by

  $-\dfrac{c\, f_\eta}{f_0 + f_\tau} = 2k_2\,\eta + 3k_3\,\eta^2 + 4k_4\,\eta^3 + \ldots$   (4.10)

where $f_\eta$ is the azimuth frequency. An expression for $\eta$ in terms of $f_\eta$ can be derived using series reversion (refer to Appendix C.1). Replacing $x$ by $\eta$, and replacing $y$ by $(-c/(f_0+f_\tau))\,f_\eta$ in the forward function (C.1), and substituting the coefficients of $x$ by the coefficients of $\eta$, a power series is obtained. Inverting this power series, the desired relation is obtained,

  $\eta(f_\eta) = A_1\left(\dfrac{-c\,f_\eta}{f_0+f_\tau}\right) + A_2\left(\dfrac{-c\,f_\eta}{f_0+f_\tau}\right)^2 + A_3\left(\dfrac{-c\,f_\eta}{f_0+f_\tau}\right)^3 + \ldots$   (4.11)

and the coefficients are given by

  $A_1 = \dfrac{1}{2k_2}, \qquad A_2 = -\dfrac{3k_3}{8k_2^3}, \qquad A_3 = \dfrac{9k_3^2 - 4k_2 k_4}{16\,k_2^5}$   (4.12)

The rationale for the removal of the linear phase term and the LRCM becomes clear at this step. In order to apply the series reversion directly in (4.10), there must be no constant term, since the constant term is absent in the forward function (C.1). Both the linear phase term and the LRCM term are removed so that there is no constant term left after applying the azimuth FT to (4.9). An alternate approach is to move the constant term to the Left Hand Side (LHS) of (4.10) and treat the whole LHS as $y$; the same result (4.21) is still obtained. Using (4.11) with (4.9), the two-dimensional spectrum of $s_1(\tau,\eta)$ can be obtained,

  $S_1(f_\tau, f_\eta) = W_\tau(f_\tau)\, W_{az}(f_\eta)\, \exp\!\left\{-j\,2\pi\left[\dfrac{(f_0+f_\tau)}{c}\, R_1\big(\eta(f_\eta)\big) + f_\eta\, \eta(f_\eta)\right]\right\}$   (4.13)

where $W_{az}(\cdot)$ represents the shape of the Doppler spectrum and is approximately a scaled version of the azimuth time envelope, $w_{az}(\cdot)$.
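The reversion coefficients (4.12) can be checked numerically: composing the forward series (4.10) with the reverted series (4.11) should reproduce the input up to the truncation order. A sketch using coefficient values of the same order as the simulation case (the values themselves are illustrative here):

```python
def reversion_coeffs(k2, k3, k4):
    """Series reversion coefficients (4.12) for the forward series
    y = 2*k2*x + 3*k3*x^2 + 4*k4*x^3."""
    A1 = 1.0 / (2.0 * k2)
    A2 = -3.0 * k3 / (8.0 * k2**3)
    A3 = (9.0 * k3**2 - 4.0 * k2 * k4) / (16.0 * k2**5)
    return A1, A2, A3

# Round trip: forward series followed by the reverted series
k2, k3, k4 = 1.31, 0.0146, 0.000184
A1, A2, A3 = reversion_coeffs(k2, k3, k4)
x = 0.01
y = 2.0 * k2 * x + 3.0 * k3 * x**2 + 4.0 * k4 * x**3
x_back = A1 * y + A2 * y**2 + A3 * y**3
```

The residual of the round trip is of the order of the first neglected reversion term, which is tiny for small arguments — the "scalable accuracy" property noted above.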
To get the two-dimensional point target spectrum for $s(\tau,\eta)$, the LRCM and the linear phase are reintroduced into $s_1(\tau,\eta)$ in (4.1),

  $s(\tau, \eta) = s_1\!\left(\tau - \dfrac{k_1 \eta}{c},\, \eta\right) \exp\!\left\{-j\,2\pi\,\dfrac{f_0\, k_1\, \eta}{c}\right\}$   (4.14)

where

  $k_1 = \left(\dfrac{dR_T(\eta)}{d\eta} + \dfrac{dR_R(\eta)}{d\eta}\right)\bigg|_{\eta=0}$   (4.15)

The derivatives in (4.15) at the aperture center are given by

  $\dfrac{dR_T(\eta)}{d\eta}\bigg|_{\eta=0} = -V_T \sin\theta_{sqT}$   (4.16)

  $\dfrac{dR_R(\eta)}{d\eta}\bigg|_{\eta=0} = -V_R \sin\theta_{sqR}$   (4.17)

To derive the two-dimensional point target spectrum for $s(\tau,\eta)$, the FT skew and shift properties are applied [17]:

  $g(\tau, \eta) \longleftrightarrow G(f_\tau, f_\eta)$   (4.18)

  $g(\tau, \eta)\, \exp\{-j\,2\pi f_c\, \eta\} \longleftrightarrow G(f_\tau, f_\eta + f_c)$   (4.19)

  $g(\tau - \kappa\eta,\, \eta) \longleftrightarrow G(f_\tau, f_\eta + \kappa f_\tau)$   (4.20)

where $g$ is a two-dimensional time function, $G$ is its corresponding frequency function, and $\kappa$ and $f_c$ are constants. Applying these FT pairs to (4.13) and (4.14), with $\kappa = k_1/c$ and $f_c = f_0 k_1/c$, the desired two-dimensional point target spectrum is obtained,

  $S_{2df}(f_\tau, f_\eta) = S_1\!\left(f_\tau,\; f_\eta + (f_0+f_\tau)\,\dfrac{k_1}{c}\right)$   (4.21)

This new spectrum formulation is known as the method of series reversion (MSR). The accuracy of the spectrum is limited by the number of terms used in the expansion of (4.21). In general, the uncompensated phase error should be limited to within ±π/4, in order to avoid significant deterioration of the image quality.
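The skew property (4.20) has an exact discrete analogue that can be verified with FFTs: circularly delaying each azimuth row by $\kappa\eta$ range samples permutes the spectrum according to $f_\eta \to f_\eta + \kappa f_\tau$. A small sketch (array size and the integer skew are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal((8, 8))      # axes: (eta, tau)
kappa = 1                            # integer skew: tau delay grows by kappa per eta sample

# Skew per (4.20): each azimuth row is delayed in range by kappa*eta samples
g_skew = np.stack([np.roll(row, kappa * m) for m, row in enumerate(g)])

G = np.fft.fft2(g)                   # frequency axes: (f_eta, f_tau)
G_skew = np.fft.fft2(g_skew)

# Discrete analogue of (4.20):
# G_skew[k_eta, k_tau] equals G[(k_eta + kappa*k_tau) mod N, k_tau]
```

This is exactly the azimuth-frequency shift, proportional to range frequency, that (4.21) applies to $S_1$.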
The axes are defined in a right hand Cartesian coordinate system with the flight direction of the transmitter parallel to the y direction and z being the altitude of the aircraft. The oversampling ratio is 1.33 in range and azimuth. Rectangular weighting is used for both azimuth and range processing. If expansion up to fourth-order azimuth frequency term is kept in (4.21), the two-dimensional point target spectrum can be written as  exp {j (f>2d{(fr, fr,)}  (4.22)  where the phase is given by 02df(/r, fr,) « — 2TT(  fo + fr C  )R, -cen  (4.23)  64  Table 4.1: Simulation parameters for verification of point target spectrum. Simulation parameters  Transmitter  Receiver  Velocity in x-direction  0 m/sec  20 m/sec  Velocity in y-direction  180 m/sec  220 m/sec  Velocity in z-direction  0 m/sec  0 m/sec  Center frequency  5.00 GHz  Range bandwidth  50 MHz  Doppler bandwidth  150 Hz  Altitude  3000 m  1000 m  Range to point target at r, = 0  16532 m  10444 m  Squint angle at r, = 0  30°  60.2°  Distance between airplanes at 77 = 0  8351 m  Minimum distance between airplanes  8445 m  Maximum distance between airplanes  8261 m  The magnitudes of the cubic and quartic terms in (4.23) are  A0  2TT  3  8fc / 3  (4.24)  2  2  c {9kl-4k k ) 3  A0 « 2TT 4  2  4  (BA*  (4.25)  64^/3  where B is the Doppler bandwidth. For this simulation case, B = 150 Hz, k = 1.31 a  a  2  m/s, k = 0.0146 m/s and A; = 0.000184 m/s . The phase component A 0 is more 2  3  3  4  3  than TT/4 and A ^ is much less than 7r/4. Therefore, it is sufficient to retain only terms 4  up to the cubic term in the phase expansion (4.23) for accurate focusing in this radar case. Matched filtering is performed by multiplying the two-dimensional spectrum of the point target by e x p ( - j 0  2  df(/r,  /,,)).  The point target spectrum after matched filtering has a two-dimensional envelope 65  given by W and r  in (4.22), as shown in Figure 4.1(a). 
Note that the spectrum has  a skew as a result of the range/azimuth coupling. This results in skewed sidelobes as shown in Figure 4.1(b). However, in order to measure image quality parameters such as the 3dB impulse response width (IRW) and the peak sidelobe ratio (PSLR), it is convenient to remove the skew by shearing the image along the range time axis by the amount 5 =  'V sm{6 ) + V sm(6 y T  T  R  R  r  (4.26)  The deskewed sidelobes are seen in Figure 4.1(d). The deskewing operation is equivalent to deskewing the spectrum, as seen in Figure 4.1(c). (a) Spectrum after matched filtering  (b) Point target after matched filtering  a. E E E  -50  -20 0 20 Range frequency/MHz  Range time samples  (c) Spectrum after shear operation  (d) Point target after shear operation  -20 0 20 Range frequency/MHz  Range time samples  Figure 4.1: Point target spectrum and image before and after the shear operation. The quality of the focus can be examined using the one-dimensional expansions 66  shown in Figure 4.2. The excellent focus is demonstrated by the IRW, which meets the theoretical limits in range (1.184/1.33 = 0.89) and in azimuth (1.188/1.33 = 0.89) for rectangular weighting. Furthermore, the sidelobes agree with the theoretical values of -lOdB and -13.3dB for the ISLR and PSLR respectively. In addition, the symmetry of the sidelobes is another indication of correct matched filter phase. Range compressed target  252  254  256  258  Range (samples) -»  260  Azimuth compressed target  262  1020  1022  1024  1026  1028  Azimuth (samples) ->  1030  Figure 4.2: Measurement of point target focus using a matched filter derived from the new, two-dimensional point target spectrum.  4.4  The Link Between the Bistatic Spectra  In this section, the relationship between three independently-derived bistatic point target spectra are established. 
The first spectrum is Loffeld's Bistatic Formula (LBF), which consists of a quasi-monostatic phase term and a bistatic phase term (see Section 3.5.1). The second spectrum makes use of Rocca's smile operator, which transforms bistatic data in a defined configuration into a monostatic equivalent (see Section 3.6). The third spectrum is the new analytical spectrum derived in Section 4.2 using the method of series reversion (MSR). The MSR spectrum is the most general of the three. This section shows that it can be reduced to the same formulation as the former two when certain conditions are met. In addition, a new approximate spectrum is derived using a Taylor series expansion about the two stationary phase points of the transmitter and the receiver. We also give an alternative geometrical proof of the relationship between Rocca's smile operator and Loffeld's bistatic deformation term.

4.4.1 Analytical Development

From (4.21), the two-dimensional spectrum can be written as

  $S_{2df}(f_\tau, f_\eta) = W_\tau(f_\tau)\, W_{az}\!\left(f_\eta + (f_0+f_\tau)\dfrac{k_1}{c}\right) \exp\!\left\{-j\,2\pi\left[\dfrac{(f_0+f_\tau)}{c}\left(R_{cen} + k_1\eta_b + k_2\eta_b^2 + k_3\eta_b^3 + k_4\eta_b^4 + \ldots\right) + f_\eta\,\eta_b\right]\right\}$   (4.27)

where $\eta_b$ is the solution to the stationary point and is given by

  $\eta_b = A_1\left(\dfrac{-c\,f_\eta}{f_0+f_\tau} - k_1\right) + A_2\left(\dfrac{-c\,f_\eta}{f_0+f_\tau} - k_1\right)^2 + A_3\left(\dfrac{-c\,f_\eta}{f_0+f_\tau} - k_1\right)^3 + \ldots$   (4.28)

At this juncture, it is important to observe that the accuracy of the solution to the stationary point, $\eta_b$, is limited only by the number of terms kept in the expansion. This is unlike the approximate solution $\tilde\eta_b$ in (3.24), whose accuracy is restricted.
Using (4.27) and the definitions in (3.20) and (3.21), the two-dimensional spectrum can be rewritten as

  $S_{2df}(f_\tau, f_\eta) = W_\tau(f_\tau)\, W_{az}\!\left(f_\eta + (f_0+f_\tau)\dfrac{k_1}{c}\right) \exp\!\left\{-j\left[\phi_T(\eta_b) + \phi_R(\eta_b)\right]\right\}$   (4.29)

Performing a Taylor series expansion of the phase term $\phi_T(\eta_b)$ about $\eta_T$ and of the phase term $\phi_R(\eta_b)$ about $\eta_R$, the phase in the MSR in (4.29) becomes

  $\phi_T(\eta_b) + \phi_R(\eta_b) = \phi_T(\eta_T + \Delta\eta_T) + \phi_R(\eta_R + \Delta\eta_R) = \phi_T(\eta_T) + \phi_R(\eta_R) + \dfrac{1}{2}\left[\Delta\eta_T^2\,\ddot\phi_T(\eta_T) + \Delta\eta_R^2\,\ddot\phi_R(\eta_R)\right] + \ldots$   (4.30)

where

  $\Delta\eta_T = \eta_b - \eta_T$   (4.31)

  $\Delta\eta_R = \eta_b - \eta_R$   (4.32)

The terms on the right-hand sides of (4.31) and (4.32) are azimuth times measured from the respective stationary phase points. Note that both $\dot\phi_T(\eta_T)$ and $\dot\phi_R(\eta_R)$ are zero; as a result, the first-order terms do not appear in (4.30). The phase on the left-hand side of (4.30) represents the MSR in (4.29). The expansion on the right-hand side of (4.30) is the formulation leading to the link with the LBF. This formulation is new, and we refer to it as the Two Stationary Phase Points (TSPP) method. The TSPP formulation of the bistatic spectrum has a pair of quasi-monostatic phase terms, the same as the quasi-monostatic phase terms in the LBF (3.26). If we approximate $\eta_b$ by $\tilde\eta_b$ and consider only the quadratic terms in (4.30), the sum of the quadratic phase terms becomes

  $\Delta\eta_T^2\,\ddot\phi_T(\eta_T) + \Delta\eta_R^2\,\ddot\phi_R(\eta_R) \approx (\tilde\eta_b - \eta_T)^2\,\ddot\phi_T(\eta_T) + (\tilde\eta_b - \eta_R)^2\,\ddot\phi_R(\eta_R)$   (4.33)

Using the results given in Appendix B.1, the sum of the quadratic phase terms in (4.30) is equivalent to the bistatic deformation term in (3.27) when the condition $\eta_b \approx \tilde\eta_b$ holds,

  $(\tilde\eta_b - \eta_T)^2\,\ddot\phi_T(\eta_T) + (\tilde\eta_b - \eta_R)^2\,\ddot\phi_R(\eta_R) = \dfrac{\ddot\phi_T(\eta_T)\,\ddot\phi_R(\eta_R)}{\ddot\phi_T(\eta_T) + \ddot\phi_R(\eta_R)}\,(\eta_T - \eta_R)^2$   (4.34)

The expression on the right-hand side of (4.34) is proportional to $\Psi_2$ given in (3.27); with the factor of 1/2 from (4.30), it is exactly $\Psi_2$. Thus, the LBF is shown to be a special case of the point target spectrum formulation given in (4.29).
4.4.2 Accuracy and Limitations

Like any Taylor series, for large magnitudes of $\Delta\eta_T$ and $\Delta\eta_R$ in (4.30), more terms are required in the expansion to ensure convergence. Therefore, the bistatic point target spectra in (3.25) and (4.30) are only accurate when $\eta_b$ is close to the individual monostatic stationary points, $\eta_T$ and $\eta_R$. The use of more terms makes the point target matched filter inefficient, as each additional term involves the computation of a pair of two-dimensional frequency terms. In such a case, it is generally more efficient to make use of the MSR bistatic spectrum in (4.29) to focus the target, as it needs fewer expansion terms to meet the required accuracy.

4.4.3 Bistatic Configurations

In the LBF method, truncation of the azimuth phase before applying the method of stationary phase causes phase degradation in wider-aperture and longer-wavelength cases, when higher-order phase terms are significant and therefore cannot be disregarded. This limitation has been discussed in [29]. Another necessary condition for the LBF to be valid is $\eta_b \approx \tilde\eta_b$ (see Section 4.4.1). This condition determines the type of bistatic configurations that the LBF is able to focus. Due to the complexity of (4.28) and (3.24) and the wide range of configurations available for bistatic platforms, it is difficult to evaluate this condition analytically. However, we can simplify it further by considering $\eta_b(f_{\eta c}) \approx \tilde\eta_b(f_{\eta c})$, where $f_{\eta c}$ is the mean azimuth frequency (the Doppler centroid). The Doppler centroid is given by

  $f_{\eta c} = -(f_0 + f_\tau)\,\dfrac{k_1}{c}$   (4.35)

Substituting $f_{\eta c}$ for $f_\eta$ in (4.28) causes all the terms in the brackets to become zero, and the mean value of the bistatic stationary point, $\eta_b(f_{\eta c})$, also becomes zero.
Thus, from (3.24), and assuming that $\tilde\eta_b(f_{\eta c}) \approx \eta_b(f_{\eta c})$,

  $\tilde\eta_b(f_{\eta c}) = \left.\dfrac{\ddot\phi_T(\eta_T)\,\eta_T + \ddot\phi_R(\eta_R)\,\eta_R}{\ddot\phi_T(\eta_T) + \ddot\phi_R(\eta_R)}\right|_{f_\eta = f_{\eta c}} \approx 0$   (4.36)

Similarly, the stationary point $\eta_T$ can be derived from (4.28) by setting the receiver-based derivatives equal to the transmitter derivatives in (4.3) to (4.5), giving

  $\eta_T = B_1\left(\dfrac{-c f_\eta}{f_0+f_\tau} - 2k_{T1}\right) + B_2\left(\dfrac{-c f_\eta}{f_0+f_\tau} - 2k_{T1}\right)^2 + B_3\left(\dfrac{-c f_\eta}{f_0+f_\tau} - 2k_{T1}\right)^3 + \ldots$   (4.37)

where the coefficients are given by

  $B_1 = \dfrac{1}{4k_{T2}}, \qquad B_2 = -\dfrac{3k_{T3}}{32\,k_{T2}^3}, \qquad B_3 = \dfrac{9k_{T3}^2 - 4k_{T2}k_{T4}}{128\,k_{T2}^5}$   (4.38)

A similar expression can be written for $\eta_R$. The $k_T$ terms are the transmitter parts of the coefficients given in (4.3) to (4.5). Substituting this pair of stationary points into (4.36) and considering only the first two terms in the power series, it can be shown that the condition $\tilde\eta_b(f_{\eta c}) \approx \eta_b(f_{\eta c})$ simplifies to

  $\left(\dfrac{k_{T3}}{k_{T2}^2} + \dfrac{k_{R3}}{k_{R2}^2}\right)\left(k_{R1} - k_{T1}\right)^2 \approx 0$   (4.39)

Using condition (4.39), the bistatic configurations for which the LBF works well can be determined. The condition is satisfied when the value inside either bracket is approximately zero. Consider first the case where the value of the second bracket is zero. A trivial case that satisfies this condition is the monostatic configuration, where $k_{R1} = k_{T1}$. Bistatic cases that have a short baseline relative to the slant ranges, with the transmitter and receiver squint angles pointing in roughly the same direction, also fall into this category, since $k_{R1} \approx k_{T1}$. The condition is also satisfied when $k_{T1} \approx 0$ and $k_{R1} \approx 0$, i.e., when both antennas are pointing roughly at broadside. The value in the first bracket is approximately zero when the platforms are flying with the same velocity along the same flight path with a fixed baseline, and with $\theta_{sqT} \approx -\theta_{sqR}$. In such a case, from (4.3), (4.4), (4.6) and (4.7), the condition is satisfied as $k_{R3} \approx -k_{T3}$ and $k_{R2} \approx k_{T2}$.
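Condition (4.39) can also be probed numerically: evaluate the two monostatic stationary points at the Doppler centroid using the first two terms of (4.37), then form the curvature-weighted average (3.24). For this sketch, each second phase derivative is approximated by its dominant part, proportional to the $2k_2$ of that platform — an approximation made here purely for illustration, as are all the numbers:

```python
def eta_stationary(y, k2, k3):
    """First two terms of (4.37): eta ~ B1*y + B2*y^2."""
    return y / (4.0 * k2) - 3.0 * k3 * y**2 / (32.0 * k2**3)

def eta_b_tilde(kT, kR):
    """Curvature-weighted average (3.24) of the two monostatic
    stationary points at the Doppler centroid, where the bracket of
    (4.28) vanishes and y_T = kR1 - kT1 = -y_R; weights ~ k2."""
    kT1, kT2, kT3 = kT
    kR1, kR2, kR3 = kR
    yT = kR1 - kT1
    etaT = eta_stationary(yT, kT2, kT3)
    etaR = eta_stationary(-yT, kR2, kR3)
    return (kT2 * etaT + kR2 * etaR) / (kT2 + kR2)

# Squinted, asymmetric geometry: the LBF condition fails (nonzero offset)
offset = eta_b_tilde((-15.7, 0.97, 0.001), (-190.0, 1.0, 0.002))
```

The offset vanishes for broadside pointing ($k_{T1} = k_{R1} = 0$) and for anti-symmetric squints ($k_{R3} = -k_{T3}$, $k_{R2} = k_{T2}$), matching the two brackets of (4.39), while a general squinted asymmetric case leaves a residual offset.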
4.5.1  Simulation Parameters

In each case, a single point target is simulated using the airborne SAR parameters given in Table 4.2. The three cases differ in the squint angles simulated.

4.5.2  Simulation Results

Figure 4.3 to Figure 4.11 show the point target responses of the simulations. Figure 4.12 plots the magnitudes of the solutions to the stationary points against azimuth frequency. These values (η_b, η̃_b, η_T and η_R) are evaluated at the range center frequency f₀. Rectangular weighting is used for both azimuth and range processing to simplify the interpretation of results. The ideal resolution is 1.06 cells in both range and azimuth. The ideal PSLR (Peak Sidelobe Ratio) is -13.3 dB and the ideal ISLR (Integrated Sidelobe Ratio) is -10.0 dB. Cases of low, moderate and high squint are discussed in the next three sub-sections.

Case I: Low Squint (5°)

Figure 4.3 shows the point target focused using the LBF. Figure 4.4 shows the same point target focused using the TSPP spectrum given in (4.30), expanded up to the quadratic term.

The first simulation is a bistatic formation with both antennas pointing near broadside. The linear phase terms k_T1 and k_R1 are small in such a case, therefore the condition in (4.39) holds and η̃_b ≈ η_b. Thus, the focusing results in Figure 4.3 and Figure 4.4 do not differ significantly. Figure 4.5 shows the point target focused using the TSPP spectrum, expanded up to the cubic term, showing a distinct improvement over Figure 4.4. From Figure 4.12(a), we observe that the difference in the nominal values (evaluated at f_η = f_ηc) of η_b and η̃_b is small, at about 0.009 s. The nominal value of Δη_T is 1.62 s and the nominal value of Δη_R is -0.83 s.

Case II: Moderate Squint (10°)

In the second simulation, both antennas are squinted to a common point. The difference between the nominal values of η_b and η̃_b is now larger, at about 0.07 s.
The differences between the monostatic stationary point solutions and the bistatic solution are further apart as well; the nominal value of Δη_T is 3.19 s and that of Δη_R is -1.76 s. Thus, the condition (4.39) and η̃_b ≈ η_b no longer hold. The approximate solution η̃_b is not accurate, and therefore the point target focused using the LBF [see Figure 4.6] has a poor response. Figure 4.7 shows the same point target focused using the TSPP spectrum given in (4.30), expanded up to the quadratic term. While there is less phase degradation in Figure 4.7 compared with Figure 4.6, an improved result can be obtained by including the cubic phase term in the expansion, as shown in Figure 4.8.

Case III: High Squint (20°)

Finally, for cases with a more extreme bistatic configuration, there is a large difference between the location of the bistatic stationary phase point η_b and the monostatic points η_T and η_R. The difference between the nominal values of η_b and η̃_b is 0.5 s, and the nominal values of Δη_T and Δη_R are 6.06 s and -3.18 s respectively. These large differences cause a slow convergence of the Taylor expansion in (4.30). Thus, more higher-order terms would be needed in the TSPP approach in order to focus the point target, which makes such an approach inefficient. Figure 4.9 shows that the LBF is unable to focus the point target properly. Figure 4.10 shows that even with a TSPP expansion up to the sixth-order term, the target is still poorly focused. However, the point target can be focused by expanding up to the quartic term in (4.28) and using the MSR spectrum in (4.27) directly, as shown in Figure 4.11.

Table 4.2: Simulation parameters for experiments to compare LBF and TSPP.
Simulation parameters (Transmitter / Receiver):

  Velocity in x-direction: 0 m/sec / 0 m/sec
  Velocity in y-direction: 98 m/sec / 98 m/sec
  Velocity in z-direction: 0 m/sec / 0 m/sec
  Center frequency: 10.17 GHz
  Range bandwidth: 50 MHz
  Doppler bandwidth: 660 Hz
  Altitude: 1000 m / 1000 m
  Distance between airplanes at η = 0: 2000 m

Case I
  Range to point target at η = 0: 3751 m / 1915 m
  Squint angle at η = 0: 5° / 9.83°
  Doppler centroid f_ηc: 857 Hz

Case II
  Range to point target at η = 0: 3794 m / 1999 m
  Squint angle at η = 0: 10° / 19.25°
  Doppler centroid f_ηc: 1627 Hz

Case III
  Range to point target at η = 0: 3976 m / 2326 m
  Squint angle at η = 0: 20° / 35.78°
  Doppler centroid f_ηc: 3079 Hz

Figure 4.3: Point target response focused using LBF.

Figure 4.4: Point target response focused using TSPP, expanded up to quadratic term.

Figure 4.5: Point target response focused using TSPP, expanded up to cubic term.

Figure 4.6: Point target response focused using the LBF.

Figure 4.7: Point target response focused using TSPP, expanded up to quadratic term.
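The Doppler centroid values listed in Table 4.2 can be cross-checked from the geometry. For a bistatic pair the centroid is the sum of the two monostatic contributions; the sketch below assumes the standard relation f_ηc = (V/λ)(sin θ_sqT + sin θ_sqR), which is a reconstruction of the convention used here, and reproduces the Case I entry:

```python
import math

c = 2.998e8
f0 = 10.17e9                  # center frequency from Table 4.2
V = 98.0                      # common platform velocity, m/s
wavelength = c / f0

theta_T = math.radians(5.0)   # Case I transmitter squint at eta = 0
theta_R = math.radians(9.83)  # Case I receiver squint at eta = 0

# Bistatic Doppler centroid: sum of transmitter and receiver contributions
f_eta_c = (V / wavelength) * (math.sin(theta_T) + math.sin(theta_R))
print(round(f_eta_c))         # prints 857, matching the Case I entry
```

The same calculation for Case III gives about 3081 Hz, close to the tabulated 3079 Hz.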
Figure 4.8: Point target response focused using TSPP, expanded up to cubic term.

Figure 4.9: Point target response focused using the LBF.

Figure 4.10: Point target response focused using TSPP, expanded up to the sixth-order term.

Figure 4.11: Point target response focused using MSR directly.

4.5.3  Discussion

The TSPP method in (4.30) is introduced to show the relation between the methods of deriving the spectra. However, the results of this section show why the TSPP is not recommended for use in the general bistatic case. Instead, we recommend that the MSR be used directly.

In Section 4.4.3, it was assumed that η̃_b(f_ηc) ≈ 0. This assumption is consistent with the results shown in Figure 4.12.

Figure 4.12: Comparison of the solutions to stationary phase. Note that for all three cases, when f_η = f_ηc, η_b(f_ηc) ≈ 0.
4.6  Bistatic Deformation Term

The existence of the quasi-monostatic and bistatic phase terms in (3.25) and (4.30) suggests a two-step focusing approach: the removal of the bistatic deformation followed by the application of a quasi-monostatic focusing step [29]. Such a method is similar to the DMO algorithm put forward by D'Aria et al. in [32]. In this section, a geometrical proof is given to show how the bistatic deformation term is linked to Rocca's smile operator for the "constant offset case" [32].

4.6.1  Alternate Derivation of Rocca's Smile Operator

A geometrical method [32] borrowed from seismic reflection surveying [38] is used to transform a bistatic configuration into a monostatic one. The bistatic platforms are restricted to traveling on the same path with constant and equal velocities. This is also known as the constant offset case [32] or the tandem configuration [87]. An illustration of the tandem configuration is shown in Figure 4.13. For this case, Rocca's smile operator transforms the bistatic data to a monostatic equivalent, located at the mid-point between the two bistatic platforms. This transformation requires a range shift and a phase compensation; the shift corresponds to the time difference between the two geometries. The time difference is denoted by t_DMO, given by

t_DMO(θ_sq) = t_b(θ_sq) − t_m(θ_sq)    (4.40)

where t_b is the round-trip travel time from the transmitter to the point target and back to the receiver, and t_m is the round-trip travel time between the equivalent monostatic antenna and the point target. The bistatic range to an arbitrary point is always greater than the two-way monostatic range to the same point, as shown in Figure 4.14.

Figure 4.13: Bistatic geometry for the constant offset case. The transmitter and receiver lie a distance h on either side of the monostatic equivalent, with squint angles θ_sqT and θ_sqR; t_m = 2R_M/c to the reference point.
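The time difference in (4.40) is easily evaluated for the tandem geometry of Figure 4.13. The sketch below (with made-up geometry values) compares the exact t_DMO with a first-order, small-baseline approximation of the shift, t_DMO ≈ 2h²cos²θ_sq/(c²t_b); both the approximation and the numbers are illustrative:

```python
import math

c = 2.998e8
R_m = 20000.0                # slant range from the monostatic midpoint, m
theta = math.radians(20.0)   # squint of the equivalent monostatic platform
h = 100.0                    # half of the constant baseline 2h, m

# Target position relative to the midpoint; Tx and Rx sit +/- h along track
x = R_m * math.sin(theta)
r = R_m * math.cos(theta)
R_T = math.hypot(r, x - h)   # transmitter slant range
R_R = math.hypot(r, x + h)   # receiver slant range

t_b = (R_T + R_R) / c        # bistatic round-trip time
t_m = 2.0 * R_m / c          # equivalent monostatic round-trip time
t_dmo_exact = t_b - t_m

# Small-baseline approximation of the DMO time shift (illustrative form)
t_dmo_approx = 2.0 * h**2 * math.cos(theta)**2 / (c**2 * t_b)
print(t_dmo_exact, t_dmo_approx)
```

For this baseline-to-range ratio the two values agree to within a few percent, and the agreement improves rapidly as h/R_m shrinks.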
In Section 3.6.1, it is shown that the travel times are related by

t_b²(θ_sq) = t_m²(θ_sq) + (4h²/c²) cos²(θ_sq)    (4.41)

t_DMO(θ_sq) ≈ 2h² cos²(θ_sq) / (c² t_b)    (4.42)

The bistatic platforms are at a constant offset of 2h from each other, and θ_sq is the squint angle of the equivalent monostatic configuration. From the derivations given in [32], we have the following observation: the bistatic configuration can be transformed to the monostatic configuration by applying small negative delays t_DMO as a function of the monostatic squint θ_sq. Applying these negative delays is akin to convolving the bistatic data with the smile operator.

Figure 4.14: Illustration of squint-dependent delay.

It was shown that the smile operator in the two-dimensional frequency domain for the constant offset case is

H_b(f_τ, f_η) = exp{ j 2π(f₀ + f_τ) · 2h² cos²(θ_sq) / (c² t_b) }
             ≈ exp{ j 2π(f₀ + f_τ) t_DMO(θ_sq) }    (4.43)

where

cos²(θ_sq) = 1 − f_η² c² / [ 4V² (f₀ + f_τ)² ]    (4.44)

The equations (4.43) to (4.44) are also derived in [32] but adhere to the notations defined in Section 4.2. V is the common velocity of the two platforms.
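The squint-to-Doppler mapping in (4.44) can be verified directly: at the carrier (f_τ = 0) the two-way Doppler frequency of the equivalent monostatic platform is f_η = (2V/λ) sin θ_sq, and inverting it through (4.44) returns cos²θ_sq exactly. A sketch with illustrative numbers:

```python
import math

c = 2.998e8
f0 = 9.65e9                  # illustrative carrier, not from the thesis
V = 100.0                    # common platform velocity, m/s
theta = math.radians(15.0)   # monostatic-equivalent squint

# Two-way Doppler frequency at squint theta (monostatic equivalent)
f_eta = (2.0 * V * f0 / c) * math.sin(theta)

# Recover cos^2(theta_sq) from azimuth frequency via (4.44), at f_tau = 0
cos2 = 1.0 - (f_eta**2 * c**2) / (4.0 * V**2 * f0**2)
print(cos2, math.cos(theta)**2)
```

The two printed values are identical, since the substitution is algebraically exact at f_τ = 0.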
2  3  sq  sq  Substituting (4.45) and (4.46) into (4.43) and, after some algebraic manipulation, the smile operator can be written as (4.48)  Ha(fr, /7,)~exp(j<^ (/ , f )) a  T  v  where h COS e.sq 2  M r J v )  =  Mfr  + fo)  3  Rc 0  2TT(/ T +  f )H2 0  Rc  r  i  4K (/r + / o ) J 2  Q  (4.49)  2  In [29], it was shown that, for the constant offset case, the LBF bistatic deformation term in (3.27) can be expressed by fr (/ ,/,)^27r(/ 2  r  r  2  ~  + / )/i 0  Rc 0  fr2J.  2  1  4K (/. + / o ) , 2  2  (4.50) To arrive at (4.50), we find that the approximation in (4.47) is not necessary. Instead of (4.47), it is also possible to demonstrate the link between both methods using just 85  one approximation: 2R  (4.51)  n  cos 9. sq  c  Substituting (4.51) and (4.44) into (4.43), it can be shown that the smile operator (4.43) is equal to bistatic deformation term (4.49), for the constant offset case. Geometrically, the approximation (4.51) estimates the slant ranges from the transmitter and the receiver to the point target by twice the slant range from the equivalent monostatic platform in the middle of the baseline. This approximation is adequate when the baseline is small compared to the bistatic range, 2/i/R <C 1/cosf9 . In fact, 0  sq  as observed from Figure 4.13, this approximation is better than the approximation used in (4.49). The ignored cosine term in (4.49) is regained in (4.50), when they are used together, as in [83].  4.6.2  Geometrical Proof for the L B F  Geometrically, we can represent [see Figure 4.13 and Figure 4.14] R  R  Q  £b(#sq) C COS  (4.52)  a  V  C COS  sqT  tf  sqR  2R  tm(9,  (4.53)  0  sqj  c  cos 9, sq  Applying the cosine rule to Figure 4.13, Rn  Ro  cos 9 T  cos 9, sq  sq  R  _  0  cos 9  sqR  hcos 9 f  2R sin 9  Rl  cos 9. sq  2  sq  R  L  ^  0  cos #  sq  u  0  x  2  sq  2 0  (4.54)  _ 2R sin 6>sq  hcos 9 f R  sq  0  V  cosc?  
Performing a binomial expansion on (4.54) and (4.55) up to the second-order term and substituting the results into (4.52), we have

t_b(θ_sq) ≈ 2R₀ / (c cos θ_sq) + h² cos³(θ_sq) / (R₀ c) + h⁴ cos⁷(θ_sq) / (4R₀³ c)    (4.56)

and

t_DMO(θ_sq) ≈ h² cos³(θ_sq) / (R₀ c) + h⁴ cos⁷(θ_sq) / (4R₀³ c)    (4.57)

The last term in (4.57) can be ignored if the baseline is small compared to the bistatic range, 2h/R₀ ≪ 4. In a typical satellite case with an R₀ of 600 km and a baseline of 10 km, the ratio 2h/R₀ is 0.017 and the phase component of the higher-order term has the small value of

Δφ = 2π f₀ h⁴ / (4R₀³ c) = 0.006π    (4.58)

Thus, the smile operator becomes

H_b(f_τ, θ_sq) ≈ exp{ j 2π(f₀ + f_τ) h² cos³(θ_sq) / (R₀ c) }    (4.59)

It should also be noted that the t_DMO(θ_sq) in (4.57) is more accurate for a bistatic SAR configuration than the one given in (4.43), as is evident from the discussion in Section 3.6.1. The t_DMO(θ_sq) in (4.43) is accurate when it is used to transform a bistatic survey into a monostatic survey in seismic image reconstruction. As the ratio of baseline to bistatic range becomes small, both estimates converge.

4.7  Simulation - Part 2

In essence, Rocca's smile operator can be viewed as a bistatic deformation term; therefore, it can be paired with the monostatic point target spectrum (the quasi-monostatic term) to formulate another point target spectrum. In this section, we simulate four cases to compare the accuracy of point targets focused using Rocca's smile operator, the LBF and the point target spectrum obtained with the MSR.

4.7.1  Simulation Parameters

A point target is simulated in each case using the airborne SAR parameters given in Table 4.3. The results of the simulations are shown in Figure 4.15 to Figure 4.26. Rocca's smile operator is decomposed into two operators, a range migration operator and a phase operator. Both operators are applied in the range Doppler domain using the accurate form of the operator [32].
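The satellite example around (4.58) can be reproduced numerically. The carrier frequency is not stated in this passage, so the sketch below assumes an L-band carrier of 1.27 GHz, and it takes the higher-order term of the time shift as h⁴/(4R₀³c) with the cosine factors near unity; both are assumptions layered on the reconstruction above:

```python
import math

c = 2.998e8
R0 = 600e3          # closest range of approach, m
h = 5e3             # half-baseline: the baseline 2h is 10 km
f0 = 1.27e9         # assumed L-band carrier (not stated in the text)

ratio = 2.0 * h / R0
print(ratio)        # about 0.017

# Phase of the ignored higher-order DMO term, cosine factors taken as 1
dphi = 2.0 * math.pi * f0 * h**4 / (4.0 * R0**3 * c)
print(dphi / math.pi)   # about 0.006
```

With these assumed values the computation lands on the quoted 0.017 ratio and a phase near 0.006π; a higher carrier frequency would scale the phase proportionally.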
After the preprocessing, the point target is focused using the monostatic point target spectrum [17].

4.7.2  Simulation Results

Rectangular weighting is used for both azimuth and range processing. The ideal resolution is 1.06 cells in both range and azimuth. The ideal PSLR (Peak Sidelobe Ratio) is -13.3 dB and the ideal ISLR (Integrated Sidelobe Ratio) is -10.0 dB.

Case IV: Low Baseline to Range Ratio with θ_sqT = −θ_sqR

For simulation Case IV, the ratio 2h/R₀ is small (0.05) and all the point target spectra are accurate. Figure 4.15 shows the reference point target focused using Rocca's smile operator. Figure 4.16 shows the same reference point target focused using the LBF. Figure 4.17 shows the results with the MSR spectrum expanded up to the fourth azimuth frequency term.

Table 4.3: Simulation Parameters.

Simulation parameters (Transmitter / Receiver):

  Platforms move in y-direction with velocity: 100 m/sec / 100 m/sec
  Center frequency: 10.17 GHz
  Range bandwidth: 75 MHz
  Doppler bandwidth: 232 Hz
  Altitude: 1000 m / 1000 m

Case IV
  Ratio of baseline to R₀: 0.05
  Distance between airplanes at η = 0: 1000 m
  Range to point target at η = 0: 20031 m / 20031 m
  Squint angle at η = 0: -1.43° / 1.43°

Case V
  Ratio of baseline to R₀: 0.124
  Distance between airplanes at η = 0: 1000 m
  Range to point target at η = 0: 8071 m / 8071 m
  Squint angle at η = 0: -3.55° / 3.55°

Case VI
  Ratio of baseline to R₀: 0.83
  Distance between airplanes at η = 0: 3000 m
  Range to point target at η = 0: 4026 m / 4026 m
  Squint angle at η = 0: -28.87° / 28.87°

Case VII
  Ratio of baseline to R₀: 0.27
  Distance between airplanes at η = 0: 1000 m
  Range to point target at η = 0: 5813 m / 4009 m
  Squint angle at η = 0: 21.24° / 50.0°

Figure 4.15: Point target response focused using Rocca's smile operator.

Figure 4.16: Point target response focused using LBF.

Figure 4.17: Point target response focused using MSR.

Case V: Moderate Baseline to Range Ratio with θ_sqT = −θ_sqR

For simulation Case V, the ratio 2h/R₀ is 0.124. The point target focused using Rocca's smile operator shows significant phase degradation [see Figure 4.18]. The other two spectra are still accurate. Figure 4.19 shows the same reference point target focused using the LBF. Figure 4.20 shows the results with the MSR expanded up to the fourth azimuth frequency term.

Figure 4.18: Point target response focused using Rocca's smile operator.

Figure 4.19: Point target response focused using LBF.

Figure 4.20: Point target response focused using MSR.

Case VI: Large Baseline to Range Ratio with θ_sqT = −θ_sqR

For simulation Case VI, the baseline is increased from 1 km to 3 km, to create a large baseline to range ratio (2h/R₀ = 0.83).
Figure 4.21 shows that Rocca's smile method is not able to focus the point target with this large baseline. Also, Figure 4.22 shows that the focusing limits of the LBF are reached at this baseline. Figure 4.23 shows that only the MSR is able to focus this symmetrical, large-baseline data correctly, by expanding the MSR up to the fourth azimuth frequency term.

Case VII: Moderate Baseline to Range Ratio with θ_sqT ≠ −θ_sqR

The bistatic configurations in Cases IV to VI satisfy the condition (4.39), since θ_sqT = −θ_sqR. In these symmetrical cases, the LBF is able to maintain accuracy up to large baseline to bistatic range ratios before starting to show phase degradation, as it does in Case VI, which has a very high baseline to bistatic range ratio. Basically, the LBF breaks down only at very extreme ratios when θ_sqT = −θ_sqR.

However, for simulation Case VII, the range vectors are no longer symmetrical and the condition (4.39) is no longer valid. Even with a smaller baseline to bistatic range ratio of 0.27, the point target response in Figure 4.25 is worse than in the symmetrical Case VI (Figure 4.22, where the baseline ratio is 0.83). Figure 4.24 shows the impulse response of the point target focused using Rocca's smile operator. For this baseline ratio, the preprocessing method using Rocca's smile operator is not able to focus the point target accurately. Figure 4.26 shows the point target focusing result with the MSR spectrum expanded up to the fourth azimuth frequency term. The accuracy is hardly affected by the change in bistatic configuration (compare with Figure 4.23).
Such an expansion results in a quasi-monostatic term and a bistatic deformation term. The Rocca's smile operator for the constant offset configuration was shown to be similar to the bistatic deformation using a geometrical method. The method of splitting the phase term into quasi-monostatic and bistatic deformation term may not be useful when there is a high bistatic degree as it may require the inclusion of many expansion terms in the bistatic deformation term, leading to the inefficiency. In the next chapter, the MSR is used to derive a new bistatic Range Doppler Algorithm.  94  R a n g e c o m p r e s s e d target  364  366  368  370  372  A z i m u t h c o m p r e s s e d target  374  1020  T i m e ( s a m p l e s ) —>  1025  11  T i m e ( s a m p l e s ) —>  Figure 4.21: Point target response focused using Rocca's smile operator. R a n g e c o m p r e s s e d target  374  376 378 380 T i m e ( s a m p l e s ) ->  A z i m u t h c o m p r e s s e d target  382  1020  1022  1024 1026 Time (samples)  1028  1030  ->  Figure 4.22: Point target response focused using LBF. R a n g e c o m p r e s s e d target  380  382  384  386  388  A z i m u t h c o m p r e s s e d target  390  1020  Time (samples) - »  1022  1024  1026  Time (samples)  1028 ->  Figure 4.23: Point target response focused using MSR. 95  1030  R a n g e c o m p r e s s e d target  460  465  A z i m u t h c o m p r e s s e d target  470  710  720  T i m e ( s a m p l e s ) ->  730  Time (samples)  740 ->  gure 4.24: Point target response focused using Rocca's smile operator. R a n g e c o m p r e s s e d target  374  376  378  380  382  A z i m u t h c o m p r e s s e d target  384  1080  1082  T i m e ( s a m p l e s ) ->  1084  1086  Time (samples)  1088  1090  ->  Figure 4.25: Point target response focused using LBF. 
Figure 4.26: Point target response focused using MSR.

Chapter 5

Bistatic Range Doppler Algorithm

5.1  Introduction

Bistatic SAR range histories, unlike monostatic ones, are azimuth-variant in general, as the transmitter and receiver can assume different motion trajectories. Nevertheless, the bistatic system can remain azimuth-invariant by restricting the transmitter and receiver platform motions to follow parallel tracks with identical velocities. In this case, the baseline between the two platforms does not vary with time. This azimuth-invariant property is important to conventional monostatic algorithms such as the Range Doppler Algorithm (RDA) [21-23] and the Chirp Scaling Algorithm (CSA) [24]. This is because processing efficiency is achieved by taking advantage of the fact that point targets with the same range of closest approach collapse to the same range history in the range Doppler domain. Performing one Range Cell Migration Correction (RCMC) operation in this domain achieves the correction of a whole family of targets. Also, the range Doppler domain allows the azimuth compression parameters to be changed conveniently with range.

In this chapter, the spectral result developed in the previous chapter is used to formulate a modified RDA that can handle the azimuth-invariant, bistatic case so that the same advantages can be obtained. Our approach to processing the azimuth-invariant bistatic SAR data with the RDA is to apply the spectrum [33] to improve the SRC accuracy in the two-dimensional frequency domain. The accuracy of the MSR allows this bistatic algorithm to handle highly squinted and wide-aperture cases.

First, note that the conventional RDA does not do any processing in the two-dimensional frequency domain.
SRC is commonly applied in the azimuth time domain as part of the range compression operation [19]. This approximation limits the degree of squint and the extent of the aperture that can be processed accurately. Focusing high-squint and wide-aperture cases is not a trivial task, as processing is complicated by a range Doppler coupling effect, which degrades the focusing ability of the conventional RDA. The squint-aperture cases that the RDA can handle accurately are considerably extended when SRC is performed in the two-dimensional frequency domain, since SRC takes on an increasing amount of azimuth frequency dependence as the squint or aperture increases (refer to SRC Option 2 in [17]). The two-dimensional frequency domain operations come at the expense of computing time, so they are avoided when possible.

The chapter begins by outlining the operations in the modified RDA. Following that, the two-dimensional phase equations for each stage are derived. A C-band airborne radar simulation is used to demonstrate the accuracy of the algorithm in Section 5.3. Finally, in Section 5.5, an efficient way to combine the Secondary Range Compression (SRC) with range compression is developed for certain squinted, moderate-aperture cases.

5.2  Bistatic Range Doppler Algorithm

The processing steps for the bistatic RDA are shown in Figure 5.1. It consists of the same steps as the RDA with SRC Option 2 [17], with range compression combined with SRC for efficiency.

Figure 5.1: Functional block diagram of bistatic RDA (range and azimuth FTs, range compression, SRC, range IFT, RCMC, azimuth compression, azimuth IFT).

The steps in the bistatic RDA are summarized as follows:

1. Range and azimuth FTs are performed to transform the signal into the two-dimensional frequency domain.

2. Range compression is performed using a phase multiply in the two-dimensional
3. SRC has its strongest dependence on range frequency and azimuth frequency, so is best implemented using a phase multiply in the two-dimensional frequency domain. Although not explicitly shown in Figure 5.1, the range compression and SRC phase multiplies can be combined into one phase multiply for efficiency. 4. A range IFT is applied to transform the data back to the range Doppler domain. 5. RCMC is applied using an interpolator in the range direction. • 6. Once the trajectories are straightened, azimuth compression is conveniently applied in the range Doppler domain using a range dependent phase multiply. 7. The final step is to perform an azimuth IFT to transform the data back to the time domain, resulting in a compressed image, which is complex-valued.  5.2.1  Analytical Development  The development of the bistatic RDA begins with the two-dimensional spectrum, (4.22), of the point target being considered. The first step is to replace the  l/(/ +/o) T  terms of (4.22) with the following power series expansions 1 (fo  +  1 fr)  1 (fo  +  1 fr)  2  1 (fo  +  fr)  1  (5.2)  1  (5.3)  fl  1 3  (5.1)  fo  /o  3  100  These power series converge quickly because f  \f \ in practice. Substituting  Q  T  (5.1) to (5.3) into (4.22), an explicit form of the phase of the two-dimensional spectrum can be obtained. The phase term in (4.22) can be decomposed into the following components: <t>2df(fr,f )~<Pr (fr) V  e  +  4>a.z{f ) + V  0src(/r,/r,)  +  </>rcm(/r, fr,)  +  <f>res  (5-4)  Each of these phase terms can be interpreted as follows: • The first phase term, </>, represents the g r  M r )  range modulation:  =- ^  (5.5)  This phase term is a well-known and dependent on only f and thus can be T  easily separated. Range compression uses a phase multiply to remove this phase term. Alternatively, the data could be range compressed in the range frequency, azimuth-time domain, just after the range FT. 
This operation is the same as in the monostatic case, as the bistatic geometry has no direct effect on the pulse modulation. • The second phase term,  4> , az  represents the a z i m u t h  101  modulation,  to be re-  moved by azimuth matched filtering and it is solely dependent on /,,: 1 Zhfv  4k  +  yfr  2  Jo  2  +  3^I /T) H  -7— frf + -J_ frf  2  Jo  Jo  9fc - 4fc fc 4*i/„ + + 64&| 2  3  2  4  -^f  6  2  Jo  Ak\ c  +  1/  (5.6) Jo  0  Because of the significant range dependence of these terms, the azimuth compression is applied in the range Doppler domain. • The third phase term,  4>icm,  is linearly dependent on range frequency  f  T  and  represents the range cell migration term: 1  Rcen ^  c  4fc  '*L__£  2  Sh\ c 2 £  2  fv  8k?  9fc - Ak k + 64fc| 2  3  2  A  k?  _  f  "/?7  J O  fV Jv 2  c  fo  3c 3  6k?c 2  J  3  (5.7)  4 Jv Jo f  f  4  Note that the terms inside the large braces represent the RCM displacement. As the  k  coefficients depend on range, as does the  R  c e n  term, this range dis-  placement must be compensated in the range Doppler domain. This is allowed, 102  as there is no range frequency dependence in these terms. The displacement is corrected using a range direction interpolator, as in the monostatic RDA. • The fourth phase term, </>, represents the range azimuth coupling term: src  2 Vsrc  / , \  3~]  r  4A; /o  JT]  2  (5.8)  This phase term is the remaining contribution that depends on  and f . This T  phase term becomes significant in higher squint, finer resolution and longerwavelength cases. If uncompensated, the range azimuth coupling may cause significant degradation in the resolution, especially in the range direction. SRC is used to remove this coupling term. SRC is applied in the two-dimensional frequency domain domain, as the strongest dependencies exist in this domain. 
However, the SRC term is weakly range dependent, and (5.8) must be evaluated at a specific range, called the reference range, usually at the swath center. While the SRC that corrects for the scene center is often sufficient for the whole scene, for wider range swaths it may be necessary to segment the scene into range blocks and process them separately. 103  In general, for each block, the uncompensated phase error, mainly quadratic, should be limited to be within ir/2. The last phase term, </>, represents a residual phase: res  k ^fo 2  0r<  2TT<  c  .,  4fc  2  4/c fc 64A;, 2  £  4  +  c  2  9fc,  +  T  ^•cen  8rC,  C  3  4  (5.9)  /o  5  As this phase does not depend on range frequency or azimuth frequency, it has no effect on the focusing process. However, it does depend on target range, and should only be ignored if a magnitude image is the final product. After removing the range modulation and range azimuth coupling using the phase terms (5.5) and (5.8) in the two-dimensional frequency domain, an inverse range FT is performed to obtain a two-dimensional signal in the range Doppler domain Srdirjr,)  = Pr(r-  - ^ p , / r , ) W (f az  exp{-j where the range envelope, migration term,  </>cm, r  -  f ) Vc  (5.10)  4>az(fr,)}  has a sinc-like shape in range, and the range  p (T,f ), r  v  v  is related to the phase, ,(/,,/„) =  4> {f ), Tcm  -2TT/  v  by (/,)  T  (5.11)  The function, H ^(/ — f ) , is the target envelope in the Doppler frequency domain, /  a  7)  Vc  where the average Doppler frequency (the Doppler centroid) of the target is given by ki flc  fo  104  (5.12)  RCMC is performed by a range-varying interpolation operation at this stage. The RCMC operation straightens the trajectories so that they now run parallel to the azimuth frequency axis. The final step is azimuth compression, which focuses each target to its mid-aperture range and mid-aperture time. The focused point target signal is given by 5(r, r?) 
= p_r(τ - R_cen/c) p_az(η) exp{ j2π f_ηc η }   (5.13)

where p_r(·) and p_az(·) are the sinc-like compressed pulse envelopes. The exponential term in (5.13) is a linear phase ramp that results from the squinted geometry (giving a non-zero Doppler centroid).

5.2.2 Compression Example

The effect of these terms, and their compensation, is illustrated in Figure 5.2, using an example of a squinted bistatic SAR. The same parameters are used as in the C-band simulation described in Section 5.3, except that only a single point target is used in the present section. The center target trajectory has a large migration of more than a hundred range cells, as shown in Figure 5.2(a). When transformed into the range Doppler domain, the range azimuth coupling causes the trajectory to be dispersed in range, as shown in Figure 5.2(b). SRC removes the range azimuth coupling effect, as shown in Figure 5.2(c) after the range IFT. This process can be viewed as a recompression of the data in range, hence the name 'Secondary Range Compression'. After RCMC, all the energy from one point target falls completely into one range cell, as shown in Figure 5.2(d). Without SRC, the energy would remain spread over several range cells, resulting in resolution degradations in range and azimuth.

Figure 5.2: How the target trajectories are changed by SRC and RCMC. (a) Range compressed data; (b) data after azimuth FT; (c) data after SRC; (d) data after RCMC.

5.2.3 Application to the CSA

The Chirp Scaling Algorithm (CSA) [24] is an alternate way of implementing an RDA-like algorithm. It differs in that RCMC is performed in two parts, a differential and a bulk part, and the interpolation operation is avoided.
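The role of each phase term in Section 5.2.1 can be made concrete with a small numeric sketch. The toy example below uses assumed round-number parameters, zero squint (k₁ = 0) and a quadratic-only range history; it only illustrates on which domain each phase multiply acts, and is not the thesis implementation:

```python
import numpy as np

# Assumed toy parameters (illustrative only).
c, f0 = 3.0e8, 5.3e9        # speed of light [m/s], carrier [Hz]
k2 = 80.0                   # quadratic coefficient of R(eta) [m/s^2]
R_cen = 15000.0             # bistatic range at beam centre [m]

Na, Nr = 256, 128
f_eta = np.fft.fftfreq(Na, d=1/600.0)     # azimuth (Doppler) frequencies [Hz]
f_tau = np.fft.fftfreq(Nr, d=1/80.0e6)    # range frequencies [Hz]
FT, FE = np.meshgrid(f_tau, f_eta)        # both arrays have shape (Na, Nr)

# Azimuth modulation, (5.6) with k1 = 0: depends on f_eta only.
phi_az = -2*np.pi * c * FE**2 / (4*k2*f0)

# RCM term, (5.7) with k1 = 0: linear in f_tau; the braces hold the displacement.
rcm_m = c**2 * FE**2 / (4*k2*f0**2)            # displacement in metres
phi_rcm = (2*np.pi*FT/c) * (R_cen + rcm_m)

# SRC term, (5.8): quadratic in f_tau, removed in the 2-D frequency domain.
phi_src = -2*np.pi*c*FE**2 / (4*k2*f0**3) * FT**2

# Conjugate phase multiplies: SRC in the 2-D frequency domain, azimuth
# compression later per range gate in the range Doppler domain.
H_2df = np.exp(-1j*phi_src)
H_az = np.exp(1j*phi_az[:, 0])
```

The checks worth noting are structural: phi_az is independent of f_τ (so it can be applied after the range IFT), while phi_rcm scales exactly linearly with f_τ (so the bracketed displacement is a pure range shift).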
First, a phase multiply is applied to range uncompressed data in the range Doppler domain to equalize the target migration at different ranges. Then, a bulk RCMC is done in the two-dimensional frequency domain to complete the RCMC. Range compression and SRC are also performed during the same phase multiply for efficiency. Because of the two-dimensional frequency domain operations of the CSA, it is straightforward to apply the bistatic range compression and SRC phase terms, (5.5) and (5.8), to the CSA. For the RCMC operation, the bulk component is (5.7) with range set to a mid-swath reference range, and applied in the same two-dimensional frequency domain. The differential component is computed as the difference between (5.7) evaluated at the explicit range and that at the reference range. The differential component is inverse FTed and applied in the range Doppler domain, as in the monostatic case [17,24].

5.3 Bistatic Simulation Example

To prove the validity of the formulation and to show how a whole scene can be focused, a simulation based on a flat-Earth model with seven point targets is presented in this section. These point targets are illuminated at the same time with the composite bistatic beam. The geometry is illustrated in Figure 5.3.

5.3.1 Simulation Parameters

The simulation uses the airborne SAR parameters given in Table 5.1. An appreciable amount of antenna squint is assumed to introduce severe range azimuth coupling. The oversampling ratio is 2.0 in range and 1.5 in azimuth. The range resolution is 2.05 m (2.09 cells) and the azimuth resolution is 1.48 m, based on the definitions given in [64,65]. A Kaiser window with a smoothing coefficient of 2.5 is used to suppress the side lobes in both range and azimuth processing. The separation between adjacent point targets is 200 m. All the point targets lie along a vector, u_g. This vector corresponds to the projection of the gradient of the bistatic range, ∇R(η), onto the ground plane.
Geometrically, ∇R(η) is a vector passing through the angular bisector of the bistatic angle, β (refer to [64,65] and Section 2.4.3).

Figure 5.3: Geometry of bistatic simulation example.

5.3.2 Simulation Results

The simulation results of the seven point targets are shown in Figure 5.4. The SRC reference range is taken from the center target, Target A. The impulse responses of Target A are shown in Figure 5.5 and point target quality measurements for all targets are given in Table 5.2. It is seen that the reference target and its neighbors are well-focused, but the edge targets are noticeably degraded due to the application of a range-invariant SRC filter for the whole scene [see Figure 5.6 for the responses of Target D]. If the phase error is restricted to be within ±π/2, the SRC filter can handle a range invariance region of 1270 m for this example. Targets D and G are near the edges of the invariance region and have a phase error of 0.48π.

Table 5.1: Simulation parameters for an azimuth-invariant case.

  Simulation parameters                  Transmitter    Receiver
  Velocity in x-direction                0 m/sec        0 m/sec
  Velocity in y-direction                200 m/sec      200 m/sec
  Velocity in z-direction                0 m/sec        0 m/sec
  Center frequency                       5.3 GHz
  Range bandwidth                        80 MHz
  Doppler bandwidth                      194 Hz
  Altitude                               3000 m         1000 m
  Nominal range                          15237 m        13010 m
  Nominal squint angle                   20°            51.7°
  Nominal bistatic angle                 31.9°
  Baseline distance between platforms    8062 m

Figure 5.4: Point targets focused using the bistatic RDA (Targets G, F, E, A, B, C and D, spaced across ±600 m in ground range).

The range PSLR (Peak Side Lobe Ratio) is within 2 dB of the theoretical value of -20.95 dB and the range ISLR (Integrated Side Lobe Ratio) is within 2 dB of the theoretical value of -18.5 dB.
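The PSLR and ISLR values quoted in this section can be measured from a compressed impulse response by locating the nulls that bound the main lobe and comparing side-lobe energy with main-lobe energy. The routine below is my own simplified sketch (not the thesis measurement code); it checks itself against an unweighted sinc, whose textbook PSLR is about -13.3 dB:

```python
import numpy as np

def impulse_metrics(p):
    """Coarse IRW (samples), PSLR (dB) and ISLR (dB) of a 1-D impulse response."""
    power = np.abs(np.asarray(p))**2
    k = int(np.argmax(power))
    # Walk outwards from the peak to the first nulls bounding the main lobe.
    l = k
    while l > 0 and power[l-1] < power[l]:
        l -= 1
    r = k
    while r < len(power) - 1 and power[r+1] < power[r]:
        r += 1
    main = power[l:r+1].sum()
    side = power.sum() - main
    peak_side = max(power[:l].max(initial=0.0), power[r+1:].max(initial=0.0))
    pslr = 10*np.log10(peak_side / power[k])
    islr = 10*np.log10(side / main)
    irw = int(np.count_nonzero(power >= power[k]/2))   # -3 dB width, coarse
    return irw, pslr, islr

# Self-check on an unweighted sinc, 64-times oversampled.
t = np.linspace(-32, 32, 4097)
irw, pslr, islr = impulse_metrics(np.sinc(t))
```

A windowed response (e.g. the Kaiser weighting used here) would be measured the same way, with lower side lobes and a broader IRW.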
The combined weighting of the composite antenna window and the Kaiser window gives a theoretical azimuth PSLR of -18.4 dB and a theoretical azimuth ISLR of -15.8 dB. The measured azimuth PSLR and ISLR are within 2 dB of these theoretical values.

Table 5.2: Point target quality measurements.

  Target   Broadening (%)   PSLR (dB)      ISLR (dB)
  Range Impulse Response
  A        <0.1             -20.9          -18.4
  B,E      <0.1             -19.5, -19.7   -17.1, -17.4
  C,F      1.70, 1.68       -19.0, -19.1   -17.8, -17.9
  D,G      5.02, 4.99       -22.5, -22.4   -19.9, -19.7
  Azimuth Impulse Response
  A        <0.1             -18.2          -15.8
  B,E      <0.1             -17.8, -17.9   -15.6, -15.6
  C,F      1.20, 1.21       -17.0, -17.3   -15.0, -15.3
  D,G      1.50, 1.45       -17.5, -17.5   -15.5, -15.6

Figure 5.5: Measurement of point target focus for point target A (range: IRW = 2.093 cells, PSLR = -20.936 dB, ISLR = -18.407 dB; azimuth: IRW = 1.554 cells, PSLR = -18.231 dB, ISLR = -15.797 dB).

5.3.3 Implementation Issues

To focus the imaged scene, the range derivatives given in (4.3) to (4.5) have to be determined as a function of range. Instead of calculating the derivatives for each range gate, a curve fitting approach can be utilized. For instance, the range-varying derivatives are calculated for a representative set of targets equally spaced over a range invariance region.

Figure 5.6: Measurement of point target focus for point target D.

A curve is fitted for each derivative to obtain a polynomial function in range (a cubic polynomial was found to be sufficiently accurate for this 1.6 km example). Using these curves, the required parameters can be generated for each range cell. The image is focused in the bistatic range versus azimuth time domain. Point targets are registered to their mid-aperture range and mid-aperture time.
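The curve-fitting shortcut can be sketched as follows. The coefficient samples below are invented (a smooth quadratic trend in range, purely to exercise the fit); in practice the derivatives of (4.3) to (4.5) would be computed from the geometry at a few reference ranges:

```python
import numpy as np

# Hypothetical k2(R) samples at nine reference ranges over a 1.6 km swath.
R0 = 15300.0                                   # assumed swath-centre range [m]
R_ref = np.linspace(R0 - 800.0, R0 + 800.0, 9)
k2_ref = 60.0 + 1.2e-3*(R_ref - R0) - 2.0e-8*(R_ref - R0)**2

# Fit a cubic in range (found sufficient for a 1.6 km swath in the text).
coef = np.polyfit(R_ref - R0, k2_ref, deg=3)

# Evaluate the fitted polynomial for every 1 m range cell.
R_cells = R0 + np.arange(-800, 800, dtype=float)
k2_cells = np.polyval(coef, R_cells - R0)
```

Centering the abscissa on the swath centre (R - R0) keeps the least-squares fit well conditioned; the same fit is repeated independently for each derivative.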
Figure 5.7 illustrates how a rectangular patch becomes a parallelogram in the focused map. The three targets, denoted by white circles, lie along the same vector u_g and have the same beam center crossing time. When focused, they are registered at the same azimuth time. Thus, a rectangular patch on the ground plane appears as a parallelogram in the focused image, unless u_g is perpendicular to the flight path.

Image registration maps the focused image, I₁(τ, η), to a flat-Earth plane, I₂(x, y), as shown in Figure 5.7. The process involves two steps. First, positions of known grid points in I₂(x, y) are mapped onto I₁(τ, η). These grid points are usually chosen to be parallel and perpendicular to the flight path. Then an affine transformation [91] is used to map the rest of the points.

Figure 5.7: A simple illustration to show shearing in the focused map. An imaged rectangular patch becomes a parallelogram in the focused map.

5.4 Focusing a Real Bistatic Data Set

The bistatic RDA was demonstrated on the azimuth-invariant parallel case in Section 5.3. In this section, the bistatic RDA is used to focus real bistatic data collected using two monostatic systems. The data were acquired by a pair of airborne monostatic SAR platforms from the German Research Establishment for Applied Natural Sciences (FGAN). The Airborne Experimental Radar (AER-II) acts as the transmitter and is mounted on a Dornier-228 aircraft. The Phased Array Multifunctional Imaging Radar (PAMIR) is the receiver and is mounted on a Transall C-160 aircraft. The sensors are operated in the X-band, sharing a common bandwidth of 300 MHz. Figure 5.8 shows the setup of the bistatic configuration. The experiment cannot be considered a purely azimuth-invariant parallel case because of wind and other reasons.
The positions of the airplanes were estimated by a Kalman filter fusing Global Positioning System (GPS) and inertial navigation system (INS) measurements.

Figure 5.8: The bistatic configuration used in the real bistatic data capture (altitudes 2750 m and 1700 m).

The motion compensation allows the data to exhibit some degree of azimuth-invariancy and helps improve the quality of the image considerably. Figure 5.9 shows a focused bistatic SAR image. The scene is part of a larger image over Oberndorf (Lech) in Germany. The geometry and SAR parameters are given in Table 5.3.

5.5 Approximate Bistatic Range Doppler Algorithm

For coarser resolution and lower squint configurations, the range azimuth coupling (that causes the 'widening' of the energy observed in Figure 5.2(b)) is less dependent on azimuth frequency. In this case, the azimuth frequency dependence can be neglected when generating the SRC filter by using a coupling term that is fixed at the Doppler centroid frequency, f_ηc. The derivation can be simplified further by neglecting the third-order and higher-order terms of f_τ in the φ_src phase equation. Consequently, the SRC phase can be approximated by the quadratic function [17]

  φ_src(f_τ) ≈ -π f_τ²/K_src   (5.14)

where the constant K_src is obtained by evaluating the quadratic-in-f_τ coefficient of (5.8) at the Doppler centroid; to leading order,

  1/K_src = (c f_ηc²)/(2 k₂ f₀³)   (5.15)

Table 5.3: Geometry and flight parameters for the bistatic SAR system.

  Simulation parameters                  Transmitter (AER-II)   Receiver (PAMIR)
  Velocity in x-direction                0 m/sec                0 m/sec
  Velocity in y-direction                98 m/sec               98 m/sec
  Velocity in z-direction                0 m/sec                0 m/sec
  Center frequency                       10.17 GHz
  Range bandwidth                        300 MHz
  PRF                                    1250 Hz
  Bistatic angle β                       13°
  Nominal altitude                       2448 m                 2453 m
  Depression angle                       55°                    42°
  Nominal distance between airplanes     1177.5 m
  Minimum distance between airplanes     1174.3 m
  Maximum distance between airplanes     1179.2 m

The phase modulation of the range azimuth coupling and the range FM modulation can be combined to form a new FM rate, K_m [17]

  1/K_m = 1/K_r - 1/K_src   (5.16)

Figure 5.9: A bistatic SAR image of Oberndorf (Lech), Germany.

In this case, SRC and range compression can be implemented in the same step, resulting in a more efficient algorithm. The processing steps for this approximate bistatic RDA are illustrated in Figure 5.10. The steps are similar to those of the conventional RDA (refer to SRC Option 3 in [17]). K_src and K_m are weakly range dependent, and can often be kept constant for the imaged scene. If the range swath is too wide, the scene can be subdivided into contiguous segments and processed in range invariance regions.

Figure 5.10: Functional block diagram of the approximate RDA (azimuth FT, range FT, range compression with approximate SRC, range IFT, RCMC, azimuth compression with azimuth IFT).

5.6 Conclusions

The bistatic RDA derived in this chapter applies to azimuth-invariant cases where the baseline between the transmitter and receiver is fixed. Depending on the situation, it may not be possible to achieve such a flight configuration, e.g., an airborne-to-satellite bistatic case where the velocities differ by several orders of magnitude. Even for the case when the two platforms are traveling in the same medium, some form of motion compensation is applied to the signal to ensure that the transmitter and receiver are moving in ideal parallel tracks with constant velocities [49]. It may be argued that motion compensation can be used to compensate data collected from a non-parallel tracks scenario to one with parallel tracks. This is not possible in the general case because motion compensation is computed based on a fixed reference point or "mocomp point". It is accurate for points in the vicinity of the mocomp
It is accurate for points in vicinity of the mocomp 116  point but, residual phase error increases further away from the reference. Likewise, it is important to limit the amount of motion compensation applied to the data. In the next chapter, the extended NLCS algorithm is introduced and it is able to cope with some amount of azimuth-invariance in the signal data.  117  Chapter 6 NLCS Algorithm for the Parallel and Slightly Non-Parallel Cases 6.1  Introduction  The NLCS algorithm is capable of focusing highly squinted and short-wavelength monostatic configurations. In this chapter, the NLCS is extended to focus the bistatic configuration where the receiver and the transmitter are moving in parallel or slightly non-parallel tracks with squint. This algorithm is mainly used for airborne-to-airborne platform bistatic configurations or satellite-to-satellite platform bistatic configurations. Due mainly to its LRCM, severe range Doppler coupling occurs in highly squinted SAR data. The original NLCS algorithm focuses highly squinted monostatic data by removing LRCM. However, it cannot process data from wide apertures and long wavelengths system. These data have appreciable range curvature. Left uncorrected, they would cause severe impulse response degradation in both the range and azimuth directions. The extended algorithm deals with this problem by incorporating RCMC  118  and SRC into the processing. A frequency-domain matched filter is also designed using the MSR result, this matched filter allows azimuth compression to be performed efficiently. NLCS makes several assumptions during image formation. These assumptions affect the accuracy of point target focusing and limit the size of region that can be processed with acceptable impulse response degradation (invariance region). The impacts of these assumptions are investigated in this chapter. A method of executing registration of the focused image to the ground plane is also introduced.  
6.2 Extended NLCS

Extension of the NLCS to focus long-wavelength bistatic configurations was introduced in [53,92]. Figure 6.1 illustrates the main steps taken in the extended NLCS algorithm. To ease discussion, the bistatic configuration where both platforms assume motion in parallel or slightly non-parallel tracks is referred to as the parallel case. The stages in the extended NLCS are summarized as follows:

1. The first step of the algorithm is to perform a range FT to transform the data to the range frequency domain.

2. Range compression is performed in the range frequency domain. This is followed by an LRCMC and a linear phase removal. The LRCMC can be implemented using a phase ramp in the range frequency domain, which allows the LRCMC to be done together with range compression.

3. A range IFT is then performed to transform the data back into the range time domain.

4. A linear phase removal is applied to remove the linear phase from the data.

Figure 6.1: Functional block diagram of the extended NLCS algorithm (without SRC): range FT, range compression/LRCMC, range IFT, linear phase removal, NLCS, azimuth FT, QRCMC, azimuth compression, compressed data.

5. Targets with different FM rates are aligned in the same range gate. A perturbation function is applied in the azimuth time domain to equalize the FM rates along each range gate. The perturbation function for the parallel case is a cubic function of azimuth time (similar to the monostatic case [44]). This stage is also known as the NLCS stage.

6. An azimuth FT is then performed to transform the data into the azimuth frequency domain.

7. Next, the QRCMC or range curvature correction is performed in the range Doppler domain. This is an extension to the original NLCS algorithm and is not necessary for short-wavelength cases.

8. Azimuth compression is then carried out using a frequency-domain matched filter.

9.
Finally, an azimuth IFT is performed to transform the focused data back into the range and azimuth time domain.

For very wide apertures, even after LRCMC, there is still cross coupling between range and azimuth in the range Doppler domain. The cross coupling is caused by a large range curvature in the order of tens or hundreds of range cells. For these cases, SRC is necessary. SRC is applied in the two-dimensional frequency domain to remove the cross coupling effect before applying the QRCM correction. The extra FT and IFT operations make implementing the NLCS algorithm with SRC inefficient. Therefore, SRC should be avoided whenever possible. The SRC is discussed in more detail in Section 6.2.6. In the subsequent sections, each stage of the extended NLCS algorithm is analyzed and discussed.

6.2.1 LRCMC and Linear Phase Removal

A narrow range pulse is formed after range compression. The trajectory of the point target in the two-dimensional signal space contains both the LRCM and the non-linear RCM, or range curvature. After range compression, LRCMC is performed; after the removal of the LRCM, the linear phase is removed. Both the LRCM and the linear phase terms contribute to the cross-coupling. This can be concluded by observing the SRC phase term, φ_src, in (5.8) and the MSR formulation in (4.23).

The NLCS algorithm assumes the LRCM and the linear phase term to be constant. Ideally, the LRCM and linear phase term for each point target depend on the squint angles of the platforms, since the squint angles vary with the ranges to the point target. In the azimuth-variant bistatic case, these linear components
The range to be shifted varies with azimuth but is constant along the azimuth and is given by S (v)  = [V sin(f? ) + F sin(c9 )]r  r  T  sqT  R  sqR  (6.1)  ?  The LRCMC step can be done by linear interpolation in the range time domain or done together with range compression in range frequency domain using a frequency ramp. Removal of the linear phase can be done in the azimuth frequency domain by multiplication with a signal given by •sircmfa)  =  E  X  P  WT sin(#sqT) +  VR  sin((9 )] sqR  77  j  (6.2)  To illustrate the processing of the NLCS algorithm, consider a simple flight geometry, such as the one shown in Figure 6.2 with three point targets (A, B and C). Targets B and C have the same range from the flight paths. The time delay of the beam center crossing time of Target C from that of Target B is  Targets A and B  have the same beam center crossing time. For processing purposes, Target A is chosen as the reference point (mid-swath). The target trajectories are shown on the right of Figure 6.2. Since the L R C M is the same for all three targets, the following relation can be written, •RcenB = -RcenC = -RcenA + ^Al Vd  (6-3)  where the subscript of i? A is interpreted as the slant range of Target A at the beam cen  center crossing time, and .similarly for Targets B and C. 122  Re, (Rc„AR)  R„ (Rc„AR)  +  +  c  RjW'~  r  11 = 0  A * B  L  L  I  path — Range—^  Range—r  Before L R C M C After L R C M C  Figure 6.2: Illustration of LRCMC in NLCS. The signal of the reference point (Target A) is given by RwcmAirj) SA{T,V) ~ PT[T -  J^^exp  RlrcmAfo)  -j2nf  0  (6.4)  ^  where the instantaneous slant range of Target A after removing the linear term is given by RhcmAiv)  =  RcenA + ^A2  + k 3 rf + k A  7? + ...  (6.5)  4  M  For azimuth-invariant cases, Targets B and C have the same trajectory history and can be focused with the same azimuth matched filter. 
For the azimuth-variant case, the same azimuth-matched filter can be used provided that an azimuth-invariance region is used to keep the phase degradation within the acceptable limits. The signal for Target C after LRCMC and linear phase removal is given by

  s_C(τ, η) ≈ p_r(τ - R_lrcmC(η)/c) w_az(η - η_d) exp{ -j 2π f₀ R_lrcmC(η)/c }   (6.6)

where

  R_lrcmC(η) = R_cenA + k_B2 (η - η_d)² + k_B3 (η - η_d)³ + k_B4 (η - η_d)⁴ + …   (6.7)

6.2.2 Range Curvature Correction

After LRCMC, the QRCM dominates the range curvature; higher-order range curvature is usually negligible compared to the QRCM. Uncorrected range curvature leads to impulse response degradation in both the range and azimuth directions. The residual RCM is usually kept to less than half a range resolution cell so that the degradation is within acceptable limits. RCMC is done in the range Doppler domain. In earlier papers, the range curvature in the range Doppler domain was derived up to the QRCM [53,92]. However, with the new analytical MSR spectrum, the higher-order range curvature can be derived with relative ease.

The range curvature as a function of azimuth frequency can be obtained from φ_rcm(f_τ, f_η) by setting k₁ in (5.7) to zero and performing a range IFT. The range Doppler signal of Target A is found to be

  s_A(τ, f_η) = p_r(τ - (R_cenA + R_curv(f_η))/c) W_az(f_η) exp{ -j θ_azA(f_η) }   (6.8)

where θ_azA(f_η) is the azimuth phase modulation and the range curvature as a function of azimuth frequency is given by

  R_curv(f_η) = (c² f_η²)/(4 k_A2 f₀²) + (c³ k_A3 f_η³)/(4 k_A2³ f₀³) + (3 c⁴ (9 k_A3² - 4 k_A2 k_A4) f_η⁴)/(64 k_A2⁵ f₀⁴)   (6.9)

The range curvature does not have a strong dependency on range. As an example, consider a 1-m resolution L-band system with a baseline of 5 km, θ_sqT = 30°, θ_sqR = 45.3° and a QRCM of 10 range resolution cells: over a range swath of 5 km, the QRCM mismatch between the mid-swath point target and the edge point target is less than 10% of a resolution cell. Thus, the range curvatures of Target A and Target C overlap quite closely in the range Doppler domain.
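Whether a QRCM-only correction is adequate can be checked by evaluating (6.9) directly over the processed Doppler band. The coefficient values below are assumed round numbers, not the thesis example:

```python
import numpy as np

c, f0 = 3.0e8, 1.27e9                 # assumed L-band system
kA2, kA3, kA4 = 60.0, 0.5, 0.01       # assumed series coefficients of R(eta)

def r_curv(f_eta):
    """Range curvature (6.9) in metres, after LRCMC (k1 = 0)."""
    q = c*f_eta/f0
    return (q**2/(4*kA2)
            + kA3*q**3/(4*kA2**3)
            + 3*(9*kA3**2 - 4*kA2*kA4)*q**4/(64*kA2**5))

f = np.linspace(-100.0, 100.0, 201)   # processed Doppler band [Hz]
curv = r_curv(f)

# How much the cubic and quartic terms add beyond the QRCM alone.
qrcm_only = (c*f/f0)**2/(4*kA2)
residual = np.max(np.abs(curv - qrcm_only))
```

If `residual` stays well below half a range cell, correcting only the quadratic component (as in the original NLCS) would be sufficient; otherwise the full polynomial of (6.9) must be interpolated out.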
The range curvature of both points can be removed with the same RCMC process. Similar to the bistatic RDA, the RCMC is performed by a range-varying interpolation operation. Trajectories become parallel to the azimuth frequency axis after the RCMC.

For efficient implementation of the NLCS algorithm, range curvature correction is done after the FM rate equalization stage. This is because FM rate equalization is done in the time domain, while RCMC and azimuth compression are both done in the range Doppler domain. If RCMC were done before FM rate equalization, an azimuth FT would have to be performed before RCMC and an azimuth IFT after it. It is therefore more efficient to perform FM rate equalization before range curvature correction, because it saves an extra FT pair. However, performing the FM rate equalization before RCMC results in different perturbation coefficients being applied to the same trajectory. This is a non-linear operation, and its impact is studied in Section 6.4.1. That investigation showed that the impact of switching these stages on the focusing accuracy is minor.

6.2.3 FM Rate Equalization

The azimuth signal now consists of baseband FM chirps that originated at different ranges and therefore have different FM rates within the same range gate. Consequently, it is not possible to properly compress the whole azimuth array using a single matched filter. One way to tackle this problem is to equalize the dominant (quadratic) phase term using a perturbation function. Because the higher-order phase terms do not change as rapidly with range as the quadratic term does, they can be assumed to be constant for the processed region. This process can also be viewed as pre-processing, since it effectively makes the dominant, second-order phase component azimuth-invariant before the azimuth compression is applied. The perturbation function for the parallel case is a cubic function, similar to that in the monostatic case [44]. Figure 6.3 illustrates how the equalization works.
Figure 6.3(a) shows the azimuth FM signals of three point targets with different FM rates lying in the same range gate, and Figure 6.3(b) shows the corresponding quadratic phase of each azimuth signal. The phase of a cubic perturbation function is shown in Figure 6.3(c); this is added along the azimuth of the range gate. Figure 6.3(d) shows how the quadratic phases of the three point targets are altered by the perturbation so that all three share the same quadratic phase. Finally, Figure 6.3(e) shows the perturbed signals of the point targets.

Figure 6.3: Illustration of the effects of perturbation in the parallel case.

This illustration can be used to interpret the phase signal derived in the next section. In order to simplify the discussion, the phase equations are shown up to the quadratic term. This restriction can be lifted by the inclusion of higher-order terms, as discussed in Section 6.2.5.
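The equalization can also be checked numerically. With purely quadratic chirps at rates K_a and K_a + ΔK centred at η = 0 and η = η_d, adding a cubic phase παη³ changes the local FM rate at η by 3αη, so choosing α = -ΔK/(3η_d) makes the two rates equal. The numbers below are invented for the check:

```python
import numpy as np

prf = 600.0
eta = (np.arange(4096) - 2048) / prf          # azimuth time axis [s]

Ka_ref, Ka_off = -95.0, -95.4                 # assumed FM rates [Hz/s]
eta_d = 0.5                                   # offset of the second chirp [s]

# Cubic perturbation coefficient that equalises the two rates.
alpha = (Ka_ref - Ka_off) / (3*eta_d)

def local_fm_rate(phase, t, t0):
    """(1/2pi) d^2(phase)/dt^2, evaluated at the sample nearest t0."""
    d2 = np.gradient(np.gradient(phase, t), t)
    return d2[np.argmin(np.abs(t - t0))] / (2*np.pi)

pert = np.pi*alpha*eta**3
r_ref = local_fm_rate(np.pi*Ka_ref*eta**2 + pert, eta, 0.0)
r_off = local_fm_rate(np.pi*Ka_off*(eta - eta_d)**2 + pert, eta, eta_d)
```

After the perturbation, both chirps exhibit the same local FM rate at their respective centres, which is exactly what allows a single azimuth matched filter per range gate.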
their sum is given by  R c e  nA  and  R Acen  Target C and B can be focused with the same azimuth-matched filter in the invariance region since they have the same trajectory history, so the signal of Target C after RCMC can be written as S c e r t ( r , r?) « P  w (r] - rj ) e x p f j T r a T ? )  p (r-  3  az  r  f  ,7T  d  rVy C0S (rV) 2  2  \/ C0S (c9 2  |  2  R  sqR  )  (r -r ?  (  A L  -RTcenB  ? d  )  (6.11  2  ^RcenB  The transmitter slant range to the Target B at the mid aperture point is i?TcenB and the receiver slant range to the point target at the mid-aperture point is i ? sum is given by  R nBce  127  R c e  nB-  Their  The beam center passes both Target A and Target B at the same time. Using Target A as reference, the roundtrip range of Target B can be approximated as [see Figure 6.2] AR  = AR  L  + AR  T  R-TcenB = - ^ T c e n A +  ~ ~  -^RcenB =  ff  -(V sin(c9 ) + V sin(0 ))r7  =  R  AR  T  p ^tcenA  ^?RcenA +  R  sqR  (6.12)  d  T  R-TcenA  -ftTcenA  sqT  (K sin(c? ) + V sin(c? ))r7 T  sqT  R  sqR  d  (6.13)  d  (6.14)  AR  R  -^RcenA  R RcenA  R,cenA  (V sm(c? ) + Vksin(t? ))r7 T  sqT  sqR  Substituting (6.13) and (6.14) into (6.11) gives SCpert(T,77) «  -^cenA  p (r r  %)exp (jVar? ) 3  )w z(va  C  V co (c9 ) r? ^ 2  .7T  AR z?  2  T  S  sqT  JlTcenA 2  '  J^TcenA  F cos (f7 ) 2  R  ,  T  sqR  |  Aii;  R  x  (V-Vd) } 2  (6.15)  Expanding the terms further, we have  RcenA  (1 +  AR  R  RcenA  RcenA  )-i  =  i_  +( RcenA  (6.16)  V RcenA J A R  *  \ RcenA,  +.  (6.17)  Using only the first two terms in (6.16) and (6.17), and replacing (rj — rj_) by azimuth  128  time rji, the perturbed signal at Target C can be written as SCpert(T,7?l) «  R,c e n A  p {r r  C  .7T  exp i  exp (jira{rji +  ) w (r)i) az  VT cos (g ) 2  3  d  Aflr  2  sqT  7? 
) )  (1  ^TcenA  -^TcenA  Vgcos ^)  AiZ  2  RcenA  R  (6.18)  RcenA  Expanding (6.18), SCpert(T,7?l)  « p (r - ^Ef^L) ^ ( ^ j ) r  exp < jira TT /% cos (fV) 2  exp< — j  exp  .7T  l/ cos (c9 )\  2  2  RT c e n A  sqR  ^T-RcenA  Kr cos (f5 ) 2  2  R  2  sqT  ftT c e n A RcenA  /  y cos (t7 ) 2  +  2  R  2  sqR  R R c e n A RcenA  Vrsin(6» ) + V sin(6' ) T ? ^ 2 sqT  R  sqR  (6.19)  Having determined the perturbed signal for Target A and point target C, the next step is to determine the perturbation coefficient a, needed to equalize the F M rates of points lying in the same range gate. The F M signal of Target C signal can be visualized as F M signal of Target A signal displaced in azimuth time. Target A is centered at 77 = 0 and point target C is centered at 77 = rjd- Keeping this in mind and examining (6.10) and (6.19), the following can be concluded: • The cubic terms are identical. This cubic phase is caused by the perturbation process. 129  The quadratic terms can be equalized by setting perturbation coefficient a as a  = 3  y w(c? T  s q T  )  A-RxcenA^cenA  |  K w(g R  A.R  R c e n  s q R  )  ( V sin(c9 ) + V sin(0 )) (6.20) T  A - R cenA  sqT  R  sqR  As shown in Figure 6.3(b) and Figure 6.3(d), the curvature of phase of the three points becomes the same after adding the third-order phase from the perturbation function. The different F M signals in Figure 6.3(a) are equalized as shown in Figure 6.3(e), although the edge targets are transformed to non-baseband signals. There is an extra linear phase term in the perturbed phase function of Target C. The linear term causes a small shift, /shift  =  /shift,  in the spectrum of Target C.  —2—  (o-^ ) 1  This causes the signal to be non-baseband. This generally does not present a problem since the azimuth signal is usually oversampled by 20%. As long as the increase in spectrum bandwidth stays within this constraint, no aliasing of the spectrum will occur. 
The amount of spectrum shift is proportional to the square of η_d (the azimuth time offset from η = 0). Point targets furthest away from the reference experience the most spectrum shift, and all point targets are shifted in the same direction. This result can be seen in Figure 6.3(e) as the slight increase in bandwidth due to the non-baseband signal.

The azimuth matched filter is evaluated using the phase history of the reference point target and is therefore a baseband signal. It focuses all the point targets in the azimuth array to their respective stationary points. Ideally, Target C should be registered to η_1 = 0. As can be seen from Figure 6.3(d) and (e), the edge point targets (Targets C and C') will be misregistered to the right. This misregistration is usually small and can be ignored in most cases. It can also be removed during the image registration stage. The position of the stationary point can be easily found by setting the differential of the phase to zero and finding its roots,

∂φ_azC/∂η_1 = −(2π/λ) [ V_T² cos²(θ_sqT)/R_TcenA + V_R² cos²(θ_sqR)/R_RcenA ] η_1 + 3πα η_d² = 0     (6.22)

The amount of misregistration varies in a parabolic fashion with azimuth. In practice, it is sufficient to use a few points to fit a second-degree polynomial to this misregistration equation and interpolate the image for correct registration.

• The constant phase term has no effect on the focusing process. It can be ignored if a magnitude image is the final product. This phase term raises or lowers the point target phase, as shown in Figure 6.3(d).

To check the validity of the perturbation coefficient, the parameters are set for the trivial case of a monostatic geometry. The velocity is set as V = V_T = V_R, the squint angle is set as θ_sq = θ_sqT = θ_sqR, and the one-way range is R_s = R_cenA/2 = R_TcenA = R_RcenA; the perturbation coefficient becomes

α = 2 V³ sin(θ_sq) cos²(θ_sq) / (3 λ R_s²),     (6.23)

which is the same as the perturbation coefficient derived in [44].

6.2.5  Azimuth Compression

At this stage, the azimuth FM rates are equalized and all the targets in the same range gate can be focused using the same matched filter. In the original NLCS paper [44], the impact of the cubic phase term on the matched filter was not considered. This is fine for short-wavelength cases where α is small. However, for wider-aperture and longer-wavelength cases, the phase of the perturbation function must be considered; otherwise, it will give rise to higher-order phase errors.

For the monostatic and the parallel case, the perturbation function is a cubic function of azimuth time. Therefore, it affects the third-order phase term, and the azimuth matched filter should include the third-order phase in its formulation. The higher-order phase terms are less sensitive to range, so point targets lying in the same range gate can assume the same higher-order phase term.

The importance of adding this phase term to the matched filter can be illustrated with a simple C-band monostatic case with a range resolution of 1 m and a squint angle of 30°. Without including the phase term, the impulse response is poor due to the uncompensated third-order phase error [see Figure 6.4]. With the third-order phase term, the impulse response is improved significantly [see Figure 6.5].

[Figure 6.4: Impulse response using matched filter with second-order phase.]
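The monostatic consistency check above lends itself to a quick numerical test. The sketch below uses our own function names and illustrative parameter values, together with our reading of the perturbation coefficient (6.20); it confirms that the bistatic coefficient collapses to the monostatic form (6.23) when both platforms coincide:

```python
import math

def bistatic_alpha(v_t, v_r, th_t, th_r, r_tcen, r_rcen, wavelength):
    # Cubic perturbation coefficient, cf. (6.20); squint angles in radians.
    r_cen = r_tcen + r_rcen
    curvature = (v_t**2 * math.cos(th_t)**2 / (r_tcen * r_cen)
                 + v_r**2 * math.cos(th_r)**2 / (r_rcen * r_cen))
    lrcm_rate = v_t * math.sin(th_t) + v_r * math.sin(th_r)
    return lrcm_rate * curvature / (3.0 * wavelength)

def monostatic_alpha(v, th, r_s, wavelength):
    # Monostatic limit, cf. (6.23): 2 V^3 sin(th) cos^2(th) / (3 lambda R_s^2).
    return 2.0 * v**3 * math.sin(th) * math.cos(th)**2 / (3.0 * wavelength * r_s**2)

if __name__ == "__main__":
    v, th, r_s, lam = 200.0, math.radians(30.0), 20000.0, 0.0566
    print(bistatic_alpha(v, v, th, th, r_s, r_s, lam))
    print(monostatic_alpha(v, th, r_s, lam))
```

Setting V_T = V_R, θ_sqT = θ_sqR and R_TcenA = R_RcenA = R_s makes the two functions return the same value, which is the check performed in the text.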
[Figure 6.5: Impulse response using matched filter with third-order phase. Range compressed target: IRW = 2.654 cells, PSLR = −13.286 dB, ISLR = −10.242 dB; azimuth compressed target: IRW = 1.251 cells, PSLR = −13.189 dB, ISLR = −9.874 dB.]

The azimuth frequency-domain matched filter can be obtained by first taking an FT of (6.10) with the third-order term, which gives

S_azA(τ, f_η) ≈ ∫ p_r(τ − R_cenA/c) w_az(η) exp{ −j(2πf_0/c)(k_A2 η² + k_A3 η³ + k_A4 η⁴) + jπα η³ } exp(−j2π f_η η) dη     (6.24)

The higher-order terms are assumed to be the same for Target A and Target C. Continuing with the formulation, the azimuth frequency f_η and the azimuth time η are now related by

f_η = −(f_0/c)(2 k_A2 η + 3 k_A3 η² + 4 k_A4 η³) + (3α/2) η²     (6.25)

η(f_η) = A_1 f_η + A_2 f_η² + A_3 f_η³     (6.26)

where

a_1 = −(2f_0/c) k_A2,  a_2 = −(3f_0/c) k_A3 + 3α/2,  a_3 = −(4f_0/c) k_A4,
A_1 = 1/a_1,  A_2 = −a_2/a_1³,  A_3 = (2a_2² − a_1 a_3)/a_1⁵     (6.27)

The frequency-domain matched filter is the conjugate of S_azA(τ, f_η) and is given by

H_az(f_η) = exp{ −j φ(η(f_η)) }     (6.28)

φ(η(f_η)) = −(2πf_0/c) [ k_A2 η²(f_η) + k_A3 η³(f_η) + k_A4 η⁴(f_η) ] + πα η³(f_η) − 2π f_η η(f_η)     (6.29)

6.2.6  Secondary Range Compression

Most of the range Doppler coupling is eliminated by the removal of the linear components. For very wide aperture cases, e.g., L-band 1-m resolution, the residual range curvature is significant and causes range Doppler coupling. Thus, SRC becomes necessary. The SRC can be carried out in the same way as SRC in the bistatic RDA. The SRC is evaluated at a reference range and applied in the two-dimensional frequency domain. The SRC phase can be derived easily by setting the linear components in (5.8) to zero and replacing the derivatives with the derivatives from reference Target A,

φ_srcA(f_τ, f_η) ≈ 2π (c f_η²)/(4 k_A2 f_0) [ (f_τ/f_0)² − (f_τ/f_0)³ ]
    + 2π (c² k_A3 f_η³)/(8 k_A2³ f_0²) [ 3(f_τ/f_0)² − 4(f_τ/f_0)³ ]
    + 2π (c³ (9 k_A3² − 4 k_A2 k_A4) f_η⁴)/(64 k_A2⁵ f_0³) [ 6(f_τ/f_0)² − 10(f_τ/f_0)³ ]     (6.30)

The coupling is azimuth dependent and increases away from zero-azimuth frequency [see Figure 6.6(a)]. This explains why coupling only affects wider aperture cases. The SRC phase can be simplified by considering the dominant quadratic phase term and neglecting the third-order and higher-order terms of f_τ. The SRC phase can be approximated as

φ_srcA(f_τ, f_η) ≈ −π f_τ² / K_srcA(f_η)     (6.31)

[Figure 6.6: Illustration of the effects of a SRC filter on range IRW. (a) Signal in range Doppler domain before SRC; (b) signal in range Doppler domain after SRC; (c) range compressed target without SRC: IRW = 1.314 cells, PSLR = −19.764 dB, ISLR = −15.771 dB; (d) range compressed target with SRC: IRW = 1.065 cells, PSLR = −13.604 dB, ISLR = −10.183 dB.]

where

K_srcA(f_η) = −[ (c f_η²)/(2 k_A2 f_0³) ( 1 + (3 λ k_A3 f_η)/(2 k_A2²) ) ]⁻¹     (6.32)

The phase modulation of the range azimuth coupling and the range FM modulation can be combined to form a new FM rate,

1/K_mA = 1/K_r + 1/K_srcA     (6.33)

If SRC is not applied, the range FM mismatch between the matched filter and the data is K_mA(f_η) − K_r, and the maximum phase error due to this mismatch is given by

Δφ_srcA = π (B_r/2)² / |K_srcA(f_η)|     (6.34)

If this phase error is greater than π/2 (for less than 10% broadening in the range IRW at the bandwidth extremities), then SRC should be applied in the processing. Without applying the SRC filter, the coupling results in a poor range impulse response, as shown in Figure 6.6(c) and Figure 6.6(d). Adding the SRC process decreases the efficiency of the algorithm, because two extra pairs of FTs are needed.
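The series combination of (6.33) and the π/2 decision rule can be captured in a short sketch. The function names and the band-edge form of the phase-error bound are our own reading of (6.34), not verbatim from the thesis:

```python
import math

def combined_fm_rate(k_r, k_src):
    # (6.33): 1/K_m = 1/K_r + 1/K_src -- the chirp rate and SRC rate combine in inverse.
    return 1.0 / (1.0 / k_r + 1.0 / k_src)

def src_phase_error(k_src, range_bandwidth):
    # Residual quadratic phase at the band edge f_tau = B_r/2 if SRC is skipped.
    return math.pi * (range_bandwidth / 2.0) ** 2 / abs(k_src)

def src_needed(k_src, range_bandwidth):
    # Apply SRC when the mismatch phase exceeds pi/2 (the 10%-broadening rule).
    return src_phase_error(k_src, range_bandwidth) > math.pi / 2.0
```

For a narrow aperture, |K_srcA| is enormous and `src_needed` returns False; as the aperture widens, |K_srcA| drops and the rule eventually triggers.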
SRC is done immediately after removal of the linear phase. A range FT and an azimuth FT are performed to bring the signal to the two-dimensional frequency domain so that SRC can be applied. A range IFT is then performed to bring the signal to the range Doppler domain. RCMC is done in this domain, followed by FM chirp equalization in the time domain. RCMC is performed first because the range curvature for such wide apertures is usually quite severe, stretching over tens or even hundreds of range cells. Finally, azimuth compression is performed in the range Doppler domain to focus the image.

SRC is not normally implemented with NLCS, because the invariance region tends to be quite small for highly squinted, long-wavelength and wide-aperture cases. For instance, for a monostatic L-band system with 1-m resolution, a squint of 30° and a range of 20 km, the range invariance region is only 600 m. Several overlapping invariance blocks are required to cover a practical range swath of about 3 km or more. This will further reduce the overall efficiency of the algorithm.

6.3  Image Registration

Image registration transforms the focused image into ground coordinates. The SAR image is focused in the slant range and azimuth time domain. For monostatic SAR, point targets with the same closest range of approach have the same Doppler history. Hence, they are focused to the same range cell. The focused image is generally presented in the closest-range-of-approach versus azimuth-time domain. A simple scaling relationship exists between the image focused in the range/azimuth time plane and the ground plane. For example, a square patch on the ground plane appears as a square or rectangle in the focused image. The azimuth is scaled to give cross range in meters and the range is scaled to give ground range in meters. For the general bistatic case, no simple scaling relationship exists between the image focused in the range/azimuth time plane and the ground plane.
Moreover, there are two closest ranges of approach, so an element of ambiguity would exist if the image were defined in the closest-range-of-approach and azimuth-time plane. It is more convenient to leave the focused image in the slant range time and azimuth time domain.

It should be noted that either the flat-Earth assumption or an accurate Digital Elevation Model (DEM) is necessary to focus the point targets in a bistatic case, since point targets that focus to the same range gate may not have the same trajectory histories [30]. All bistatic algorithms, including the time-domain based algorithms, require the flat-Earth assumption or DEM information to focus the image properly. The image registration introduced here assumes a flat-Earth topography.

6.3.1  Registration Stages

Before registration can begin, the LRCM introduced must be reversed, and the slight misregistration should be corrected if desired. The registration process essentially consists of shearing, rescaling and resampling stages. The two pre-processing steps may be absorbed into the registration process to improve efficiency.

The first step of the registration process is to determine the position of known grid points in the focused image, I_1(τ, η). The grid lines that these points lie on are parallel and perpendicular to either the transmitter or the receiver flight path in the flat-Earth plane, I_2(x, y). Once the mapping of the grid points to the corresponding points in the image is known, the next stage is similar to the registration process discussed in Section 5.3.3. In slightly azimuth-variant cases, a square patch may be mapped to a non-regular quadrilateral with fairly straight edges. In such cases, it is sufficient to do linear shearing and linear scaling in both the range and azimuth directions. In more severe cases, a square patch maps to a non-regular quadrilateral with curved edges. In such cases, two-dimensional resampling may be needed.
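In the linear-shear-and-scale regime described above, the grid-point mapping can be modelled as a six-parameter affine transform recovered from three known grid points. A minimal sketch (the function names and the affine model are our own; a real registration would then resample the image with this mapping):

```python
def solve3(a, b):
    # Solve a 3x3 linear system by Gaussian elimination with partial pivoting.
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

def affine_from_points(ground, image):
    # Fit (tau, eta) = A @ (x, y, 1) from three non-collinear correspondences.
    a = [[x, y, 1.0] for x, y in ground]
    coef_tau = solve3(a, [t for t, _ in image])
    coef_eta = solve3(a, [e for _, e in image])
    return coef_tau, coef_eta

def apply_affine(coefs, x, y):
    (a1, a2, a3), (b1, b2, b3) = coefs
    return a1 * x + a2 * y + a3, b1 * x + b2 * y + b3
```

For the curved-edge case mentioned in the text, the same idea extends to a higher-order (two-dimensional polynomial) warp fitted to more grid points.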
6.3.2  Registration of Target between Ground Plane and Image Plane

Next, we determine how an arbitrary point target located at (x_n, y_n) in the ground plane is focused to a corresponding point target located at (τ_n, η_n) in the image plane. NLCS compensates for linear phase prior to azimuth compression, and the phase of the point target after the linear phase correction is given by

φ_p(η) = −(2π/λ) R(η; x_n, y_n) + (2π/λ) [ V_T sin(θ_sqT) + V_R sin(θ_sqR) ] η     (6.35)

Subsequently, azimuth compression focuses the point target to its stationary phase position, i.e., the point is focused to the azimuth time η_n where dφ_p/dη = 0. We can determine η_n readily by solving dφ_p/dη = 0 using traditional numerical methods [93]. Once the azimuth time η_n is known, τ_n is simply R(η_n)/c.

In practice, a user would require an image product in certain spatial ground coordinates. Therefore, the processed image must be resampled to a ground coordinate grid dictated by the user. Thus, the registration steps here do not significantly increase the computational requirement, as resampling is an integral step in the imaging process.

6.4  Limitations of the Algorithm

NLCS makes use of a few assumptions and approximations that affect the size of the patch that can be processed in one processing block. Processing is done within an invariance region so that these assumptions and approximations are valid. In this section, the impact of these assumptions and approximations is studied.

6.4.1  The Effects of Applying Perturbation before Residual RCMC

The assumption that the perturbation function varies slowly with range, such that FM rate equalization can be performed before residual RCMC, is an important time-saving step. If the perturbation function changes rapidly with range, residual RCMC must be performed in the range Doppler domain [see Figure 6.1] before doing the perturbation in the azimuth-time domain.
This requires an extra pair of azimuth FT and IFT, thus making the algorithm much less efficient. In this section, the impact of a range-varying a on the point target impulse response is studied.

Consider the illustration given in Figure 6.7. After LRCMC and linear phase removal, the signal for the reference point target can be written as (up to the third-order phase term)

s_A(τ, η) ≈ p_r(τ − (R_cen + k_2 η² + k_3 η³)/c) W_az(η) exp{ −j(2πf_0/c) [ R_cen + k_2 η² + k_3 η³ ] }     (6.36)

As shown in the illustration, a significant amount of QRCM is present; in this example, the residual RCM spreads over 4 range cells. Since the perturbation is applied before QRCMC, the signal is weighted by 4 discrete perturbation coefficients and is given by

s_A(τ, η) ≈ p_r(τ − (R_cen + k_2 η² + k_3 η³)/c) W_az(η) exp{ −j(2πf_0/c) [ R_cen + k_2 η² + k_3 η³ ] } exp{ jπ a(η) η³ }

where

a(η) = a_ref           for |η| ≤ T_1
       a_ref + Δa      for T_1 < |η| ≤ T_2
       a_ref + 2Δa     for T_2 < |η| ≤ T_3
       a_ref + 3Δa     for |η| > T_3     (6.37)

and where a_ref is the reference perturbation coefficient for Target A and Δa is approximately the change in the magnitude of the perturbation coefficient between range cells near the reference range. The phase error that results from this application of a discrete, range-varying perturbation function is given by

Δφ_aerr ≈ π a_d(η) T_d³(η)

where

a_d(η) = 0        for |η| ≤ T_1
         Δa       for T_1 < |η| ≤ T_2
         2Δa      for T_2 < |η| ≤ T_3
         3Δa      for |η| > T_3     (6.38)

and T_d is the size of the time interval over which a_d(η) is valid.

The phase degradation impact of this signal is difficult to evaluate, as the effect is non-linear in nature and the number of discontinuous sections depends on the oversampling rate in range and the amount of QRCM.
[Figure 6.7: Illustration of effects of range varying perturbation function. The trajectory of point target A is shown before and after LRCMC; after LRCMC the residual RCM crosses the range cells R_cen, R_cen + Δr, R_cen + 2Δr and R_cen + 3Δr, which carry the perturbation coefficients a, a + Δa, a + 2Δa and a + 3Δa.]

One method of approximating the phase error is to consider the fact that, for any azimuth interval, as long as |a_1| > |a_2|, the peak-to-peak third-order phase error 2π|a_1|(T_a/2)³ will always be larger than 2π|a_2|(T_a/2)³. Ignoring the effects of the discontinuities for the moment, the upper bound of the phase error can be estimated by using the largest phase error in a_d(η) over the entire integration interval. Thus, the peak-to-peak phase error caused by the range-varying perturbation coefficient is a third-order phase function and has an upper bound of

Δφ_A = 2π (M_a Δa) (T_a/2)³     (6.39)

where M_a is the number of discrete perturbation steps spanned by the residual RCM. The sum of the uncompensated third-order phase error and this upper-bound phase limit should be kept less than π/2.

The phase discontinuities cause a high-frequency noise to be added to the signal; in other words, the ISLR deteriorates. The noise power of this high-frequency noise is proportional to the number of discontinuities and the discrete level of the discontinuities. If the discrete level of the discontinuities is small enough, the approximation in (6.39) should hold well.

The level of the discontinuities is proportional to the ratio Δa/a. It can be shown that for a small change in bistatic range, the corresponding change in the perturbation coefficient Δa is given by

Δa/a ≈ Δr/R_cenA     (6.40)

where Δr represents the small change in range and is equal to the size of the range cell. For moderate wavelengths, the ratio Δa/a is typically much less than 0.001. For such a small change in a, the impact on the ISLR is negligible. Thus, the upper bound in (6.39) should hold.
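The budget implied by (6.39) is easy to evaluate. The sketch below (our own helper name) plugs in the Δa and integration time quoted later in the simulation example of Section 6.5, with a residual QRCM of about two range cells:

```python
import math

def cubic_phase_upper_bound(m_a, delta_a, t_a):
    # (6.39): peak-to-peak third-order phase from stepping the perturbation
    # coefficient by delta_a across m_a range cells of residual RCM.
    return 2.0 * math.pi * (m_a * delta_a) * (t_a / 2.0) ** 3

if __name__ == "__main__":
    # Section 6.5 quotes |delta_a| = 1.79e-5 s^-3 and T_a = 1.71 s.
    bound = cubic_phase_upper_bound(2, 1.79e-5, 1.71)
    print(bound, bound < math.pi / 2)
```

With these numbers the bound is on the order of 1e-4 rad, far below the π/2 budget, which is consistent with the claim in Section 6.5 that this error is negligible.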
6.4.2  Invariance Region Analysis - LRCM

The NLCS requires the variation of squint angles in one processing block to be negligible, so that the residual linear range cell migration is kept within half a range resolution cell. This limit keeps the broadening of the range and azimuth IRW within acceptable limits. This requirement affects the allowable range swath that can be processed by the algorithm in one processing block.

If a reference point (Target A) has squint angles θ_sqT, θ_sqR and an arbitrary point (Target P1) at a range ΔR_x from the reference point has squint angles θ_sqT1, θ_sqR1, then the tolerable limit is given by

| V_T [ sin(θ_sqT) − sin(θ_sqT1) ] + V_R [ sin(θ_sqR) − sin(θ_sqR1) ] | T_a ≤ ρ_r / 2     (6.41)

The maximum value of ΔR_x is one-half the swath width that can be processed by the algorithm. If this swath is smaller than the scene, the data can be divided into contiguous segments and processed in range invariance regions.

For azimuth-variant cases, the azimuth extent is limited as well. The azimuth-invariance region needs to be determined numerically. The beam center crosses the reference point Target A at η = 0 with a range of R_cenA. The azimuth extent is the maximum azimuth time offset at which the beam center crosses another point P2 with the same range R_cenA and squint angles θ_sqT2 and θ_sqR2 such that

| V_T [ sin(θ_sqT) − sin(θ_sqT2) ] + V_R [ sin(θ_sqR) − sin(θ_sqR2) ] | T_a ≤ ρ_r / 2     (6.42)

6.4.3  Invariance Region Analysis - Residual RCM

It is assumed that range curvature is nearly the same for point targets in the processing block, so that they overlap in the range Doppler domain. The difference between the overlapping range curvatures should be within one range cell to avoid significant degradation in the range and azimuth IRW. The mismatch in the overlap is more pronounced when there is high squint.
High squint results in larger LRCM, and this causes point targets with larger differences in QRCM to be placed in the same range gate after the LRCMC stage [see Section 6.2]. Generally, the limitations given in Section 6.4.2 are more stringent and would ensure that this limit is satisfied as well. Once the width of the invariance region ΔR_x in Section 6.4.2 is determined, the amount of difference in range curvature overlap can be checked using

| (1/2) [ V_T² cos²(θ_sqT1)/(R_TcenA + ΔR_TxP1) + V_R² cos²(θ_sqR1)/(R_RcenA + ΔR_RxP1) ]
  − (1/2) [ V_T² cos²(θ_sqT)/R_TcenA + V_R² cos²(θ_sqR)/R_RcenA ] | (T_a/2)² ≤ ρ_r     (6.43)

where

ΔR_x = ΔR_TxP1 + ΔR_RxP1     (6.44)

ΔR_TxP1 / ΔR_RxP1 ≈ R_TcenA / R_RcenA     (6.45)

If this criterion is not met, the range invariance region ΔR_x needs to be reduced.

6.4.4  Invariance Region Analysis - QPE

The approximation in (6.16) and (6.17) causes QPE and limits the range invariance region. This limit is generally lower than that of Section 6.4.2. The QPE has to satisfy the limit imposed by

(π/λ) [ V_T² cos²(θ_sqT)/R_TcenA + V_R² cos²(θ_sqR)/R_RcenA ] (ΔR_x/R_cenA) (T_a/2)² ≤ π/4     (6.46)

For azimuth-variant cases, the azimuth extent is limited as well. The azimuth-invariance region needs to be determined numerically, as in Section 6.4.2. The QPE between reference Target A and P2 should satisfy

(π/λ) | [ V_T² cos²(θ_sqT)/R_TcenA + V_R² cos²(θ_sqR)/R_RcenA ]
      − [ V_T² cos²(θ_sqT2)/R_TcenP2 + V_R² cos²(θ_sqR2)/R_RcenP2 ] | (T_a/2)² ≤ π/4     (6.47)

where point target P2 has a nominal slant range of R_TcenP2 + R_RcenP2 = R_cenA.

6.4.5  Invariance Region Analysis - Higher-Order Phase Errors

It is assumed that the third-order and fourth-order phases are the same for the entire processing block. Both of these phase errors vary more quickly in range than in azimuth for the slightly non-parallel cases. Therefore, these phase errors are more likely to limit the range extent than the azimuth extent.
Nevertheless, the tolerable invariance region tends to be limited by the criteria in Section 6.4.2. If the peak-to-peak third-order phase of the reference point is given by

φ_3rd(R_TcenA, R_RcenA) = (4π/λ) [ V_T³ sin(θ_sqT) cos²(θ_sqT)/(2 R_TcenA²) + V_R³ sin(θ_sqR) cos²(θ_sqR)/(2 R_RcenA²) ] (T_a/2)³     (6.48)

then the third-order phase error is given by

Δφ_3rd = | φ_3rd(R_TcenA, R_RcenA) − φ_3rd(R_TcenA + ΔR_TxP1, R_RcenA + ΔR_RxP1) | + Δφ_a ≤ π/4     (6.49)

Δφ_a is the cubic phase error resulting from applying the same perturbation function across different range cells; refer to Section 6.4.1. Δφ_a is set to zero if residual RCMC is done before perturbation.

If the fourth-order phase of the reference point is given by

φ_4th(R_TcenA, R_RcenA) = (2π/λ) { V_T⁴ cos²(θ_sqT) [ 4 sin²(θ_sqT) − cos²(θ_sqT) ]/(8 R_TcenA³)
    + V_R⁴ cos²(θ_sqR) [ 4 sin²(θ_sqR) − cos²(θ_sqR) ]/(8 R_RcenA³) } (T_a/2)⁴     (6.50)

then the limit on the fourth-order phase is given by

Δφ_4th = | φ_4th(R_TcenA, R_RcenA) − φ_4th(R_TcenA + ΔR_TxP1, R_RcenA + ΔR_RxP1) | ≤ π/4     (6.51)

If any of the criteria is not met, the range invariance region needs to be reduced accordingly.

6.5  Simulation Example

To verify the ability of the NLCS to focus data collected on non-parallel tracks with dissimilar velocities, a simulation using the airborne SAR parameters given in Table 6.1 is performed. An appreciable amount of antenna squint is assumed for both the transmitter and receiver platforms. A residual QRCM of 1.9 range resolution cells is present; therefore, residual RCMC is necessary.

6.5.1  Simulation Parameters

An illustration of the geometry for this simulation is given in Figure 6.8.
The oversampling ratio is 2.0 in range and 2.66 in azimuth, so the theoretical range resolution is 1.35 m (1.76 cells) and the azimuth resolution is 2.5 m (2.36 cells), based on the definitions given in Section 2.4.3 and Section 2.4.4. A rectangular window is used in both the range and azimuth processing.

Table 6.1: Simulation parameters for a slightly non-parallel flight geometry.

  Simulation parameter                       Transmitter      Receiver
  Velocity in x-direction                    0 m/s            20 m/s
  Velocity in y-direction                    200 m/s          220 m/s
  Velocity in z-direction                    0 m/s            0 m/s
  Wavelength                                 0.0566 m
  Range bandwidth                            200 MHz
  Doppler bandwidth                          105 Hz
  Integration time                           1.71 s
  Angle between tracks                       5.2°
  Altitude                                   3000 m           2000 m
  Range to point target at η = 0             16532.8 m        13498.0 m
  Squint angle at η = 0                      30.0°            47.3°
  Distance between airplanes at η = 0        4391.8 m
  Minimum distance between airplanes         4242.6 m
  Maximum distance between airplanes         4553.8 m
  Nominal value of a                         0.179 s⁻³
  Nominal value of Δa                        −1.79 × 10⁻⁵ s⁻³

6.5.2  Analysis of Invariance Region and Limitations

SRC is not necessary, as there is negligible range Doppler coupling. The range and azimuth invariance regions are mainly restricted by the residual LRCM. The range invariance region is limited to 6.2 km in the direction of the range resolution vector [see Section 2.4.3], which translates to a restriction in ground range of 2.5 km in the x-direction and 1.84 km in the y-direction. The azimuth-invariance region of 2.45 km is restricted by the QPE due to the non-parallel flight path.

The percentage of QRCM mismatch between the reference point target and the edge target that lies at the same range is less than 1% of the range resolution cell, so point targets in the same range gates have negligible mismatch in range curvature overlap. The magnitude of the third-order phase error due to the range-varying perturbation function is less than 0.0001π.
This is negligible compared with the third-order phase error for the range invariance region, which has a magnitude of 0.30π. Therefore, QRCMC can be done after FM equalization without causing any significant phase degradation. The uncompensated fourth-order phase error is less than 0.0022π; therefore, the same fourth-order phase term can be used in the frequency-domain matched filter for the invariance region. The increase in frequency is 17.2 Hz, which is about 6% of the spectrum, so no aliasing of the azimuth spectrum will occur. The misregistration is about 7 m in azimuth and needs to be corrected for proper registration.

[Figure 6.8: Flight geometry for a slightly non-parallel flight path.]

6.5.3  Simulation Results

The focused image is shown in Figure 6.9, and the image registered to the ground plane is shown in Figure 6.10. All the range point targets are very well focused: the range PSLR deviates from the theoretical value of −13.3 dB by less than 0.1 dB, and the range ISLR is within 0.05 dB of the expected value of −10.0 dB. The azimuth impulse responses of the point targets are tabulated in Table 6.2. Targets 5 and 21 are the furthest from the scene center and therefore suffer the largest phase degradation. The impulse responses of these two point targets are shown in Figure 6.11 and Figure 6.12.

[Figure 6.9: Focused map for the simulated non-parallel case.]

[Figure 6.10: Point targets focused using NLCS and registered to ground plane.]

[Figure 6.11: Impulse response of corner Target 5.]
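The IRW, PSLR and ISLR figures quoted above can be measured from a sampled impulse response with a routine along the following lines. This is a sketch with our own conventions (3-dB width by linear interpolation, mainlobe bounded by the first nulls, interior peak assumed), not the thesis' measurement code:

```python
import math

def impulse_metrics(power):
    """Return (irw_samples, pslr_db, islr_db) for a sampled |h|^2 profile (a list)."""
    pk = max(range(len(power)), key=power.__getitem__)
    half = power[pk] / 2.0
    # -3 dB crossings by linear interpolation on either side of the peak
    i = pk
    while power[i - 1] >= half:
        i -= 1
    left = i - 1 + (half - power[i - 1]) / (power[i] - power[i - 1])
    j = pk
    while power[j + 1] >= half:
        j += 1
    right = j + (power[j] - half) / (power[j] - power[j + 1])
    irw = right - left
    # mainlobe edges: first local minima (nulls) away from the peak
    lo = pk
    while lo > 0 and power[lo - 1] < power[lo]:
        lo -= 1
    hi = pk
    while hi < len(power) - 1 and power[hi + 1] < power[hi]:
        hi += 1
    side = power[:lo] + power[hi + 1:]
    pslr = 10.0 * math.log10(max(side) / power[pk])
    islr = 10.0 * math.log10(sum(side) / sum(power[lo:hi + 1]))
    return irw, pslr, islr
```

Run on a sampled sinc-squared response, it reproduces the textbook values the thesis compares against (PSLR near −13.3 dB, ISLR near −10 dB).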
Table 6.2: Azimuth impulse response for simulated non-parallel case.

  TGT     1      2      3      4      5      6      7      8      9     10
  IRW    2.37   2.36   2.36   2.38   2.41   2.37   2.36   2.35   2.36   2.38
  PSLR  -12.6  -13.0  -12.8  -11.9  -11.5  -12.6  -13.0  -13.0  -12.7  -12.3
  ISLR  -9.50  -9.97  -9.94  -9.18  -8.69  -9.47  -9.84  -9.98  -9.79  -8.96

  TGT    11     12     13     14     15     16     17     18     19     20
  IRW    2.36   2.36   2.36   2.36   2.37   2.36   2.36   2.35   2.36   2.36
  PSLR  -12.7  -13.0  -13.2  -13.0  -12.5  -13.0  -13.0  -13.0  -13.1  -12.7
  ISLR  -9.76  -9.81  -9.90  -10.0  -9.94  -9.56  -10.1  -9.96  -9.95  -9.74

  TGT    21     22     23     24     25
  IRW    2.38   2.37   2.36   2.36   2.36
  PSLR  -11.9  -12.4  -12.7  -12.8  -12.7
  ISLR  -9.23  -9.64  -9.80  -9.61  -10.1

[Figure 6.12: Impulse response of corner Target 21.]

Chapter 7

NLCS Algorithm for the Stationary Receiver Case

7.1  Introduction

In this bistatic configuration, the transmitter is moving in a straight line and imaging at broadside, while the receiver remains in a fixed position. This geometry is useful for bistatic systems where the transmitter is located on a satellite or airborne platform and the receiver is located on top of a high building or hill. The NLCS algorithm has been demonstrated to focus this bistatic configuration in [44]. However, it is only able to handle short-wavelength cases where the range curvature is negligible. The main ideas introduced in the previous chapter, such as QRCMC, SRC and frequency-domain matched filtering, are applicable in this case as well. These techniques allow the algorithm to handle finer-resolution and longer-wavelength cases with accuracy and efficiency. The limitations of the algorithm and the registration of the focused image to the ground plane are discussed in this chapter.
An S-band simulated example is used to demonstrate the focusing capability of the improved algorithm.

7.2  Improved NLCS for the Stationary Receiver Case

The bistatic configuration where the transmitter is imaging at broadside and the receiver is stationary is referred to as the "stationary receiver case". It should be noted that this is but a term of convenience, as the focusing algorithm is the same whether a stationary receiver or a stationary transmitter is used.

For short-wavelength cases, the steps taken by the NLCS algorithm are identical to Figure 6.1, minus the LRCMC and linear phase correction steps, since the linear terms are not present for this bistatic setup. For wide-aperture cases, RCMC must be included. For even wider-aperture cases, range Doppler coupling occurs as well, and RCMC and SRC are both required in the processing steps. Figure 7.1 shows the sequence of operations of this improved NLCS algorithm.

The steps for this implementation of the improved NLCS are summarized as follows:

1. The data are first transformed to the two-dimensional frequency domain to perform range compression and SRC.

2. A range IFT is used to transform the data back into the range Doppler domain. Range curvature correction is then performed in this domain. RCMC straightens out the trajectories so that they run parallel to the azimuth frequency axis.

3. An azimuth IFT is performed to transform the data back into the time domain. A perturbation function with a quartic dependence is then applied along azimuth time [44] to equalize the FM rates along each range gate.

4. Finally, the data are transformed back into the range Doppler domain and azimuth compression is carried out using a frequency-domain matched filter.

[Figure 7.1: Functional block diagram of NLCS algorithm with SRC for the stationary receiver case: raw radar data → range FT + azimuth FT → range compression and SRC → range IFT → RCMC → azimuth IFT → NLCS perturbation → azimuth FT → azimuth compression → azimuth IFT → compressed data.]
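The four steps above can be sketched as a processing skeleton. The callables stand in for the SRC, RCMC, perturbation and matched-filter multiplies whose exact phases are given in the text; identity callables make the chain a pure round trip, which is a convenient unit test of the transform bookkeeping (function and argument names are our own):

```python
import numpy as np

def nlcs_stationary(data, src_and_rc, rcmc, perturb, az_match):
    """Skeleton of Figure 7.1; data is (azimuth, range), each stage a callable."""
    s = np.fft.fft2(data)            # 1. range FT + azimuth FT
    s = src_and_rc(s)                #    range compression and SRC (2-D freq domain)
    s = np.fft.ifft(s, axis=1)       # 2. range IFT -> range Doppler domain
    s = rcmc(s)                      #    range curvature correction
    s = np.fft.ifft(s, axis=0)       # 3. azimuth IFT -> time domain
    s = perturb(s)                   #    quartic perturbation along azimuth
    s = np.fft.fft(s, axis=0)        # 4. azimuth FT -> range Doppler domain
    s = az_match(s)                  #    azimuth matched filter
    return np.fft.ifft(s, axis=0)    #    azimuth IFT -> focused data
```

With all stages set to the identity, the output equals the input, confirming that each FT is paired with a matching IFT along the correct axis.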
An azimuth IFT converts the focused data back to the time domain.

There are many operational similarities between the stationary receiver case and the parallel case discussed in Chapter 6. Therefore, the process analysis of the stationary receiver case will be brief where similarities exist and more detailed where differences arise.

7.2.1  SRC and RCMC

These two steps are similar to the extended NLCS algorithm presented in Chapter 6. The SRC term is evaluated at a reference range and applied in the two-dimensional frequency domain. The SRC phase is given by (6.30).

The use of LRCMC is the main limiting factor on the range invariance region for the NLCS algorithm in the parallel case [see Section 6.4.2]. The NLCS algorithm for the stationary receiver case generally has less restriction on the range invariance region because of the absence of this step. Therefore, the range invariance region tends to be larger. For instance, for an L-band system with 1-m resolution and nominal ranges of about 5 km for the receiver and 20 km for the transmitter, the range invariance region is about 2.4 km. This is a larger range invariance region compared to the one given in Section 6.2.6, so fewer overlapping regions are necessary. Furthermore, the SRC requires two fewer FT pairs. Therefore, performing SRC is still a feasible option for this case.

The RCMC step is performed immediately after the SRC step. This is advantageous because RCMC removes the need to apply different perturbation coefficients across the azimuth aperture [see Section 6.4.1]. Because range curvature is often quite large when SRC is required, it is better to perform range curvature correction before performing FM rate equalization.

SRC is needed in high-resolution and long-wavelength cases.
However, for moderate wavelengths like S-band and C-band, only RCMC is needed; the range curvature is usually only a few cells. To improve efficiency, the algorithm performs FM rate equalization in the time domain first, followed by RCMC in the range Doppler domain. The discussion in Section 7.4.1 shows that reversing these two steps has little impact on the focusing accuracy of the algorithm.

7.2.2  FM Rate Equalization

The data for this bistatic configuration are inherently azimuth-variant. The reason for this azimuth variance is depicted in Figure 7.2(a). Targets D, E and F lie on a line parallel to the flight path. They have the same FM rates because their Doppler histories are the same; however, they lie in different range gates. Targets E' and F' lie in the same range gate as Target D. Targets E' and F' have the same FM rate as each other, but a different FM rate from Target D. The signal in each range gate is azimuth-variant, since point targets with different FM rates lie in the same range gate. Figure 7.2(b) shows where each point target lies in the raw signal, and Figure 7.2(c) shows how the signals overlap in the range Doppler domain.

The perturbation function to equalize the FM rates in each range gate for the stationary receiver case was derived earlier in [44]. The effects of applying the perturbation function on the signal are illustrated in Figure 7.3. Figure 7.3(a) shows the signal of three point targets [Targets D, E' and F'] with different FM rates lying in the same range gate. Figure 7.3(b) shows the corresponding quadratic phase of each FM chirp. The phase of a quartic perturbation function is shown in Figure 7.3(c). This phase function is added to the phase of the signal to give the phase in Figure 7.3(d). After perturbation, the three point targets have the same quadratic phase. Finally, Figure 7.3(e) shows the perturbed signal of the point targets.
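The equalizing action of the quartic perturbation can be verified numerically: a quartic phase π a_st η⁴ adds a local FM rate of 6 a_st η_d² at the centre of a target offset by η_d, so choosing a_st = (k_D − k_E)/(6 η_d²) brings the two FM rates together. The sketch below uses illustrative numbers (not thesis parameters) and a second-difference estimate of the FM rate:

```python
import math

def fm_rate(phase, eta, h=1e-4):
    # Instantaneous FM rate (Hz/s) from a second difference of the phase (radians).
    return (phase(eta + h) - 2.0 * phase(eta) + phase(eta - h)) / (2.0 * math.pi * h * h)

k_d, k_e, eta_d = -120.0, -95.0, 0.8          # FM rates (Hz/s) of D and E', offset (s)
a_st = (k_d - k_e) / (6.0 * eta_d ** 2)       # quartic coefficient that equalizes them

phi_d = lambda eta: math.pi * k_d * eta ** 2 + math.pi * a_st * eta ** 4
phi_e = lambda eta: math.pi * k_e * (eta - eta_d) ** 2 + math.pi * a_st * eta ** 4
```

Evaluating `fm_rate(phi_d, 0.0)` and `fm_rate(phi_e, eta_d)` returns the same value, so the two chirps can now be compressed with one matched filter.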
7.2.3  Perturbation Coefficient for the Stationary Receiver Case

For wider apertures, it is important to include the analysis of the third-order and fourth-order phase terms. Such analysis was not presented in the earlier paper [44], as it deals only with shorter-wavelength cases. In this section, higher-order terms are included in the analysis.

The perturbation function for the stationary receiver case is derived using the same method given in the earlier chapter. As before, the phase equations are evaluated up to the quadratic term, and this restriction can be lifted by the inclusion of higher-order terms at a later stage. Consider the azimuth signal for reference Target D after the RCMC and FM equalization steps. The signal can be written as

s_D,pert(τ, η) ≈ p_r( τ − (R_T0 + R_R0)/c ) w_az(η) exp( −jπ V² η² / (λ R_T0) ) exp( jπ a_st η⁴ )    (7.1)

The transmitter closest range of approach to Target D at the mid-aperture point is defined as R_T0, and the receiver distance to Target D is R_R0. The perturbation coefficient is given by a_st. The instantaneous range of reference Target D is given by

R_D(η) = R_R0 + R_T0 + k_D2 η² + k_D4 η⁴ + ...    (7.2)

Figure 7.3: Illustration of the effects of perturbation for the stationary receiver case.

Target E' and Target F' can be focused with the same azimuth matched filter, and the signal for Target E' can be written as

s_E',pert(τ, η) ≈ p_r( τ − (R_T0 + R_R0)/c ) w_az(η − η_d) exp( −jπ V² (η − η_d)² / (λ R_T0E') ) exp( jπ a_st η⁴ )    (7.3)

where the transmitter closest range of approach to Target E' is R_T0E'.

Once again, the azimuth time (η − η_d) is replaced by the azimuth time η₁. The perturbed signal at Target E' can be written as

s_E',pert(τ, η₁) ≈ p_r( τ − (R_T0 + R_R0)/c ) w_az(η₁) exp( −jπ V² η₁² / (λ R_T0E') ) exp( jπ a_st (η₁ + η_d)⁴ )    (7.4)

Expanding (7.4),

s_E',pert(τ, η₁) ≈ p_r( τ − (R_T0 + R_R0)/c ) w_az(η₁) exp( −jπ V² η₁² / (λ R_T0E') ) exp{ jπ a_st ( η₁⁴ + 4 η_d η₁³ + 6 η_d² η₁² + 4 η_d³ η₁ + η_d⁴ ) }    (7.5)

Having determined the perturbed signals for Target D and Target E', the next step is to determine the perturbation coefficient a_st needed to equalize the FM rates of points lying in the same range gate. The FM signal of Target E' can be visualized as the FM signal of the reference Target D displaced in azimuth time: Target D is centered at η = 0 and Target E' is centered at η = η_d. Keeping this in mind and examining (7.1) and (7.5), the following can be observed of the phase terms:

• The quartic terms are identical.

• There is an additional third-order phase term in the result. Both of these higher-order terms were not recorded in [44]. This third-order phase term changes with azimuth time and is left uncompensated in this implementation. The magnitude of this uncompensated phase should be kept below π/4 to avoid poor PSLR.

• The quadratic terms can be equalized by setting the perturbation coefficient a_st as

a_st = ( V² / (6 λ η_d²) ) ( 1/R_T0E' − 1/R_T0 )    (7.6)

The value of a_st can be determined using a numerical method outlined in [44].

• There is an additional linear phase term in the perturbed phase function of Target E'. The linear term causes a small shift in the spectrum of Target E'. This generally does not present a problem, due to the oversampling in the azimuth. As long as the shift stays within the oversampling margin, no aliasing of the spectrum occurs. The amount of spectrum shift is given by

f_shift,s = 2 a_st η_d³    (7.7)

The point target with the furthest range from the reference will experience the most spectrum shift. The point targets are shifted away from zero azimuth frequency.

• This linear term also causes a slight misregistration of the point targets at the edge.
Since focused point targets are registered to the stationary points, it can be deduced from Figure 7.3(d) and (e) that the edge targets are misregistered slightly away from the zero-azimuth time. This misregistration is small enough to be ignored in most cases. The position of the stationary point can easily be found by differentiating the phase of (7.5) and setting it to zero. The point targets are registered to the roots of the equation

4π a_st η₁³ + 12π a_st η_d η₁² − ( 2π V² / (λ R_T0) ) η₁ + 4π a_st η_d³ = 0    (7.8)

The misregistration can be removed during image registration using the same technique given in the previous chapter.

• The constant phase term has no effect on the focusing process. It can be ignored if a magnitude image is the final product.

7.2.4  Azimuth Compression

The frequency-domain matched filter can be derived in the same manner as the matched filter for the parallel case. The extra third-order phase term is left uncompensated, since it is dependent on the position of the FM signal in the azimuth array. This third-order term limits the invariance region in azimuth for wider-aperture cases. Ignoring the effects of the extra third-order phase term and assuming the fourth-order phase term is constant throughout the processed block, the range Doppler signal for the reference point target can be written as

S_az,D(τ, f_η) = ∫ p_r( τ − (R_T0 + R_R0)/c ) w_az(η) exp( −j2π f_η η ) exp{ −j (2π/λ) ( k_D2 η² + k_D4 η⁴ ) + jπ a_st η⁴ } dη    (7.9)

where a_st is the perturbation coefficient for the stationary receiver case. Using the POSP [94] and the series reversion formula [refer to Appendix C.1], the azimuth frequency f_η and the azimuth time η are related by

f_η = −(1/λ) ( 2 k_D2 η + 4 k_D4 η³ ) + 2 a_st η³    (7.10)

η(f_η) = A₁ f_η + A₂ f_η² + A₃ f_η³ + ...    (7.11)
where

a₁ = −(2/λ) k_D2 ,  a₂ = 0 ,  a₃ = 2 a_st − (4/λ) k_D4    (7.12)

A₁ = 1/a₁ ,  A₂ = 0 ,  A₃ = −a₃ / a₁⁴    (7.13)

The frequency-domain matched filter is the conjugate of S_az,D and is given by

H_az(f_η) = exp( −j θ_az(f_η) )

θ_az(f_η) = −(2π/λ) ( k_D2 η²(f_η) + k_D4 η⁴(f_η) ) + π a_st η⁴(f_η) − 2π f_η η(f_η)    (7.14)

7.3  Registration to Ground Plane

For a flat-Earth assumption, registration for this case is quite straightforward. Point targets that lie parallel to the flight path are focused into a hyperbola. Assuming a beam center crossing time of η_d for an arbitrary point target, a transmitter closest range of approach of R_T0, and a receiver closest range of approach of R_R0, the point target will be focused to the azimuth time η_d and to the range

R_T0 + √( R_R0² + V² η_d² )    (7.15)

Interpolation is required to remove this hyperbolic dependence and register the image properly.

7.4  Limitations of the Algorithm

The NLCS algorithm for the stationary receiver case generally has less restriction on the range invariance region than the squinted parallel case. The restriction in range is caused by the use of SRC. Without SRC, the NLCS algorithm for the stationary case can handle a large range swath with ease, since all its operations are either in the range Doppler domain or the time domain. Operating in these domains allows the algorithm to handle range-varying parameters. Interestingly, the algorithm is limited in azimuth, mainly because of the additional third-order phase and the requirement for the range curvatures to overlap. These limitations of the algorithm are discussed in the next few sections.

7.4.1  The Effects of Applying Perturbation before Residual RCMC

Applying different perturbation functions to the phase of a point target results in a quartic phase error. The same procedure as in Section 6.4.1 can be applied to find the upper bound for this phase error.
The upper bound of the phase error is given by

Δφ_a,st = π η⁴_max Δa_st    (7.16)

Provision for this upper-bound error is not necessary when SRC is implemented, since range curvature correction is done before applying the perturbation function in the stationary receiver case. The sum of the uncompensated fourth-order phase error and this upper-bound phase limit should be kept below π/4 in the invariance region.

7.4.2  Invariance Region Analysis - SRC

The range extent that can be processed by the algorithm is limited by the use of the bulk SRC filter. This limitation only affects the wider-aperture cases where SRC is required. The SRC phase is computed using the same method given in Section 6.2.6, with all the receiver phase terms set to zero. Consider a reference Target D and an arbitrary Target Q with the same beam center crossing time. Target Q is at a slant range of ΔR_x from Target D. The phase error due to the use of a single bulk SRC can be computed by (7.17), where K_src,ref and K_src,Q are the residual FM rates caused by the cross coupling. They can be derived in the same manner as (6.32) in Section 6.2.6. This phase error should be within π/2. The maximum value of ΔR_x is one-half the swath width that can be processed by the algorithm.

7.4.3  Invariance Region Analysis - Cubic Phase Error

The azimuth-invariance region that can be handled by the algorithm is mainly limited by the uncompensated third-order phase error. The uncompensated third-order phase should be limited to a phase error of π/4,

| 4π a_ref,st Δη_y η³_max | ≤ π/4    (7.18)

where a_ref,st is the perturbation coefficient at the reference range. The maximum value of Δη_y determines the azimuth-invariance region that can be accommodated by the algorithm.

7.4.4  Invariance Region Analysis - Residual RCM

Range curvature difference is the largest between the reference range and the edge targets.
This range mismatch should be kept within one range cell to avoid significant degradation in the impulse response. If an arbitrary Target Q is focused at η = Δη_y and at the same range [ R_T0 + R_R0 ] as the reference point, then the tolerable limit for the overlapping of QRCM is given by (7.19), where R_T0Q is the closest range of approach of Target Q to the transmitter. R_T0Q and Δη_y have to be determined numerically.

7.4.5  Invariance Region Analysis - Quartic Phase Errors

It is important to consider the quartic phase only for high resolution and very wide apertures. The quartic phase is assumed to be the same over the invariance region, and it is given by

φ_4th,st(R_T0) = (2π/λ) V⁴ η⁴ / ( 8 R_T0³ )    (7.20)

The limit on the fourth-order phase is then given by

Δφ_4th,st = | φ_4th,st(R_T0) − φ_4th,st(R_T0Q) | + Δφ_a,st < π/4    (7.21)

Δφ_a,st is the quartic phase error that results from applying the same perturbation function across different range cells, as discussed in Section 7.4.1. Δφ_a,st vanishes if the SRC stage is implemented.

7.5  Simulation Example

The NLCS was used to focus a short-wavelength X-band stationary receiver case in [44]. In this simulation, the NLCS is used to focus a long-wavelength (S-band) case that has range curvature stretching over several range cells (7.1 range resolution cells).

7.5.1  Simulation Parameters

The simulation uses the airborne SAR parameters given in Table 7.1. An illustration of the geometry for this simulation is given in Figure 7.4. The oversampling ratio is 1.2 in range and 2.0 in azimuth, so the theoretical range resolution is 2.08 m (1.06 cells) and the azimuth resolution is 1.42 m (1.77 cells), based on the definitions given in Section 2.4.3 and Section 2.4.4. A rectangular window is used in both the range and azimuth processing.

Table 7.1: Simulation parameters for the stationary receiver case.
Simulation parameter                          Transmitter       Receiver
Velocity in x-direction                       0 m/sec           0 m/sec
Velocity in y-direction                       200 m/sec         0 m/sec
Velocity in z-direction                       0 m/sec           0 m/sec
Wavelength                                    0.08 m
Range bandwidth                               200 MHz
Doppler bandwidth                             125 Hz
Integration time                              4.3 sec
Altitude                                      4000 m            500 m
Range to point target at η = 0 sec            17222 m           4792 m
Range to point target at η = 5 sec (η_d)      17171 m           4844 m
Distance between airplanes at η = 0           12486 m
Minimum distance between platforms            12486 m
Maximum distance between platforms            12593 m
Nominal value of a_st                         5.85 × 10⁻⁴ sec⁻⁴
Nominal value of Δa_st                        −1.17 × 10⁻⁷ sec⁻⁴

7.5.2  Analysis of Invariance Region and Limitations

Using the nominal value of a_st, the azimuth-invariance region is about 2.15 km. SRC is not necessary for this configuration, therefore there is not much restriction on the range swath. The QRCM mismatch between the reference point target and the edge target is about 2% of a range resolution cell, so point targets in the same range gates have negligible mismatch in range curvature overlap. The magnitude of the uncompensated fourth-order phase error due to the range-varying perturbation function is less than 2.1 × 10⁻⁵ π, therefore there is little impact on the focusing accuracy from switching the FM equalization stage with the range curvature correction stage. The amount of spectrum shift is f_shift,s = 0.15 Hz, which is less than 0.2% of the spectrum, therefore no aliasing of the azimuth spectrum occurs. The misregistration equation in (7.8) gives a small misregistration of 1 m at the edge of the azimuth array.

Figure 7.4: Flight geometry for moving transmitter with stationary receiver.

7.5.3  Simulation Results

Figure 7.5 shows a well-focused image. The image is properly registered and is shown in Figure 7.6.
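Two of the numbers quoted in Section 7.5.2 can be reproduced from the Table 7.1 values alone. The sketch below assumes the spectrum-shift relation f_shift,s = 2 a_st η_d³ of (7.7), and assumes that the cubic-phase limit takes the form 4π a_st Δη η³ ≤ π/4 evaluated at the half-aperture time; both are working assumptions of this check, not quotations from the text.

```python
a_st = 5.85e-4        # nominal perturbation coefficient (sec^-4), Table 7.1
eta_d = 5.0           # beam-centre crossing time of the edge target (sec)
eta_max = 4.3 / 2.0   # half the integration time (sec)
V = 200.0             # transmitter velocity (m/sec)

# (7.7): azimuth spectrum shift of the edge target
f_shift = 2.0 * a_st * eta_d**3
print(round(f_shift, 2))              # 0.15 Hz, as quoted in Section 7.5.2

# assumed cubic-phase limit: 4*pi*a_st*d_eta*eta_max^3 <= pi/4
d_eta = 1.0 / (16.0 * a_st * eta_max**3)
print(round(V * d_eta / 1000.0, 2))   # 2.15 km azimuth-invariance region
```

Both outputs match the values quoted in the analysis, which supports the stated invariance-region estimate.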
All the range point targets are very well focused: the range PSLR deviates from the theoretical value of -13.3 dB by less than 0.05 dB, and the range ISLR is within 0.05 dB of the expected value of -10.0 dB. The point target quality measurements for the azimuth direction are given in Table 7.2. Targets further away from the mid-aperture (η = 0) suffer from an increasing third-order phase error, which shows up clearly in their asymmetrical sidelobes. The impulse responses of a pair of corner point targets are shown in Figure 7.7 and Figure 7.8.

Figure 7.5: Focused map for the stationary receiver case.

Figure 7.6: Focused map registered to ground plane.

Figure 7.7: Impulse response of corner point target 1 (range and azimuth cuts).

Table 7.2: Simulation results for point target azimuth impulse response.

TGT    1       2      3      4      5      6      7      8      9      10
IRW    1.80    1.75   1.77   1.756  1.75   1.77   1.77   1.77   1.76   1.78
PSLR   -12.2   -11.5  -11.9  -11.8  -11.7  -12.4  -12.4  -12.6  -12.4  -12.8
ISLR   -10.5   -9.37  -9.80  -9.55  -9.36  -9.89  -9.85  -9.93  -9.70  -10.2

TGT    11      12     13     14     15     16     17     18     19     20
IRW    1.75    1.76   1.77   1.75   1.80   1.77   1.77   1.77   1.76   1.78
PSLR   -12.8   -13.0  -13.2  -12.9  -13.9  -12.4  -12.4  -12.6  -12.4  -12.8
ISLR   -9.46   -9.65  -9.96  -9.56  -10.8  -9.89  -9.85  -9.93  -9.70  -10.2

TGT    21      22     23     24     25
IRW    1.80    1.75   1.77   1.76   1.75
PSLR   -12.2   -11.5  -11.9  -11.8  -11.7
ISLR   -10.54  -9.37  -9.80  -9.55  -9.36

Figure 7.8: Impulse response of corner point target 21 (range and azimuth cuts).
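The theoretical range PSLR of -13.3 dB quoted above is the first-sidelobe level of an unweighted sinc impulse response, which a few lines can confirm:

```python
import numpy as np

# Peak sidelobe of |sinc(x)| = |sin(pi x)/(pi x)|: the first sidelobe
# lies between the first two zeros of the sinc, at x = 1 and x = 2.
x = np.linspace(1.0, 2.0, 100001)
side = np.abs(np.sinc(x)).max()     # np.sinc is the normalised sinc
pslr_db = 20.0 * np.log10(side)     # mainlobe peak is 1 at x = 0
print(round(pslr_db, 1))            # -13.3 dB
```

With a rectangular window in both dimensions, as used in this simulation, this is the best PSLR an ideal point target can achieve.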
Chapter 8

Conclusions

8.1  Summary

The objectives of this thesis have been to investigate the processing of bistatic SAR data synthesized using different geometrical configurations and to develop processing algorithms to focus those data. The following is a summary of the work and contributions contained in the thesis.

The theory of bistatic SAR imaging has been reviewed, and bistatic SAR processing has been presented in terms of matched filtering. The importance of the impulse response of a point target as a measure of system performance has been highlighted with the point target quality measures: resolution, PSLR and ISLR. The geometrical dependence of the bistatic resolution measurement has also been discussed in detail. To avoid ambiguity, impulse response parameters are measured in the range time and azimuth time domains.

A signal model was introduced to describe a general bistatic configuration. A common notation was used to describe and review a broad selection of the existing bistatic algorithms. They can be broadly characterized into time-domain-based algorithms and frequency-based algorithms. Time-domain algorithms have excellent phase accuracy and can be used for any bistatic configuration; however, they are computationally intensive. Frequency-based algorithms are generally more efficient, but most have to restrict processing either to azimuth-invariant bistatic configurations or to small invariance regions. The frequency-based algorithms can be further sub-divided into three categories: numerical methods, point target spectrum methods and pre-processing methods. Overcoming the inversion of the DSR is the most important step of the frequency-based methods, and the way each of these methods handles the problem has been briefly discussed with a few prominent algorithms.
The two-dimensional point target spectrum for the general bistatic case is developed by expressing the bistatic range equation as a power series and using the method of series reversion to express azimuth time as a function of azimuth frequency. This results in a power series expression for the spectrum of the point target. Deriving the point target spectrum in this new way is known as the Method of Series Reversion (MSR). The accuracy of the spectrum is controlled by the number of terms used in the expansion of the power series. The accuracy of the derived spectrum is verified using a simulation in which the point target is simulated in the time domain, then compressed using a two-dimensional matched filter derived from the spectrum. The MSR is also applicable to monostatic stripmap and spotlight situations where the simple hyperbolic range equation does not hold.

Two ways of deriving an approximate bistatic point target spectrum were examined. It has been shown that the LBF is a special case of the spectrum formulated using the reversion of series. The method of splitting the phase terms into a quasi-monostatic term and a bistatic deformation term becomes inefficient when the bistatic degree is high. An alternative geometrical method of deriving Rocca's smile operator was also shown.

A new bistatic RDA has been derived using series reversion to obtain an analytical expression for the two-dimensional point target spectrum. It applies to azimuth-invariant cases where the transmitter and receiver are operating in parallel tracks with the same velocity. Arbitrary accuracy can be achieved by retaining sufficient terms in the series reversion. SRC is applied in the two-dimensional frequency domain to obtain good accuracy with antenna squint and wide apertures. This approach can also be applied in the development of a CSA for azimuth-invariant bistatic cases.

SRC is used to correct for the range azimuth coupling in the received data.
The severity of the coupling increases with squint angle, length of the synthetic aperture and range bandwidth. The SRC term depends on range, but this dependence cannot be accommodated in the two-dimensional frequency domain. Thus, the SRC filtering process imposes a limit on the range extent that can be processed in one batch. However, this can be overcome by the use of range invariance regions with overlapping range blocks. The width of the range invariance region is determined by the severity of the range azimuth coupling. For less severe coupling, an approximate and more efficient RDA can be used, which is similar to the monostatic RDA.

The NLCS algorithm was reviewed and found to be a suitable algorithm to process bistatic SAR data. The original algorithm is extended to process bistatic data acquired on parallel or slightly non-parallel flight paths using a third-order perturbation function. The extended algorithm retains all the advantages of the original algorithm: the ability to process highly squinted geometries, efficiency approaching that of monostatic algorithms, and the capacity to handle some amount of azimuth variance in the signal data. The algorithm is improved further by including a residual RCMC step, which allows the algorithm to handle moderate-wavelength cases (e.g., C-band and S-band). The azimuth focusing is done efficiently with the use of a frequency-domain matched filter developed using the analytical MSR spectrum. For longer-wavelength cases, the use of SRC to reduce the range Doppler coupling effects would be necessary. SRC is applied in the two-dimensional frequency domain and is normally avoided, as it drastically reduces the efficiency of the algorithm. The NLCS method is therefore more suitable for focusing moderate- to short-wavelength bistatic data. A registration technique was introduced to register the focused image to the ground plane.

The following simulations were presented to verify the accuracy of the algorithms developed.
The simulation examples cover several configurations:

• C-band system with parallel flight paths and fixed baseline.

• C-band system with non-parallel flight paths and non-equal platform velocities.

• S-band system with a transmitter imaging at broadside and a stationary receiver.

In addition, the bistatic RDA has been applied to focus real bistatic data from FGAN.

8.2  Contributions of this Dissertation

This dissertation makes a number of significant contributions to bistatic SAR image formation and the development of bistatic SAR algorithms to accommodate various bistatic SAR geometries. The following is a list of the major contributions of this thesis:

1. Derivation of the power series representation of the general bistatic point target spectrum (MSR) in Section 4.2.

2. Linking the spectral results of the MSR with the LBF. The MSR can be formulated as a TSPP using a Taylor series expansion of the bistatic phase about the two monostatic stationary points. Section 4.4 shows how the LBF is a special case of the more general TSPP formulation.

3. Section 4.6 shows how the bistatic deformation term of the LBF is geometrically related to Rocca's smile operator. An alternative method to derive Rocca's smile operator using a simple geometrical method is shown in Section 4.6.1.

4. Section 5.2 developed the bistatic RDA, which is able to handle highly squinted, wide-aperture azimuth-invariant cases. This method was applied successfully to focus real bistatic data.

5. The NLCS algorithm was originally only able to handle the stationary receiver case. Section 6.2 shows how the NLCS algorithm is extended to handle the bistatic configuration where the platforms are flying on parallel or slightly non-parallel tracks. The limitations, invariance region analysis and registration of the focused image to the ground plane were also discussed and analyzed in detail.

6.
Improvements of the NLCS algorithm to handle wider-aperture and longer-wavelength cases were made possible through the use of the MSR formulation. The MSR formulation was applied to the algorithm to:

• remove residual RCM in the range Doppler domain

• handle range Doppler coupling via the use of a bulk SRC filter

• perform azimuth compression efficiently through the use of a frequency-domain matched filter

The following is a list of the minor contributions of this thesis:

1. The impact of applying several perturbation coefficients across the phase of a point target has been studied and found to have a minor impact on the accuracy of the NLCS algorithm. Likewise, it is possible to perform FM rate equalization before residual RCM correction without severe impact on the accuracy of the algorithm. Section 6.4.1 investigated the impact for the parallel case and Section 7.4.1 studied the impact for the stationary receiver case.

2. Appendix D shows an improvement to the ω-K algorithm that allows it to focus the configuration where both platforms are flying in a plane non-parallel to the ground plane.

The results of this thesis will be useful to a number of agencies working on bistatic SAR development, including the author's sponsoring company, DSO National Labs in Singapore.

8.3  Further Work

The MSR derived in this thesis can be used to formulate other algorithms. The spectral result can be used to derive a more accurate formulation of the bistatic deformation term (see TSPP in Section 4.4.1) for algorithms using the LBF. Rocca's smile operator can also be improved using the MSR. Monostatic algorithms that previously limited the phase expansion to the fourth-order terms, such as [95], can now handle even wider apertures by systematically expanding more terms. This spectrum can also accommodate geometries with flight paths that are not straight.

The NLCS assumes a constant linear phase and a constant LRCM within an invariance region.
For longer wavelengths and steeper incidence angles, the change of squint angle with range is more rapid, causing the processing to be valid in only a small invariance region. The use of a non-linear RCM may help to increase the size of the invariance region that can be processed.

Most frequency-based bistatic SAR processing algorithms are based on a flat-Earth assumption. For the monostatic case, point targets with the same range of closest approach have the same Doppler history; they are focused to the same range even if they have different heights. For the bistatic case, point targets which focus to the same range gate do not necessarily have an identical pair of closest ranges of approach. The flat-Earth topography assumption eliminates this ambiguity. Thus, for practical implementation, it may be necessary to know the height of the target or the geometry of the image plane using a DEM for proper focusing [73]. This requirement applies to all focusing algorithms, including time-domain-based algorithms. The accuracy of focusing point targets with different elevations from the imaging plane is an important area that could be investigated.

This thesis has concentrated on the study of the focusing of bistatic data. The accuracy of the registration of the image to the ground plane has not been studied in detail. The registration of three-dimensional point targets is dependent on the flight geometries of the platforms. Combining position information from several platforms could improve the registration and is another area to explore.

Most of the work in this thesis is based on simulation and verifies the processing capability of the NLCS. Even so, the advantageous capability of processing highly squinted data has not been demonstrated on real data. Thus, it would be interesting to investigate the ability of the algorithm to focus real data when it becomes available.
The bistatic problem has been studied in detail in geophysics, and solutions exist for several two-dimensional and three-dimensional geometries. It would be interesting to study these seismic processing techniques and link them with their equivalent bistatic SAR counterparts. Some of these ideas have been introduced into SAR signal processing [30,32]. The stationary receiver case is similar to the so-called common shot profile (as named in geophysics) [96-98]. Another area worth exploring is diffraction tomography [99], a method that is able to produce three-dimensional images of the subsurface using geophysical diffraction tomography.

A linear flight path is often used since it is the least taxing on a platform's measurement unit [51]. Errors in the estimation of the flight path lead to phase errors in the processed image, degrading the image quality. Autofocus is a technique that can help to estimate the phase errors and refocus the image [48]. It can relax the accuracies needed in the navigation unit and thus help reduce overall cost. To date, very little has been published on the effectiveness of existing autofocus algorithms on bistatic SAR. This is another key area for future work.

Appendix A

Resolution

A.1  Vector Gradient of Bistatic Range

An arbitrary point target is given by the Cartesian coordinates (X₀, Y₀, Z₀). The transmitter is located at (X_T, Y_T, Z_T) and the receiver is at (X_R, Y_R, Z_R).
The total slant range R is given by the sum of R_T and R_R, where

|R_T| = √( (X₀ − X_T)² + (Y₀ − Y_T)² + (Z₀ − Z_T)² )    (A.1)

|R_R| = √( (X₀ − X_R)² + (Y₀ − Y_R)² + (Z₀ − Z_R)² )    (A.2)

The vector gradient of the slant range gives the direction of the steepest change in range. The x component of the vector is given by

∂R/∂x = (X₀ − X_T)/|R_T| + (X₀ − X_R)/|R_R|    (A.4)

Similarly,

∂R/∂y = (Y₀ − Y_T)/|R_T| + (Y₀ − Y_R)/|R_R|    (A.5)

∂R/∂z = (Z₀ − Z_T)/|R_T| + (Z₀ − Z_R)/|R_R|    (A.6)

Recognizing that u_t and u_r are defined by

u_t = [ (X₀ − X_T) x̂ + (Y₀ − Y_T) ŷ + (Z₀ − Z_T) ẑ ] / |R_T|    (A.7)

u_r = [ (X₀ − X_R) x̂ + (Y₀ − Y_R) ŷ + (Z₀ − Z_R) ẑ ] / |R_R|    (A.8)

gives the expression for ∇R in (2.23).

A.2  Vector Gradient of Bistatic Doppler

The Doppler signal of the point target recorded at the receiver is given by equation (2.30):

f_d = (1/λ) [ ( (X₀ − X_T) V_Tx + (Y₀ − Y_T) V_Ty + (Z₀ − Z_T) V_Tz ) / |R_T| + ( (X₀ − X_R) V_Rx + (Y₀ − Y_R) V_Ry + (Z₀ − Z_R) V_Rz ) / |R_R| ]    (A.9)

where the velocity of the transmitter is given by (V_Tx, V_Ty, V_Tz) and the velocity of the receiver is given by (V_Rx, V_Ry, V_Rz).

The vector gradient of the Doppler frequency ∇f_d is given by (2.31). The x component of the vector gradient is given by

∂f_d/∂x = (1/λ) [ ( V_Tx − (u_t · V_T) u_tx ) / |R_T| + ( V_Rx − (u_r · V_R) u_rx ) / |R_R| ]    (A.10)

Similar expressions can be derived for the y and z components:

∂f_d/∂y = (1/λ) [ ( V_Ty − (u_t · V_T) u_ty ) / |R_T| + ( V_Ry − (u_r · V_R) u_ry ) / |R_R| ]    (A.11)

∂f_d/∂z = (1/λ) [ ( V_Tz − (u_t · V_T) u_tz ) / |R_T| + ( V_Rz − (u_r · V_R) u_rz ) / |R_R| ]    (A.12)

Substituting (A.10) to (A.12) into the components of (2.31), ∇f_d can be written in the compact form (2.32).
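The gradient result for the bistatic range can be checked numerically. The platform and target positions below are illustrative; the sketch compares a central-difference gradient of the bistatic range against the sum of the unit vectors u_t + u_r from (2.23).

```python
import numpy as np

# Finite-difference check: the gradient of the bistatic range
# R = |P - T| + |P - Rx| with respect to the target position P
# equals u_t + u_r, the sum of the unit vectors from the transmitter
# and receiver towards the target. Positions are illustrative (metres).
T = np.array([0.0, 0.0, 4000.0])       # transmitter
Rx = np.array([12000.0, 0.0, 500.0])   # receiver
P = np.array([5000.0, 3000.0, 0.0])    # target

def bistatic_range(p):
    return np.linalg.norm(p - T) + np.linalg.norm(p - Rx)

u_t = (P - T) / np.linalg.norm(P - T)
u_r = (P - Rx) / np.linalg.norm(P - Rx)

h = 1e-3
grad_fd = np.array([
    (bistatic_range(P + h * e) - bistatic_range(P - h * e)) / (2.0 * h)
    for e in np.eye(3)
])
print(np.allclose(grad_fd, u_t + u_r, atol=1e-6))  # True
```

The same finite-difference pattern can be applied to the Doppler expression (A.9) to check (A.10)-(A.12).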
Appendix B

Loffeld's Bistatic Formulation

B.1  A Geometric Result on Two Quadratic Functions

The sum of two quadratic functions is a shifted and scaled quadratic function:

a(x′ − b)² + c(x′ − d)² = (a + c) ( x′ − (ab + cd)/(a + c) )² + ( ac/(a + c) ) (b − d)²    (B.1)

The result in (B.1) is applied to (3.22), leading to (3.23), with the substitutions a = φ̈_T(η_T), c = φ̈_R(η_R), x′ = η, b = η_T and d = η_R. Further, if

x′ = (ab + cd)/(a + c)    (B.2)

then the first term on the RHS of (B.1) becomes zero, leading to

a(x′ − b)² + c(x′ − d)² = ( ac/(a + c) ) (b − d)²    (B.3)

Equation (B.3) is equivalent to (4.34) with the substitutions a = φ̈_T(η_T), c = φ̈_R(η_R), x′ = η_b, b = η_T and d = η_R. The result holds only if the condition in (B.2) is met. Interestingly, this condition actually implies that η ≈ η_b, since the shift term on the RHS of (B.1) is the definition of η_b from (3.24).

B.2  Loffeld's Bistatic Formulation

The two-dimensional point target spectrum has two phase components: a quasi-monostatic term and a bistatic deformation term [29]. The quasi-monostatic term is given by

Ψ₁(f_τ, f_η) = −π f_η ( 2η_R0 + a′₀ ) − (2π/c) [ R_T0 √F_T(f_τ, f_η) + R_R0 √F_R(f_τ, f_η) ]    (B.4)

and the bistatic deformation term is given by

Ψ₂(f_τ, f_η) = [ φ̈_T(η_T) φ̈_R(η_R) ] / [ 2 ( φ̈_T(η_T) + φ̈_R(η_R) ) ] · (η_T − η_R)²    (B.5)

where

F_T(f_τ, f_η) = (f_τ + f₀)² − f_η² c² / V_T²    (B.6)

F_R(f_τ, f_η) = (f_τ + f₀)² − f_η² c² / V_R²    (B.7)

R_T0 is the range of closest approach for the transmitter and η_T0 is the corresponding azimuth time at which it occurs. R_R0 is the range of closest approach for the receiver and η_R0 is the corresponding azimuth time at which it occurs. η_T is the solution to the stationary point of the transmitter and η_R is the solution to the stationary point of the receiver. The coefficients a′₀ and a′₂ determine the bistatic grade of the configuration. They are defined by

a′₀ = η_T0 − η_R0    (B.8)

a′₂ = R_T0 / R_R0    (B.9)

Appendix C

Reversion of Series Formulation

C.1  Reversion of Series Formula

Series reversion is the computation of the coefficients of the inverse function given those of the forward function. For a function expressed as a series with no constant term (a₀ = 0),

y = a₁ x + a₂ x² + a₃ x³ + ...    (C.1)

the series expansion of the inverse series is given by

x = A₁ y + A₂ y² + A₃ y³ + ...    (C.2)

Substituting (C.1) into (C.2), the following equation is obtained:

y = a₁A₁ y + ( a₁A₂ + a₂A₁² ) y² + ( a₁A₃ + 2a₂A₁A₂ + a₃A₁³ ) y³ + ...    (C.3)

Equating coefficients then gives

A₁ = 1/a₁
A₂ = −a₁⁻³ a₂
A₃ = a₁⁻⁵ ( 2a₂² − a₁a₃ )
A₄ = a₁⁻⁷ ( 5a₁a₂a₃ − a₁²a₄ − 5a₂³ )    (C.4)

The mathematical formula book by Dwight [100] gives further coefficients for the reversion of series. A derivation of the explicit formula for the n-th term is given by Morse and Feshbach [85].

Appendix D

Bistatic SAR Image Reconstruction Using the Omega-K Algorithm

In this section, a numerical method of handling the DSR is shown using the ωKA implementation given in [35]. The platforms move with identical velocities and the flight paths lie in a plane parallel to the ground plane (a flat-Earth topography is assumed). Figure D.1 shows the geometry of the bistatic system.

An arbitrary point in the image (x_n, y_n) can be referenced by the closest range of approach, as in the monostatic case. For the bistatic case, ambiguity arises since there are two closest ranges of approach, as discussed in Section 3.4.1. For this algorithm, the semi-bistatic range γ_n is introduced to replace these two parameters, and it is defined as follows:

γ_n = ( R_T0P + R_R0P ) / 2    (D.1)

where R_T0P is the closest range of approach from the transmitter to an arbitrary point P, and R_R0P is the closest range of approach from the receiver to point P.

Thus, each point on the ground plane with the coordinate (x_n, y_n) has a corresponding coordinate (γ_n, y_n) in the γ-y plane.
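A minimal sketch of this re-parameterisation, with illustrative ranges (not taken from the thesis), confirms that the pair (γ_n, γ₁) encodes the two closest ranges of approach and that the resulting double-square-root range never falls below 2γ_n:

```python
import math

# Semi-bistatic range gamma_n (D.1) and half-difference gamma_1 (from D.3)
# for illustrative closest ranges of approach (metres).
R_T0P, R_R0P = 9000.0, 14000.0

gamma_n = 0.5 * (R_T0P + R_R0P)     # (D.1)
gamma_1 = 0.5 * (R_R0P - R_T0P)     # implied by (D.3)

assert gamma_n - gamma_1 == R_T0P   # R_T0P = gamma_n - gamma_1
assert gamma_n + gamma_1 == R_R0P   # R_R0P = gamma_n + gamma_1

# The DSR range history in the form of (D.6), with v = y_n - u and an
# illustrative half-baseline offset y1:
y1, v = 40.0, 250.0
R = math.hypot(gamma_n - gamma_1, v - y1) + math.hypot(gamma_n + gamma_1, v + y1)
print(R > 2.0 * gamma_n)            # True: 2*gamma_n is the lower bound
```

The lower bound follows from √(A² + a²) ≥ A applied to both square roots, which is why γ_n is a natural single range coordinate for the bistatic pair.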
Thus, each point on the ground plane with the coordinate (x , y ) has a corresponding n  186  n  Figure D . l : Bistatic geometry for azimuth-invariant case. coordinate of (7 ,y„) in the 7-y plane. n  The instantaneous range of a point target can now be expressed in the two parameters form Ry( ) U  = ^R\  o P  + (yn-U-  y  i  y  +  ^Rl  op  +  (y n  U  )2  yi  +  ( D  . ) 2  The parameter 7! satisfy the following equation [ see triangle MiM P in Figure D.l], 2  -RTOP = 7n - 7I #RoP = 7n + 7 i  (D-3)  Using the cosine rule, T  •Jl — Xi — cosf? 7n  187  n  (D.4)  where r is the one-way slant range from the equivalent monostatic system to point n  Target P and 8 is the squint angle. An approximation of the parameter X[ is given n  by T  x\ — cos 9  7i ^  (D.5)  0  7o  where r is the one-way slant range from the equivalent monostatic system to point 0  Target P and 8 is the squint angle. The instantaneous range can be expressed in terms of 7 and y. Q  0  Rv{v)  =  \/(7n -  7i)  2  + {v-  yi)  2  +  \/(7n -  7i)  2  + (v-  yi)  2  (D.6)  where v = y — u. n  At this point, the bistatic system is transform from a system where each point is reference by three parameters by two parameters  (RT P, 0  RR P, 0  2/n)  to one where each point can be reference  (7 ,y )n  n  In the monostatic case, the phase in the K — K plane is linearized using Stolt u  interpolation [25] to K — K as discussed in Section 3.4.1. We can focus the bistatic x  y  data by linearizing the phase in the K — K domain in the same manner. There is no n  y  simple analytical solution for the spectrum, therefore a numerical approach is used. To obtain the phase in the  — K domain, we must first obtain the twoy  dimensional spectrum in the K — K domain. Starting from the baseband range v  compressed signal, a FT is done in azimuth to get the two-dimensional spectrum of the signal. 
The phase of the Fourier integrand is given by ij>(K, K , v; , y ) = v  7 n  n  ( A Y ( 7 n  + Ky v  n  -  7i)  2  - Kv  + (v -  y i )  2  +  V (  7  „ +  7i)  2  +  (v + 2/1) ) 2  (D.7)  v  Using the method of stationary phase and setting the derivative of the phase to zero, 188  the following equation is derived K = !<(  V  W h n -  ~  V  uY  +  l  V  + iv-yif  V(^  +  + n)  + (v +  2  (D8)  \  Vl  yi) J 2  Unlike the monostatic case where v can be expressed in terms of an analytical solution of K , K and 7 , the bistatic case would require a numerical method to obtain the u  n  u  solution. We define this numerical solution as v*(K, K ,j). v  The next step is to linearize the two-dimensional spectrum from K—K to K^ — K v  y  domain. In [35], the method to linearize the phase is to rewrite the phase of the spectrum as i>(K, K , v*- , y ) = K (K, K ) v  7n  n  7  v  + K (K, K )y + MK,  ln  y  v  K)  n  (D.9)  y  where the terms K (K, K ), K (K, K ) and ipo(K, K ) can be obtained by performing 7  v  y  v  y  a Taylor series expansion of ip(K, K , v*\7„, y ) around 70 and equating the coefficients. v  n  Expanding the phase up to the linear term, ip{K,K ;7 ,y ) v  n  = ip {K, K ,  n  0  y  dtp , y ) + Q^( , K  7 o  y ){ln  ;TO,  K  0  ~  n  V  7o)  + ^(K,K ,y )(y -y ) v]ln  0  n  (D.10)  0  and equating the coefficients, 7 0  K (K K ) = K (  W(ln-7i)  '  7  K (K,K ) y  v  ~ 2  +  7 7  1 0  + (v* -yi)  Vhn  2  0  +  V n 11)  1 1  + 7r)  2  + (v*o + yr) ) 2  = K  '  (D.12)  v  7/  ~  7o7/  +  y  2  -  7/  VIVQ  V(7o - 7/) + K - yi)  7o7/  v (7o + 7/)  2  2  +  /  + 2  Vi  +  VWQ  (D.13) 2  + (^ + y/).  where VQ is a solution (D.8) with 7 = 70 and y = yo- The bistatic phase term n  ipo(K, K ) exists only for the bistatic configuration and vanishes for the monostatic v  case. 189  The origin of the coordinate system needs to be shifted to a reference point (usually the scene center) to reduce the bandwidth of the signal processed. 
This step can be done together with the removal of the bistatic phase term. Stolt interpolation [25] is performed by solving (D.8) and determining where to extract the phase in the K-K_v domain for each point in the K_γ-K_y domain. Once this linearization process is done, the focused image can be obtained by a two-dimensional IFT.

This algorithm is only able to handle configurations where the platforms are offset in the x and y directions, which means that both platforms must lie in a plane parallel to the ground plane. Thus, the algorithm is not able to handle the case where the platforms are in a plane non-parallel to the ground plane. This limitation can be lifted by including another parameter, z₁. The approximation for γ₁ becomes

    γ₁ = sqrt(x₁² + z₁²) (r_n/γ_n) cos θ_n ≈ sqrt(x₁² + z₁²) (r₀/γ₀) cos θ₀    (D.14)

Appendix E

Proof for DMO

This section gives the proof of the relation between the dipping angle θ_d, the bistatic survey time t_b(θ_d) and the monostatic survey time t_m(θ_d). Figure E.1 gives the bistatic system geometry for computing the DMO operator. This diagram is extracted from Figure 3.4 with the baseline extended to point A. The dipping layer is represented by line AC. The result here proves the relation given in (3.28).

Triangle ACS is congruent to triangle ACB. The distance c·t_b(θ_d) is given by line SS' and by line RB. The distance c·t_m(θ_d) is given by line MM' and by SN. Considering triangle NSS',

    SS' = c t_b = SN / cos(θ_i)    (E.1)

Figure E.1: Bistatic system geometry used to find the relation between the dipping angle, the bistatic survey time and the monostatic survey time.

In triangle SRB, the sine rule gives

    sin(∠RBS)/(2h) = sin(90° - ∠BSP)/RB

With ∠RBS = θ_i, ∠BSP = θ_d and RB = c t_b, this becomes sin(θ_i) = (2h/(c t_b)) cos(θ_d), so that

    cos²(θ_i) = 1 - (4h²/(c² t_b²)) cos²(θ_d)    (E.2)

Combining (E.1) and (E.2), we get

    t_b²(θ_d) = t_m²(θ_d) + (4h²/c²) cos²(θ_d)    (E.3)

Bibliography

[1] D. Massonnet.
Capabilities and Limitations of the Interferometric Cartwheel. IEEE Trans. on Geoscience and Remote Sensing, Vol. 39(3):506-520, March 2001.

[2] D. Martinsek and R. Goldstein. Bistatic Radar Experiment. In Proc. European Conference on Synthetic Aperture Radar, EUSAR'98, pages 31-34, Friedrichshafen, Germany, May 1998.

[3] G. Krieger and A. Moreira. Multistatic SAR satellite formations: potentials and challenges. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'05, pages 2680-2684, Seoul, Korea, July 2005.

[4] G. Krieger, H. Fiedler, and A. Moreira. Bi- and Multistatic SAR: Potential and Challenges. In Proc. European Conference on Synthetic Aperture Radar, EUSAR'04, pages 365-370, Ulm, Germany, May 2004.

[5] H. M. Jones. Predicted properties of bistatic satellite images. In IEEE National Radar Conference, pages 286-291, April 1993.

[6] H. D. Griffiths and N. R. W. Long. Television-based bistatic radar. IEE Proceedings, Vol. 133(7):649-657, 1986.

[7] P. Hartl and H. M. Braun. A Bistatic Parasitical Radar (BIPAR). In JPRS Report: Science and Technology. USSR: Space. 16th International Congress of the International Society for Photogrammetry and Remote Sensing, ISPRS, pages 11-19, Kyoto, Japan, January 1989.

[8] L. Cazzani, C. Colesanti, D. Leva, G. Nesti, C. Prati, F. Rocca, and D. Tarchi. A ground-based parasitic SAR experiment. IEEE Trans. on Geoscience and Remote Sensing, Vol. 38(5):2132-2141, 2001.

[9] H. D. Griffiths. From a Different Perspective: Principles, Practice and Potential of Bistatic Radar. In Proceedings of the International Radar Conference, pages 1-7, September 2003.

[10] N. J. Willis. Bistatic Radar. Artech House, Norwood, MA, 1991.

[11] J. L. Autermann. Phase Stability Requirements for a Bistatic SAR. In National Radar Conference, pages 48-52, Atlanta, 1984.

[12] I. Walterscheid, A. Brenner, and J. Ender. Geometry and System Aspects for a Bistatic Airborne SAR-Experiment. In Proc. European Conference on Synthetic Aperture Radar, EUSAR'04, pages 567-570, Ulm, Germany, May 2004.

[13] M. Weiss. Determination of baseline and orientation of platforms for airborne bistatic radar. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'05, pages 1967-1970, Seoul, Korea, July 2005.

[14] P. Dubois-Fernandez, H. Cantalloube, B. Vaizan, G. Krieger, R. Horn, M. Wendler, and V. Giroux. ONERA-DLR bistatic SAR campaign: planning, data acquisition, and first analysis of bistatic scattering behaviour of natural and urban targets. IEE Proc. Radar, Sonar and Navigation, Vol. 153(3):214-223, 2006.

[15] G. Yates and A. M. Horne. Bistatic SAR image formation. In Proc. European Conference on Synthetic Aperture Radar, EUSAR'04, pages 581-584, Ulm, Germany, May 2004.

[16] J. Ender. A step to bistatic SAR processing. In Proc. European Conference on Synthetic Aperture Radar, EUSAR'04, pages 359-364, Ulm, Germany, May 2004.

[17] I. G. Cumming and F. H. Wong. Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation. Artech House, Norwood, MA, 2005.

[18] A. W. Rihaczek. Principles of High-Resolution Radar. Artech House, Norwood, MA, 1996.

[19] M. J. Jin and C. Wu. A SAR Correlation Algorithm Which Accommodates Large Range Migration. IEEE Trans. Geoscience and Remote Sensing, 22(6):592-597, November 1984.

[20] B. Barber. Theory of digital imaging from orbital synthetic aperture radar. International Journal of Remote Sensing, 6(6):1009-1057, 1985.

[21] C. Wu. A Digital System to Produce Imagery from SAR Data. In AIAA Conference: System Design Driven by Sensors, October 1976.

[22] J. R. Bennett and I. G. Cumming. A Digital Processor for the Production of SEASAT Synthetic Aperture Radar Imagery. In Proc. SURGE Workshop, ESA Publication No. SP-154, Frascati, Italy, July 16-18, 1979.

[23] C. Wu, K. Y. Liu, and M. J. Jin. A Modeling and Correlation Algorithm for Spaceborne SAR Signals. IEEE Trans. on Aerospace and Electronic Systems, Vol. AES-18(5):563-574, September 1982.

[24] R. K. Raney, H. Runge, R. Bamler, I. G. Cumming, and F. H. Wong. Precision SAR Processing Using Chirp Scaling. IEEE Trans. Geoscience and Remote Sensing, Vol. 32(4):786-799, July 1994.

[25] R. H. Stolt. Migration by Fourier Transform. Geophysics, Vol. 43(1):23-48, February 1978.

[26] C. Cafforio, C. Prati, and F. Rocca. Full Resolution Focusing of SEASAT SAR Images in the Frequency-Wave Number Domain. In Proc. 8th EARSeL Workshop, pages 336-355, Capri, Italy, May 17-20, 1988.

[27] C. Cafforio, C. Prati, and F. Rocca. SAR Data Focusing Using Seismic Migration Techniques. IEEE Trans. on Aerospace and Electronic Systems, Vol. 27(2):194-207, March 1991.

[28] R. K. Raney. A New and Fundamental Fourier Transform Pair. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'92, pages 106-107, Clear Lake, TX, May 1992.

[29] O. Loffeld, H. Nies, V. Peters, and S. Knedlik. Models and Useful Relations for Bistatic SAR Processing. IEEE Trans. on Geoscience and Remote Sensing, 42(10):2031-2038, October 2004.

[30] A. M. Guarnieri and F. Rocca. Reduction to monostatic focusing of bistatic or motion compensated SAR surveys. IEE Proc. Radar, Sonar and Navigation, 153(3):199-207, 2006.

[31] R. Bamler and E. Boerner. On the Use of Numerically Computed Transfer Functions for Processing of Data from Bistatic SARs and High Squint Orbital SARs. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'05, volume 2, pages 1051-1055, Seoul, Korea, July 2005.

[32] D. D'Aria, A. M. Guarnieri, and F. Rocca. Focusing Bistatic Synthetic Aperture Radar using Dip Move Out. IEEE Trans. on Geoscience and Remote Sensing, 42(7):1362-1376, July 2004.

[33] Y. Neo, F. Wong, and I. G. Cumming. A two-dimensional spectrum for bistatic SAR processing using series reversion. IEEE Geoscience and Remote Sensing Letters, accepted for publication, 2006.

[34] I. Walterscheid, J. H. G. Ender, A. R. Brenner, and O. Loffeld. Bistatic SAR processing using an ω-k type algorithm. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'05, Seoul, Korea, August 2005.

[35] V. Giroux, H. Cantalloube, and F. Daout. An Omega-K algorithm for SAR bistatic systems. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'05, Seoul, Korea, August 2005.

[36] V. Giroux, H. Cantalloube, and F. Daout. Frequency Domain Algorithm for Bistatic SAR. In Proc. European Conference on Synthetic Aperture Radar, EUSAR'06, Dresden, Germany, May 2006.

[37] M. Soumekh. Bistatic synthetic aperture radar inversion with application in dynamic object imaging. IEEE Trans. on Signal Processing, Vol. 39(1):2044-2055, 1992.

[38] D. Hale. Dip-moveout by Fourier Transform. Geophysics, 49(14):741-757, June 1984.

[39] F. H. Wong, N. L. Tan, and T. S. Yeo. Effective Velocity Estimation for Spaceborne SAR. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'00, volume 1, pages 90-92, Honolulu, HI, July 2000.

[40] A. Papoulis. Signal Analysis. McGraw-Hill, New York, 1977.

[41] O. Loffeld, H. Nies, H. Gebhardt, V. Peters, and S. Knedlik. Bistatic SAR: Some Reflections on Rocca's Smile. In Proc. European Conference on Synthetic Aperture Radar, EUSAR'04, pages 379-383, Ulm, Germany, May 2004.

[42] M. Rodriguez-Cassola, G. Krieger, and M. Wendler. Azimuth-invariant, Bistatic Airborne SAR Processing Strategies Based on Monostatic Algorithms. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'05, Seoul, Korea, August 2005.

[43] P. Dubois-Fernandez, O. Ruault du Plessis, M. Wendler, R. Horn, G. Krieger, B. Vaizan, H. Cantalloube, D. Heuz, and B. Gabler. The ONERA-DLR Bistatic Experiment: Design of the Experiment and Preliminary Results. In Proc. Advanced SAR Workshop, Canadian Space Agency, Saint-Hubert, Quebec, June 25-27, 2003.

[44] F. H. Wong and T. S. Yeo. New Applications of Non-Linear Chirp Scaling in SAR Data Processing. IEEE Trans. on Geoscience and Remote Sensing, Vol. 39(5):946-953, May 2001.

[45] E. Hanle. Survey of bistatic and multistatic radar. IEE Proceedings, Vol. 133:587-595, 1986.

[46] U. Gebhardt, O. Loffeld, H. Nies, and J. Ender. Bistatic airborne/spaceborne hybrid experiment: basic considerations. In Sensors, Systems, and Next-Generation Satellites IX, volume 5427, pages 479-488. SPIE, October 2005.

[47] U. Gebhardt, O. Loffeld, H. Nies, K. Natroshvili, and J. H. G. Ender. Bistatic Airborne/Space Hybrid Experiment: Simulation and Analysis. In Proc. European Conference on Synthetic Aperture Radar, EUSAR'06, Dresden, Germany, May 2006.

[48] W. G. Carrara, R. S. Goodman, and R. M. Majewski. Spotlight Synthetic Aperture Radar: Signal Processing Algorithms. Artech House, Norwood, MA, 1995.

[49] H. Nies, O. Loffeld, K. Natroshvili, A. Medrano-Ortiz, and J. H. G. Ender. A Solution for Bistatic Motion Compensation. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'06, Denver, United States, August 2006.

[50] V. Giroux, H. Cantalloube, and F. Daout. Motion Compensation for Bistatic SAR Frequency Domain Algorithm. In Proc. European Conference on Synthetic Aperture Radar, EUSAR'06, Dresden, Germany, May 2006.

[51] B. D. Rigling. Signal Processing Strategies for Bistatic Synthetic Aperture Radar. PhD thesis, Dept. of Electrical Engineering, Ohio State University, Columbus, Ohio, September 2003.

[52] A. V. Oppenheim and A. S. Willsky. Signals and Systems. Prentice Hall, Upper Saddle River, NJ, 2nd edition, 1996.

[53] Y. L. Neo, I. G. Cumming, and F. H. Wong. Bistatic SAR Processing Using Non-linear Chirp Scaling. In Proc. CEOS SAR Calibration Workshop (ESA), Ulm, Germany, May 27-28, 2004.

[54] S. W. McCandless. SAR in Space: The Theory, Design, Engineering and Application of a Space-Based SAR System. In Space-Based Radar Handbook, L. J. Cantafio (ed.), chapter 4. Artech House, Norwood, MA, 1989.

[55] J. Curlander and R. McDonough. Synthetic Aperture Radar: Systems and Signal Processing. John Wiley & Sons, New York, 1991.

[56] R. K. Raney. A Comment on Doppler FM Rate. International Journal of Remote Sensing, Vol. 8(7):1091-1092, January 1987.

[57] J. R. Wertz and W. J. Larson. Space Mission Analysis and Design. Kluwer Academic Publishers, 3rd edition, 1999.

[58] R. K. Raney, A. P. Luscombe, E. J. Langham, and S. Ahmed. RADARSAT. Proc. of the IEEE, Vol. 79(6):839-849, 1991.

[59] E. L. Key, E. N. Fowle, and R. D. Haggarty. A Method of Designing Signals of Large Time-Bandwidth Product. In IRE Intern. Conv. Record, volume 4, pages 146-154, March 1961.

[60] E. Kreyszig. Advanced Engineering Mathematics. John Wiley & Sons, New York, NY, 8th edition, 1999.

[61] A. V. Oppenheim, R. W. Schafer, and J. R. Buck. Discrete-Time Signal Processing. Prentice Hall, Upper Saddle River, NJ, 2nd edition, 1999.

[62] M. C. Jackson. The geometry of bistatic radar systems. IEE Proceedings, Vol. 133:604-612, 1986.

[63] L. R. Moyer, C. J. Morgan, and D. A. Rugger. An exact expression for resolution cell area in special case of bistatic radar systems. IEEE Transactions on Aerospace and Electronic Systems, Vol. 25(4):584-587, July 1989.

[64] G. P. Cardillo. On the use of the gradient to determine bistatic SAR resolution. In Antennas and Propagation Society International Symposium, volume 2, pages 1032-1035, May 1990.

[65] I. Walterscheid, A. R. Brenner, and J. H. G. Ender. Results on bistatic synthetic aperture radar. Electronics Letters, 40(19):1224-1225, September 2004.

[66] T. Zeng, M. Cherniakov, and T. Long. Generalized Approach to Resolution Analysis in BSAR. IEEE Transactions on Aerospace and Electronic Systems, 41(2):461-474, April 2005.

[67] M. Soumekh. Synthetic Aperture Radar Signal Processing with MATLAB Algorithms. Wiley-Interscience, New York, 1999.

[68] D. C. Munson, J. D. O'Brian, and W. K. Jenkins. A Tomographic Formulation of Spotlight Mode Synthetic Aperture Radar. Proc. of the IEEE, Vol. 71:917-925, 1983.

[69] C. V. Jakowatz, D. E. Wahl, P. H. Eichel, D. C. Ghiglia, and P. A. Thompson. Spotlight-Mode Synthetic Aperture Radar: A Signal Processing Approach. Kluwer Academic Publishers, Boston, MA, 1996.

[70] S. Xiao, D. C. Munson, S. Basu, and Y. Bresler. An O(N² log N) Back-Projection Algorithm for SAR Image Formation. Proc. of the IEEE, Vol. 42(7):1362-1376, 2004.

[71] S. Basu and Y. Bresler. O(N² log N) Filtered Backprojection Reconstruction Algorithm for Tomography. IEEE Transactions on Image Processing, 9:1760-1773, October 2000.

[72] Y. Ding and D. C. Munson. A Fast Back-Projection Algorithm for Bistatic SAR Imaging. In Proc. Int. Conf. on Image Processing, ICIP 2002, volume 2, pages 449-452, Rochester, NY, September 22-25, 2002.

[73] J. Sanz-Marcos, P. Prats, and J. Mallorqui. Bistatic Fixed-receiver Parasitic SAR Processor Based on the Back-propagation Algorithm. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'05, Seoul, Korea, August 2005.

[74] F. Rocca, C. Cafforio, and C. Prati. Synthetic Aperture Radar: A New Application for Wave Equation Techniques. Geophysical Prospecting, Vol. 37:809-830, 1989.

[75] A. K. Jain. Fundamentals of Digital Image Processing. Prentice-Hall, New Jersey, 1989.

[76] A. S. Milman. SAR Imaging by ω-k Migration. International Journal of Remote Sensing, Vol. 14(1):1965-1979, 1993.

[77] C. H. Edwards. Advanced Calculus of Several Variables. Dover Publications, New York, 1995.

[78] W. Kaplan. Advanced Calculus. Addison-Wesley, Reading, MA, 3rd edition, 1984.

[79] I. Cumming, Y. L. Neo, and F. H. Wong. Interpretations of the Omega-K Algorithm and Comparisons with other Algorithms. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'03, Toulouse, France, July 2003.

[80] J. Leader. Numerical Analysis and Scientific Computation. Addison-Wesley, Reading, MA, 1st edition, 2004.

[81] I. Walterscheid, J. H. G. Ender, A. R. Brenner, and O. Loffeld. Bistatic SAR Processing and Experiments. IEEE Trans. Geoscience and Remote Sensing, Vol. 44(10):2710-2717, October 2006.

[82] K. Natroshvili, O. Loffeld, H. Nies, and A. M. Ortiz. First steps to bistatic focusing. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'05, pages 1072-1076, Seoul, Korea, July 2005.

[83] K. Natroshvili, O. Loffeld, and H. Nies. Focusing of Arbitrary Bistatic SAR Configurations. In Proc. European Conference on Synthetic Aperture Radar, EUSAR'06, Dresden, Germany, May 2006.

[84] O. Loffeld, K. Natroshvili, H. Nies, U. Gebhardt, S. Knedlik, A. Medrano-Ortiz, and A. Amankwah. 2D-Scaled Inverse Fourier Transformation for Bistatic SAR. In Proc. European Conference on Synthetic Aperture Radar, EUSAR'06, Dresden, Germany, May 2006.

[85] P. M. Morse and H. Feshbach. Methods of Theoretical Physics, Part I. McGraw-Hill, New York, New York, 1st edition, 1953.

[86] M. Abramowitz and I. A. Stegun. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover Publications, New York, NY, 1965.

[87] J. H. G. Ender, I. Walterscheid, and A. R. Brenner. New aspects of bistatic SAR: processing and experiments. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'04, volume 3, pages 1758-1762, Anchorage, United States, September 2004.

[88] H. Zhong and X. Liu. A Fourth-Order Imaging Algorithm for Spaceborne Bistatic SAR. In Proc. Int. Geoscience and Remote Sensing Symp., IGARSS'06, Denver, United States, August 2006.

[89] R. K. Raney. Considerations for SAR Image Quantification Unique to Orbital Systems. IEEE Trans. on Geoscience and Remote Sensing, Vol. 29(5):754-760, May 1991.

[90] O. Yilmaz. Seismic Data Processing. SEG Publications, Tulsa, OK, 1987.

[91] H. T. Croft, K. J. Falconer, and R. K. Guy. Unsolved Problems in Geometry. Springer-Verlag, New York, NY, 1991.

[92] Y. L. Neo, I. G. Cumming, and F. H. Wong. Focusing Bistatic SAR Images Using Non-Linear Chirp Scaling. In IEEE/URSI International Conference on Radar Systems, RADAR'2004, Toulouse, France, October 18-22, 2004.

[93] R. W. Hamming. Numerical Methods for Scientists and Engineers. Dover Publications, New York, 2nd edition, 1987.

[94] A. Papoulis. The Fourier Integral and Its Applications. McGraw-Hill College Division, New York, 1962.

[95] K. Eldhuset. A new fourth-order processing algorithm for spaceborne SAR. IEEE Trans. on Aerospace and Electronic Systems, 34(3):824-835, July 1998.

[96] J. Shragge, J. Irving, and B. Artman. Shot profile migration of GPR data. In Tenth International Conference on Ground Penetrating Radar, Delft, The Netherlands, 2004.

[97] B. Biondi and J. Ronen. Dip MoveOut in shot profiles. Geophysics, 52(11):1473-1482, November 1987.

[98] B. Biondi. 3D Seismic Imaging. Society of Exploration Geophysicists Publications, September 2006.

[99] R. S. Wu and N. Toksoz. Diffraction tomography and multisource holography applied to seismic imaging. Geophysics, 52(1):11-25, January 1987.

[100] H. B. Dwight. Table of Integrals and Other Mathematical Data. Macmillan, New York, New York, 4th edition, 1961.
