UBC Theses and Dissertations

UBC Theses Logo

UBC Theses and Dissertations

Quantization noise of signal correlators. Klingler, Rolf Jerg 1972-04-11

You don't seem to have a PDF reader installed, try download the pdf

Item Metadata

Download

Media
831-UBC_1972_A7 K45.pdf [ 5.07MB ]
Metadata
JSON: 831-1.0101675.json
JSON-LD: 831-1.0101675-ld.json
RDF/XML (Pretty): 831-1.0101675-rdf.xml
RDF/JSON: 831-1.0101675-rdf.json
Turtle: 831-1.0101675-turtle.txt
N-Triples: 831-1.0101675-rdf-ntriples.txt
Original Record: 831-1.0101675-source.json
Full Text
831-1.0101675-fulltext.txt
Citation
831-1.0101675.ris

Full Text

QUANTIZATION NOISE OF SIGNAL CORRELATORS by ROLF JERG KLINGLER Diplom., E. T. H. Zurich, Switzerland A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in the Department of Electrical Engineering We accept this thesis as conforming to the required standard THE UNIVERSITY OF BRITISH COLUMBIA May, 1972 In present ing., thi s,„ thesis,, in pa.rt i.al. fu.lf i lmen.t-o-f- the- requ i rements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission. Department of Electrical Engineering The University of British Columbia Vancouver 8, Canada Date Hay 3rd 1972 ABSTRACT In radio-astronomy, spectra of noisy signals are often computed using digital auto-correlation techniques. To simplify the design of the many high-speed multipliers and averagers, coarse quantization is employed, using only a few digital levels. This thesis is a theoretical study of the penalty paid for such coarse quantization in the form of increased output noise. A degradation factor is defined and is calculated for a variety of logic schemes which have been used or proposed. For each scheme, results are given as a function of sampling rate and it is demonstrated that there is often significant improvement in sampling at rates faster than the Nyquist rate. . A computer simulation technique was developed for verifying the computed results, and for extending the. results to complicated schemes where analysis is very difficult. i TABLE OF CONTENTS Page ABSTRACT . i TABLE OF CONTENTS ii LIST OF ILLUSTRATIONS . .. ... iv LIST OF SYMBOLS vACKNOWLEDGEMENT ix I. INTRODUCTION : 1 1.1 Literature Survey.... 1 1.2 Contribution of this Thesis 2 II. THEORETICAL MODEL OF A CORRELATION RECEIVER 4 II. 1 General Assumptions 5 II. 2 Definitions 6 II. 3 Degradation Factor for Small Signals.... 8 III. RESULTS OF PRACTICAL INTEREST 10 .111.1 Degradation Factor vs. Sampling Rate 10 111.1.1 Replacement of Small Gaussian Signals by d-c. 10 111.1.2 Output S/N-Ratio for a Quantized, Sampled Correlator 11 g 111.1.3 (—) for Sampling at Nyquist-Rate 18 g III. 1.4 (—) for Infinite Sampling-Rate 19 S III. 1.5 (JJ) of an Analog Correlator 20 III.1.6 General Formula for the Degradation Factor... 21 III. 2 Degradation Factor of the (2 x 2\ (3 x 3), (2 x 3), (3 x 5), and fa x 4) Level Correlators 23 111.2.1 2x2 Level Correlator 2111.2.2 3x3 Level Correlator 6 111.2.3 2x3 Level Correlator 7 111.2.4 3x5 Level Correlator.... 28 111.2.5 4x4 Level Correlator 30 III. 2.6 Conclusions 32 III.3 3x5 Level Correlator at Unequal Sampling-Rate. 35 111.3.1 Asymmetric Sampling 37 111.3.2 Syjnmetric Sampling 42 III. 3.3 Conclusions ^5 ii Page III. 4 Degradation for Overquantized Correlators 46 III.4.1 Multiplication Using Four Possible "Products"..... 49 III.4. 2 Overquantized 3-Product-Correlator 54 III.4.3 3-Level Quantization After Analog Multiplication.. 59 III. 4.4 Conclusions 65 IV. SIMULATION WITH RANDOM NUMBERS 7 IV. 1 Creation of Correlated Samples 6IV.2 Simulation of the Variance 9 IV. 3 Results of Simulation Runs 72 V. RESULTS OF MORE THEORETICAL INTEREST. 7 V. 
l Optimization of Decision-Levels 7V.l.l Optimum Decision Level for the 3x3 Level Correlator 77 V.l. 2 Optimum Decision Level for the 2><3 Level Correlator 82 V.l.3 4 x 4 Level Correlator 83 V.l.4 3 x 5 Level Correlator 4 V.2 Decomposition into Single Channel Correlators 90 V.2.1 Single Channel Correlation Factors and Decomposi tion Error 93 V.3 Degradation of Strong Signals..... 96 V.3.1 Unquantized Correlator for Strong Signals 97 V.3.2 Application to a 2 x 2 Level Correlator, 99 VI. OVERALL CONCLUSIONS. 104 APPENDIX Al 105 APPENDIX A2. . 8 A2.1 Evaluation of for the d-c Case 109 A2.2 Evaluation of dw for the General Case.... 110 dR REFERENCES 112 iii LIST OF ILLUSTRATIONS Figure Page II. 1 Model of a correlation receiver 4 II. 2.1 Typical variation of expected correlator output with cross-correlation factor 7 III. 1.2.1 Sampled quantized correlator 12 III. 1.2.2 n-level quantizer 13 III.2.1 Degradation factor, D, versus sampling rate, K 34 III. 2.1.1 2-level quantizer 23 III. 2.1.2 Arcsin(x) versus x 4 III.2.2.1 3-level quantizer 26 III.2.4.1 5-level quantizer 8 III. 2.5.1 4-level quantizer 30 III.3.1 Model of a correlator with unequal sampling rates ... 36 III.3.1.1 Signals x(t) and y(t), asymmetrically sampled 37 111.3.2.1 Signals x(t) and y(t), symmetrically sampled 42 111.3.2.2 Autocorrelation function of s(t),(symmetric samples). 45 111.3.2.3 Autocorrelation function of s(t),(asymmetric samples) 45 111.3.3.1 Degradation factor, D, versus oversampling factor, n, for a signal with flat spectrum 47 111.3.3.2 Degradation factor, D, versus oversampling factor, n, for a signal with triangular spectrum 48 111.4.1.1 Degradation factor, D, versus decision level, P, for a 4x4 level correlator with lowest products deleted .. 52 111.4.1.2 Degradation factor, D, versus decision level, P, for 5x5 level correlator with lowest order terms deleted 55 III.4.2.1 Correlator with product merger 54 III. 4. 2.2 Logic scheme of product merger 6 111.4.2.3 Probability chart of product merger 56 iv Figure Page III.4.2.4 Degradation factor, D, versus decision level, P^, for merged product correlator 60 111.4.3.1 Quantization after multiplication 59 111.4.3.2 Degradation factor, D, versus decision level, P, for the 3-level quantizer after multiplication 64 III. 4.4.1 Degradation factor of 3-product correlators, versus decision level, P 66 IV. "1.1 Generation of correlated noise samples 68 IV.2.1 Simulation model of a quantized, sampled correlator . 70 IV. 3.1 Simulated degradation factors versus sampling rate, K 76 V. 1.1 Degradation factor, D, versus decision level, P, for 3x3 level correlator, for various sampling rates .... 78 V.1.1.1 Optimized decision level,-P , versus sampling rate, K, for a 3x3 level co rrelator 81 V.1.2.1 Optimized decision level, Pq , versus sampling rate for a 2x3 level correlator ??. 84 V.1.3.1 Optimized decision level, Pq , versus sampling rate for a 4x4 level correlator ??.. 86 V.1.4.1 3-level quantizer 85 V. 1.4.2 5-level quantizerV. 1.4.3 Optimized decision levels of P, P^ and ~P^, versus sampling rate for a 3x5 level correlator 91 V.2.1.1 Decomposition error, e , versus sampling rate, K, for the 3x3 and the 3x5mlevel correlators 95 V. 3.1 2x2 level correlator with strong signals 97 V.3.2.1 Strong-signal degradation factor versus input signal-to-nois ratio, for a 2x2 level correlator .... 
103 v LIST OF SYMBOLS B Bandwidth D Degradation Factor K Normalized sampling-rate = fg/B M Number of simulated runs MSD Minimum detectable signal N Number of samples P Decision level of quantizer Pr(l) Probability of inputs to product merger lying in sector (1) Pr(w=b) Probability of satisfying equation in parentheses R =R (0) correlation factor of the signals x and y xy R^Cx) Autocorrelation function of x(t) R (T) Cross correlation function of x(t) and y(t) xy S Spectral power density sin(x) Sa(x) Sampling function Sa(x) = —— S/N Signal-to-noise ratio T Averaging Time of Averager Tg Sampling interval 2 W Variance of w as a simulation result a Quantizer output level as a subscript: analog correlator or: asymmetrically sampled d Signal-to-^ignal + noise) ratio e Subscript:correlator with least significant products eliminated 2 u 1 fX ~ 2 erf(x) Error function erf (x) = I ' e du /2~7 -°° erfc(x) Complement error function erfc(x) = 1 - erf(x) f Frequency variable f Sampling frequency s fXai' P^) Quantizer function, defined on page 15, equation(III. 1.2.13) i Subscript: input m Subscript: products merged n,m Subscript: number of quantized levels in the 2 sides of the correlator n Oversampling factor n Subscript "of noise signal" n(t) Noise waveform o Subscript: output opt Subscript: optimum p (x) Probability-density of a random variable x p (x,y) Joint probability-density function of 2 random variables x and y x, y q quantized value of the signal x X r Normalized correlation factor SQ Consta nt d-c signal ss Subscript: strong signal case s(t) Signal waveform t Time variable v(t) Delayed signal y(t-x) w Output of correlator = averaged value of q^ x Subscript: on the x-side of the correlator x(t) Unprocessed waveform on one (x-) side of the correlator y Subscript: on the y-side of the correlator y(t) Unprocessed waveform on one (y-) side of the correlator z Output of the multiplier for unprocessed signals x(t) and y(t) vi J e Error of the simulated degradation factor e (K) Decomposition error nmv ' v K Relative step-size'of quantizer P(T) Normalized autocorrelation function a Standard deviation x Time variable in correlation function, delay time 'JlhCt)} Fourier transform of function h(t) {H(f)} Inverse Fourier Transform of function H(f) viii ACKNOWLEDGEMENT This work was supported by a Swiss-Canadian exchange fellowship, granted at the University of British Columbia. The author is indebted to his supervisor, Professor F. K. Bowers for his very helpful suggestions and contribution of ideas, as well as for his enthusiasm, and his constant availability. I am grateful to •Dr. G. Anderson for reading and correcting the -manuscript. I am very thankful to Miss Linda Morris and Miss Norma Duggan for typing the final draft and to my fiancee, Cecily Palmer, for graphical works and her moral support. ix 1 I. INTRODUCTION In radio-astronomy, as in other fields, it is often necessary to measure the amount of correlation between two signals. The signals usually are adequately modelled as white Gaussian noise, and the amount of correlation between the signals is typically very small. Hence, correlation coefficients must be measured by taking long-time averages using instruments free from drift. Such instruments tend to use analog to digital conversion followed by digital techniques for multiplication and averaging. One particular application of correlators is in the determination of spectra of noisy signals. 
This can be done by first determining the autocorrelation function of the signals, multiplying s(t) by s(t-Tn) for many different values of T . A Fourier transform is then used to .calculate .a .power spectrum. -Similar techniques can be used to find the spectrum of a correlation signal coming from two antennas. In a correlation spectrometer, two A/D converters feed a large number (over 100) of multipliers and averagers, all processing samples at high rates. It thus becomes important to simplify the design of these many repetitive units. Early instruments did this by using 2-level (1-bit) A/D converters"'""'". Of late, more complicated logic schemes have been used and some thought has been given to reducing the penalty paid for coarse quantization. This penalty can be expressed as the degradation of the output S/N ratio, when compared with analog instrumention. I.1 Literature Survey Polarity-coincidence correlators have been studied by F. Bowers"*, Burns and Yao^, Cheng'7, Ekre^, and Yerbury^"*. Burns and Yao mention the 2 fact that the output signal-to-noise ratio does not change when the out put of an analog correlator is oversampled beyond the Nyquist-rate. They claim that the actual shape of the input-filter (which is usually assumed to be rectangular) is important. Cheng found the Degradation factor for a polarity-coincidence correlator at an infinite sampling-rate for arbitrary signals and signal power. Ekre found the output signal-to-noise ratio vs. sampling-rate of a polarity-coincidence correlator. Yerbury investigated the effect of amplitude limiting the analog correlator to increase its stability. His special case of infinite limiting (infinite stability) is identical to the polarity-coincidence correlator. Watts^ gives a mathematical description of a quantizer with an infinite number of levels. Cooper treats the 2 bit correlator sampled at Nyquist-rate and investigates "incomplete multiplication" where the least significant products are neglected. 1.2 Contribution of This Thesis This work is a continuation of studies by F. Bowers"*. He found the Degradation factor of different combinations of quantizers for Nyquist-rate-sampling. Here the Degradation factor of a correlator can be calculated as the product of two single channel Degradation-factors. He mentions that a Gaussian signal can be replaced by a DC-signal for small input signal-to-noise ratios and that the decision levels and stepwidths can be optimized. The present work investigates the dependence of S/N ratio of quantized multL-rlevel correlators on the sampling-rate. General formulae are presented. Numerical evaluations give the actual values of the degradation factor for five different correlators. Procedures are presented which optimize decision levels for four different correlators and variable sampling-rates. Two special problems are also discussed: (a) The degradation factor for a 3*5 level correlator where the two channels are sampled at a different rate (one channel at Nyquist-rate, the other channel oversampled) and; (b) The effect of "overquantization". Here the product of two signals is limited to a small number of levels, but the signal before multiplication can have many more levels. Finally, a simulation program to determine the degradation factor of quantized correlators was developed. This is useful for higher level correlators where theoretical analysis becomes too difficult. The simulation results confirm the findings from theoretical calculations for the five different correlators considered. 4 r II. 
THEORETICAL MODEL OF A CORRELATION RECEIVER Figure II.1 shows the model of a correlation receiver whose properties are to be investigated, noise 0 x(t) = n (t) + s (t) X X signals R = s s x y x-signal processor multiplier q = q q 7 v y ^ </l-()T Z noise y(t) = ny(t) + sy(t) y-signal processor Averager output vw n Fig. II.1 Model of a correlation receiver Each of the waveforms, x(t) and y(t) is made up of two components: a relatively large amount of "noise" and a relatively small amount of "signal". The two noise sources are completely uncorrelated, while the signal sources may have a finite cross-correlation factor, R = sx • sy. The task of the instrument is to determine R as accurately as possible. The two signal processors can take a variety of forms: (a) There could be no processing at all, in which case we have an "analog correlator"; (b) the waveforms could be sampled (usually at rates higher than twice the bandwidth); (c) the waveforms could be quantized into several discrete levels, with subsequent digital handling of the multiplication and averaging, or; (d) the waveforms could be both sampled and quantized. This is the common situation and is also the most general case. All other treatments can be regarded as limiting cases of this one. Often the two signal processors are alike, but this is not necessary, and several instances of unequal processing will be investigated. The processors will normally have a transfer function symmetrical around zero volts, and this is assumed for simplicity in the calculations. The analog correlator (a) above is normally difficult to instrument due to problems of drift. Theoretically it has the best output signal-to-noise ratio, and its performance is the standard by which all other instruments will be judged. II.1 General Assumptions In calculations using the model shown in Figure II.1, the following assumptions are made: • (1) The signals, s and s , and the noise sources, n and n , are x y' x y all limited to the frequency band (0 to B). (In practice, observations are usually made at higher frequencies. The waveforms x(t) and y(t) are then band-limited at that frequency and are translated down in frequency by heterodyning.) (2) The noise sources, n and n , are both white Gaussian signals ' x y with zero mean values. (Usually such noise is a mixture of "antenna noise" from sky background and of "receiver noise". The assumption of uniform spectral power density is true only because in practice the bandwidth observed is small compared with its centre frequency. The assumption of Gaussian statistics is an idealization, but is valid for Johnson noise and for many other noise sources). (3) The two noise sources have the same average power, a , and the 6 two signals have the same average power, a . (This is true for most examples of interest and simplifies the calculations, but the results can easily be generalized to the case of unequal powers.) (4) The noise sources are statistically independent of each other. (This is obviously true when the noise is generated in two different receivers. For some other sources of noise there may be some correlation, but if this is so, then the correlated components are separated out and are treated as part of the signal to be measured.) (5) The signal powers are small compared with the average noise powers. (This assumption simplifies the calculations considerably. It is not valid for all applications of correlators; but the greatest interest in optimizing signal-to-noise ratios arises when the signals are small.) 
(6) The signals, and s^, are ergodic and they are not correlated with the noise sources, n and n . x y (It is not necessary to make any other assumptions about the spectra or the statistics of the signals. It is shown in Appendix A2 that the degradation in signal-to-noise factor is independent of the character of the signals, and can be computed using a very special d-c case where s (t) = s (t) = s = constant.) o x y II.2 Definitions 2 Let a • be the average power (the variance) of the noise sources n and n , and R be the cross-correlation factor of the signals s and x y' b x s . y We then define the input signal-to-noise ratio of a correlator as 7 (S/N)± = R/a* In the d-c case, where R = s , this becomes o (II.2.1) (S/N). - B2yn (II.2.2) If, further, w is the expected value of the output, w, and o"w is the variance of w, then we define the output signal-to-noise ratio as (S/N) = w/o (II.2.3) o w In general we will find that the average correlator output, w, will be some monotonically increasing function of R, as shown in Figure II.2.1. Expected TTZ . Correlator Output A R Cross - Correlah'on facior Fig. II.2.1 Typical variation of expected correlator output with cross-correlation factor A particular value of w, obtained by averaging over one time-interval, will show deviations from this expected value, w, with standard deviation a as shown, w If now R changes by AR, this will result in a change in the expected output by an amount Aw. Whether such a change is detectable by a single observation of w depends on the relative size of o"w and Aw. Quite arbitrarily, we define the change as "detectable" if Aw exceeds c , and not detectable otherwise. This leads to the concept of 8 the minimum detectable signal, MDS. It is defined as that change in R which will result in an expected output change exactly equal to the standard deviation. Hence MDS = a /(^) (11.2.4)" W flK The MDS will depend on the noise power, the bandwidth, the integration time, and (for large signals) on the cross-correlation factor. When all these are held constant, one can compare minimum detectable signals for various correlators. The best instrument, with least MDS, will be the analog correlator. All other instruments pay some penalty in terms of higher values of MDS. This finally leads to the definition of the degradation factor, D, of a particular correlator as D = MDS/(MDS) , (II.2.5) analog The calculation of this degradation factor for a variety of correlators is the subject of this thesis. II.3 Degradation Factor for Small Signals When the signals are small compared with the noise (as is assumed in most of the calculations), the degradation factor can also be expressed in terms of the output signal-to-noise ratios. Using Equation (II.2.4), each MDS can be expressed in terms of a and (4^) • However when the signals are small R will be small, and w dR Figure II.2.1 can be linearized near the origin. We then have MDS - ow/(f) = R/Of) = R/(|)o. (II.3.1) w so that D can be expressed as (S/N). for analog correlator D= TFT^S—7 (II.3.2) (S/N)o for system The degradation factor is therefore a measure of the deteriora tion in output signal-to-noise ratio as a result of the insertion of the signal processors into the correlator. 10 III. RESULTS OF PRACTICAL INTEREST III.1 Degradation Factor vs. Sampling Rate 2 In the following we assume the signal power as to be far below 2 the noise power a . 
Therefore equation (II.3.2) is applicable and we have to find the output signal-to-noise ratio of a correlator in order to compute the correlator degradation factor. We defined the output signal-to-noise ratio in equation (II.2.3). For small signals it can be shown that o"w does not depend on a and can be calculated assuming a =0. The two inputs are then white, s s Q _ independent Gaussian noise sources . in this case w = 0 and a2 = w2 (III.1.1) w The output signal-to-noise ratio then becomes: S [*'a«» 'Gn —- (III. 1.2) Ho -/f [/wZ] s=0 III.1.1 Replacement of Small Gaussian Signals by d-c For small signals (as<<crn) it is shown in Appendix A2, that for the purpose of calculating the degradation factors, the Gaussian zero-mean signals can be replaced by a d-c signal s (t) = s (t) = s = const. (III.1.1.1) x y o Then both input signals x(t) = s + n.(t) (III.1.1.2) o 1 and y(t) = s + n„(t) (III.1.1.3) o I are independent, Gaussian signals with mean value x = y = Sq and with variance a2 = a2 = a2 (III.1.1.4) x y n The signals x and y are independent Gaussian random variables. Replacing the signals by a d-c signal, the probability-density functions of x and y become those of the noise signals n^(t) and ^(t), displaced by an offset equal to the d-c value, s : o - (x-s )2 0 I o PY(X) = —ZT~ e (III.1.1.5) TTO 1 t ^2 2 (y_So) p (y) = —-— e 2°n (III.1.1.6) y /27a Since both channels are statistically independent, the joint probability density, pxy(x,y) is given by the product of the probability-density functions of the two signals x and y: Pxy(x,y) = px(x) * py(y). (III.1.1.7) It can be shown (see Chapter III.1.2 and III.1.6) that the degradation factor D depends only on the input signal-to-noise ratio defined in 2 II.2.2. Therefore the variance of the signals x and y, , can be arbitrarily set equal to 1 in all further calculations. III.1.2 Output S/N-Ratio for a Quantized, Sampled Correlator The block diagram of a quantized, sampled correlatoris , shown in Fig. III.1.2.1. Assuming f as the sampling frequency, T = -|— denotes the sampling-period (III.1.2.1) s and f K = — the normalized sampling-rate (III.1.2.2) 12 noise noise x-signal quantizer Averager w Figure III.1.2.1 Sampled quantized correlator According to the sampling-theorem, the lowest sampling frequency that allows recovery of a signal from its samples is f = 2B. In this case the normalized sampling-rate K becomes equal to 2. This lowest possible sampling-rate is called the Nyquist-rate. The value of the time function x(t) in Fig. III.1.2.1 taken at t = iTg is denoted by x(i). The same is valid for the signals y, q^, qy and q^. The output signal-to-noise ratio of a correlator with small input signals is defined in (ill.1.2.) Therefore we have to find (a) the expected value of the correlator output w when the signal Sq is present and (b) the standard deviation of w when we omit the signal So. (a) Since the two signals x and y are considered to be statistically independent, the expected value of w is equal to the product of the expected value of q and q , i.e. x ny w = q • q ^x ny (III.1.2.3) Fig. III. 1.2.2 shows a symmetric n-level quantizer. Pr(P_^ $ x < P^_j_^) denotes the probability of x being between the two decision levels P^ and P, i+1* •p» -a -p, o, -o. Fig. III.1.2.2 n-level quantizer The expected value of x becomes n-1 n-1 2 2 q = E a.Pr(P. £ x < P._,_.,) - E a. Pr(-P.(1 $ x < -P.) nx . . x 1 l+l . _ x 1+1 x i=l 1=1 where (III.1.2.4) Pr(P. <: x < P.+1) = P.,, P.,, 1, ,2 f 1+1 i r.1+1 " 2^X~Sc> p (x)dx = e dx P. P. 
l l and = erfc(P.-s ) - erfc(P.^ - s ) (III.1.2.5) l o 1+1 o -P. ' 1 Pr(-Pi+1 $ x < -Pi) = | px(x)dx -Pi+1 = erfc(P. + s )-erfc(P.11 + s ) (III.1.2.6) 1 o 1+1 o The complement error-function erfc (x) is defined as r» 12 erfc(x) = . e~ 2 X dx. (III.1.2.7) x Since Sq<<1, we can expand each of the terms in eq. (ill. 1.2.5) and(III.1.2. around 0 using a Taylor series. 2 Therefore P. So -— erfc(P. - s ) Z erfc(P.) + -4- e 2 (III. 1.2.8) and PSo " — erfc(P. + s ) * erfc(P.) - —— e 2 . (III.1.2.9) 1 ° 1 /2T Using (III.1.2B) in fill.1.2.5) and (ill.1.2.9) in (ill. 1. 2.6) we obtain 2 2 PT P.., -I l+l Pr(P. $ x < P..-) Z erfc(P.) - erfc(P. + 1) + — s (e 2 - e 2 ) 11+1 1 1 /27 ° and (III.1.2.10) 2 2 P P _ _ i+1 Pr(-P. • $ x < -P.) «s erfc(P..1) - — s' (e 2 -e 2 ) (III.1.2.II) 1+1 1 1+1 pr— O Using III.1.2.4 we find finally We define 2 2 n-1 . P. . P.,/ ~T~ — _ 1+l A v c~ 2 2 \ c qx V¥ . , ai(e " 6 )-s° i=l 2 2 n-1 PT P (III.1.2.12) _ _i i+1 f (a., P.) = Z a.(e 2 - e 2 ). (III.1.2.13) x i l . , 1 1=1 Since is of the same form as qx> the general form of w is therefore w = - s2f (a., P.)-f (a., P.) (III.1.2.14) iroxiiyii where f (a., P.) and f (a.. P.) describe the actual quantizers, x l l y l l 2 (b) To compute the variance a we can neglect the signal s and as w an approximation consider only two independent noise sources as inputs to the correlator, i.e. x(t) .= n (t) (III.1.2.15) and y(t) = ny(t) (III.1.2.16) Therefore and N N N w2 * (rjr E q (i))2 =4 ^ I q (i)q (j) (III.1.2.17) N i=l Z IT i=l j=l Z Z - N N . N N w ft -4- T. I q (i)q (j) = -± I 1 R (i-j)(III.l,2.18) N i=l j=l z z NZ i=l j=l qz Using the relation N N N-1 Z I f(k + n-m) = Z (N-n)(f(k+n) + f(k-n)) + Nf(k) n=l m=l n=l (III. 1.2.19) and recognizing that R (k) = R (-k), (III.1.2.20) 16 it follows that ,~2 l N"1 i w =|(2E (i)+R (0)) (III.1.2.21) i=l qz qz and 1 N-1 v2 I VJ. , „ w . „ w K7 • -i N q q /N 1=1 ^z 4< /7 1 1 i 2 g = /w = ^(  (1 - i) R (i) + R (0)T (III.1.2.22) The autocorrelation function of qz(t) is R (T) = q-(t) q (t + x) = q (t) q (t + T) q (t) q (t + x) (III.1.2.23) q^ z z xx y y = R (T) R (X) (III.1.2.24x qy since cix(t) and cjy(t) are statistically independent random processes. Note that R^ (IT ) = R (i) = R (i) R (i). (III.1.2.25) q s q q q nz nz nx ^y The autocorrelation function of q (t) is given by R (x) = q (t) q (t + x). (III.1.2.26) q x x x Let the normalized autocorrelation function of x(t) be denoted by RX(T) R (X) y?s <f)} px(x) = p(x) = = —joy = ^TJ— = Sa(2TTBx) (III.1.2.27) where n Sa(-) = S(")(') (III.1.2.28) Let v(t) = x(t + x). (III.1.2.29) Then x and v are jointly normal random variables with the joint probability-density function 1 , 2 . ... 2. (III.1.2.30) — (x -2p(x)xv + v ) P (x,v) = XV 2iT/l-p2(x) 2(l-p2(x)) 17 Considering again the symmetric n-level correlator (Fig. III.1.2.1) we see that q q = a.a. with probability Pr(P.£x<P.,,, P.$v<P.,,)+Pr(-P.,£x<-Pi, xnv 1 j r J 1 1+1' j j+1 l+l ' -P.,1$v<-P.) (III.1.2.31) J+1 J and q q = -a.a. with probability Pr(-P.,.£x<-P., P.$v<P.,.,)+Pr(P.£x<P.,,, xnv l j v „ 1+1 1 J j+l 1 i+1 -P..,*v<-P.) (III.1.2.32) J+1 J ' where • 'i- i 9 n-1 iij - J- > • • • 2 Therefore, n-1 n_l qq =T I a.a.[Pr(P.$x$P..1>P.$v<P..1)+Pr(-P..1$x<-P., -P _$v<-P.] xnv ^ 1 J 1 1+1 j j+1 i+l i J+l J n^l n^l ' (III.1.2.33) 2 2 - Z Z a.a. Pr(-P.,,Sx<-P., P.$v<P.,1) + Pr(P . £x<P . , 1 -P.,,£v<-P.) i=1 j=1 i J i+l i J J+1 i J+1 J Since p (x,v) is symmetric in x and v and p (x,v) = p (-x,-v), XV XV XV Pr(P.Sx<Pi+1, Pj$v<Pj+1) = Pr(-P.+1$x<-P., -Pj+^v<-P.) P P , i+l. 
j+1 ,P ,12 J p (x,v)dxdv = — J [erfc(-* *—)-erfc(-^— ) ]e dx P. P. XV /2TT P. / 2 / 2 i j l /1-p (T) /1-p (T) (III.1.2.34) and •Pr(P.$x<P.- P...$v<-P.) = Pr(-P. ,1^x<-P. , P.^P.,.) l" i+l' j+1 2 1+1 J J J+1 Pi+1 CVi . fPi+l - \ x2 P.+ptfx P +P(T)X p (x,v)dxdv = J e (erfc(-^ )-erfc( * )dx P. • -P,+1 ^ /2lT P. A^fa A-p\r) (III.1.2.35) 18 and finally n-1 n-1 2 2 R (x) = 2e t a a [Pr(P^x<? , P $v<P.+1) - Pr(+P±«x<P r -P.+1*v<-P >], qx i=l j=l J J.J J J .n-1. (III.1.2.36) — ^ 2 ' R (0) = q2 = 2 I a2Pr(P. $ x <P..,) (III.1.2.37) Mx i=l The same calculations hold for the y-channel if x is replaced by y in the above formulas. The output (S/N) - ratio is therefore c - - /N s2f (a.,P.) f (a., P.) (f) =J*- = 1 0X11 y 1 1 (III.1.2.38) o aw N_x . - \ (2 I (1- |)R (i)R (i) + R (0)R (0))2 i=l x Hy Mx Hy where f (a., P.) and f (a., P.) are given by equation (III.1.2.13) x i' I y i' I ° J R (i), R (i) (III.1.2.36 ) for i = IT , q q s' x ny and R (0), R (0) (III.1.2.37 ). q q x ^y The total number of samples during the observation time T is N = f=KBT (HI.1.2.39) s III. 1.3 (j^) for Sampling at Nyquist-Rate For the calculations in this chapter, the Nyquist sampling rate f = 2B is assumed, i.e. K = 2. s The autocorrelation function of the bandlimited noise is: pn(x) = Sa(2irBx). (III.1.3.1) Therefore, 1, if i =0 p (iT ) = Sa(TTi) = { , (III.1.3.2) n 0, if i 4 0 19 i.e. the noise-samples taken = apart are uncorrelated, and since they are Gaussian, they are also statistically independent. It is easy to see that R (iT ) = R (IT ) = 0 for K = 2, (III.1.3.3) q s q s x y since the quantizers are memoryless devices. Replacing N by 2BT in (ill.1.2.38)yields the output signal-to-noise ratio f (a , P )-f (a P ) Q =-s2/2BT X 1 1 2—1 L_ (III.1.3.4) TT o /R (0)-Rfl (0) qx qy S It is remarkable that (jj) c^n be expressed as a product of two functions f (a., P.) f (a., P.) x l i , y x l - and 17 —— /R (0) /R (0) x y which depend only on one channel of-the correlator. Therefore, for sampling at Nyquist-rate, the dual channel correlator can be decomposed into single channel correlators. This result is due to F. Bowers^ and is treated in more detail in Chapter V.2. It can be seen (III.1.2.38) that we improve (jjj) by sampling faster than at Nyquist-rate and that sampling at rate infinity gives us the asymptotic value or the maximum g (jj) °f a quantized correlator. S III.1.4. 0=) for an Infinite Sampling-Rate No c  It can be seen that the limiting case, when K goes to infinity, corresponds to an unsampled but quantized correlator. Thus signal-to-noise ratio is maximized for this limiting case. From III.1.2.13 and III.1.2.14 we see that w does depend on K. 20 Substituting N = KBT in (ill. 1. 2.22) we get ~2 1 KBT_1 i W = KBT (2 \ (1 " KBT)Rq <*> + \ (0) (HI.1.4.1) i=l z z Taking the limit we obtain — KBT^ lim wZ = -£=• lim i (2 E (1 - ^)R (i) + R„ (0)] K*» K-*x> i=l qz qz T 42/ d " |)Rn (t).dt- (III.1.4.2) I o 1 q z ~2 This result can also be found by computing w for an unsampled correlator, where 1 N 1 T — E q (i) is replaced by — ^ q (t)dt. 1=1 Finally f (a P ) f (a P ) lim Q =-a /T -2—1 - y—- i-. (III.1.4.3) v ^N'o 7T S 1 (2 /A(l- |)R (t)dt)2 z S It was found in II. 3.6 that the degradation factor is the ratio of (^) of the quantized correlator to (JJ) of the analog correlator. The value of (jj) for the analog correlator is calculated in the following chapter. III. 1.5 (JJ) of an Analog Correlator We can omit the quantizers in the x- and y-channel or, equivalently, set q = x (III.1.5.1) and qy = y. 
(III.1.5.2) Then w = x.y (III.1.5.3) 21 and 1, N 2 1 oo ~ 2" " s ) x = y = — / xe dx = s (III.1.5.4) / 2IT -«> Therefore w = s2. (III.1.5.5) o The variance of w is given by (ill.1. 2. 21) as where — KBT-1 W KBT (2 E (1 ~ KBT)Rx(i) + Rx(0))' (III.1.5.6) i=l R (i) = p (i) = Sa(^). (III.1.5.7) x n K For long integration time — 00 w2 = ^ I (2 E sf C2^) + 1) = . (III.1.5.8) i=l ™ , • . , . for any K >, 2 The relationship CO I i si(^) = I (III.1.5.9) i=—00 for any K £ 2 is proved in Appendix Al. Therefore, (|) =-^=s2/2BT (HI.1.5.10) N o y=j o /w and this is independent of the sampling-rate K as long as K >, 2 III.1.6 General Formula for the Degradation Factor Let n be the number of quantizer-levels in the x-channel and let m be the number of quantizer-levels in the y-channel. Then D ^ J nxm the degradation factor of an nxm level correlator sampled at rate KxB. D „ will be a function of K. n*m 22 (f) . s2 /2BT D (K) » g o,analog = _g (III.1.6.1) nxm ,b_-. .b^V No,nxm No,nxm Substituting f. 1.2.10 inHf.1.6.1 and letting N = KBT gives: , KBT-1 . \ .2 2: (1- -i-)R (i)R (i) + R (0)R (0)]2 T /2 ^ i=l KBT qx qv qx qv » OO-Ui — * ,„ Z ^ Y (III.1.6.2) nxm 2/ K f (a., P.)-f (a., P.) x x' x v x i At the Nyquist-rate, since R (i) and R (i) = 0 for i =f 0, we have qx qy /R(0)-R(0) W2) = f f (a., P.)f (a.,P.) (HI.1.6.3) xi xyi'x For K-*>°, (ill.1.4. 3) substituted in (ill. 1. 6. l) yields T 1 (B 1 (l- f)R (r)Rn (r)dr}2 I o T q q ' D ., M = TT T-f X w / (III.1.6.4) nxm f(a.,P.)f(a.,P.) xi x y x x For long integration time 1 .2 (B / R (T)R (x)dr)' lim D (°°) = /JT- x y ,TT-r r r\ T-~> nXm " f (a,, P.) f (a., P.) (III.1.6.5) x i' •i' y I i where R (T) and R (T) are functions of only p (x) = Sa(2irBx). Substituting qx qy x = Bx , D (°°) becomes independent of the bandwidth B. The degradation nxm r 6 factor D is independent of the input signals, the bandwidth, and the integration time and is only a function of the quantizers, the multi plication scheme, and the sampling-rate. The definition of D allows us to compare different correlators. In the following chapter the degradation-factor D is calculated for four different quantized correlators. 23 III. 2 Degradation Factor of the 2x2, 3x3, 2x3, 3x5 and 4x4 Level Correlators III.2.1 2x2 Level Correlator Both the x and the y-channel have quantizers shown in figure (III.2.1.1) 1 -1 Fig. III.2.1.1 2-ievel quantizer The quantized signals <lx(t) and qy(t) are given by qx(t) = sign(x(t)) and qy(t) = sign(y(t)), (III.2.1.1) (III.2.1.2) Since x(t) and y(t) are statistically independent signals5, the expected value of w is given by the product of the expected values of q (t) and qy(t). Therefore 1, .2 1, N2 i - -r(x-s ) 1 -^-(x-s ) 1 r°° 2 o , 1 ro 2 " j i o c / \ q = q = J e dx f e dx=l-2erfc(s ) x y v/27 ° /2T-ro (III.2.1.3) Taking the first two terms of a Taylor series expansion of the complement error-function around s = 0 we obtain o a =/ — therefore q = q =v — s x y tr o - 2 2 w = — s IT O and the functions defined in (ill.1.2.13) b ecome then f (a.,P.) = 1 x l' 1 (III.2.1.4) (III.2.1.5) (III.2.1.6) 24 and Ciir.2.1.7) The autocorrelation functions R (T) and R (x) can easily be found with q q x Jy the van Vleck relation (or arcsin law, see pg. 483) to be R (x) = R (T) = — arcsin(p (T)), q q TT n ' nx y where (III.2.1.8) p (T) = Sa(2TTBx) n (III.2.1.9) and R (0) = 1. x Equations (ill. 2.1.6) to (ill. 2.1.10) used in (ill. 1.6. 2) yield (III.2.1.10) /T« n KBT-1 . 
9 • 9 T | (~ I (1- ^T) (arcsin(Sa(^))T + 1) (III.2.1.11) T  1-1 KBT' K At the-Nyquist-rate we have D2x2(2) = 2 (III.2.1.12) For K-*» using (III. 1. 6.5) it follows that D2x2(00) * ( £°° tocsin (Sa(x)))2dx) 2 (III.2.1.13) The integral / (arcsin(Sa(x))) dx cannot be solved analytically. However, an upper bound can be found to be (see Figure III,2.1.2) •7T' A • a J -Orcc'trnxj -l "z Fig. III.2.1.2 arcsin(x) vs. x 25 arcsin (x)| S | |x| (III.2.1.14) |Sa'(T)| = |^-| *£| (III.2.1.15) Letting x = Sa(x), we obtain | arcsin (Sa(x)) | <: y |Sa(x)| £ j |—^-j- (III.2.1.16) Therefore, ? 2 (arcsin(Sa(T)))£ -^TT (III.2.1.17) 4T2 There exists an R such that o 2 i 2 /OO / TT i* CO I TT D (arcsin(Sa(x))) dx ^ V f "4" d^ = (III.2.1.18) R 4 R ^-2 4R The integral R 2 / (arcsin(Sa(x))) dx can be evaluated numerically. The error of the remainder can be bounded with any desired accuracy using (III.2.1.18). R 2 Evaluating / (arcsin(Sa(x)) dx numerically for R = 1000 yields 1.2515 i 0.00013. ^et mnn 9 I = / (arcsin (Sa(x)rdx (III.2.1.19) and 2 e = • I T (III.2.1.20) 4'R'I Then D2x2(°°) vT+T (i + I e) (III.2.1.21) A lower bound is given by the accuracy of the numerical integration of I. Therefore, 1.2515 - 0.00013 $ D2x2(°°) $ /— (1 + |e) (III.2.1.22) / IT and 1.25137 $ D2x2(oo:) $ l-2528 (III.2.1.23) 26 13 -'Tr Yerbury found D , (•») by an approximation as — = 1.28. 1x1 / -6 He states that his value is 2-3% too highj which agrees with our result. D2x2(K) is plotted in Fig. III.2.1. on page 34. It is remarkable that we can achieve up to a 20% lower degradation factor for the 2x2 level correlator by sampling faster than at Nyquist-rate. At 4 times the Nyquist-rate the degradation factor is 18% lower and at twice the Nyquist-rate it is 14% lower. III.2.2 3x3 Level Correlator Both the x- and the y-channel have quantizers of the form illustrated in Fig. III.2.2.1: 9,. °lu. . -i Fig. III.2.2.1 3-level quantizer P is the decision level and should be optimized to yield a minimum degradation factor. The expected value of w is found by letting I>2">00 in (ill. 1.2.13). Then we get and f (a., P.) = f (a., P.) = e xx x y i' x 2 2 -P w = — s e TT O (III.2.2.1) (III.2.2.2) To calculate the standard deviation o" , the autocorrelation functions w R (T) and R (x) can be obtained from (ill. 1.2. 36) and (ill. 1.2. 37) as 2 x R Or) - ft- ( /~e 2(erfc(P;p(T)x )-erfc( P +P(T)x))dx) (III.2.2.3) • 1-p (x) and R (0) = 2 erfc (P) (III.2.2.4) Above results in (III.1.6.2) substituted yield * II foKBv_1n _ _L_M, 2,,, M _^2^,12.P2 (III.2.2.5) D3,3(K) " I/I {2 (1 " KBT)Rq + 4 erfc^P)} V i=l nx For sampling at Nyquist-rate, D^^(2) becomes ,2 D3><3(2) = TreP erfc(P) (III.2.2.6) and for K-*» the degradation factor takes on the limit 2 I D,.(«) = ^eP (B /°°R 2(x)dx)2 (III.2.2.7) 3X J o q nx The decision level P can be optimized as shown in Chapter V.l.l. The optimum value of P, which depends on the sampling-rate, is 0.612 at Nyquist-rate and about 16% higher at infinite sampling rate. III.2.3 2*3 Level Correlator In this case the x-channel has a 2-level quantizer (Fig. III.2.2.1), The functions f (a., P.) have already been found in (III.2.2.1.6) and x 1 l J (tll.2.2.1) respectively for the 2x2 and the 2x3 level correlators. Therefore, p2 w = - s2 e 2 (III.2.3.1) •no The autocorrelation functions R (T) and R (x) have been found in q q x y (III.2.1.8) and (III.2.1.10) for the 2x2 and in (III.2.2.3) and (III.2.2.4) for the 3x3 level correlator. 
28 Using (III.1.6.2) we obtain therefore, ,2 P l _ KBT—1 —-• Do.o(K) = I II e 2 \l 1 (1- (i)R (i) + 2erfc(P)}2 i=l qx qy 2x3 2 / K Sampled at Nyquist-rate, D „(2) becomes ZX j (III.2.3.2) D2x3(2) = | /2erfc(P) e and as K-*» ^2x3^°°^ ta^es on fc^e limit P_ 1 9 co 9 D,XJ») = ^e' (B. / R (T)R (r)dT)Z 2xj o q q x My (III.2.3.3) (III.2.3.4) Again P can be optimized as shown in Chapter V.l.2 and is 0.612 at Nyquist-rate and slightly higher than P for the 3*3 level correlator at higher sampling-rates. III.2.4 3x5 Level Correlator The x-channel has 3 levels; its quantizer is shown in Fig. III.2.2.1. The y-channel has a 5-level quantizer as shown in Fig. III.2.4.1. Fig. III.2.4.1 5-level quantizer The functions f (a., P.) and f (a., P.) are given by (III.1.2.13). xx'i yi'x 6 J The function f (a., P.) was calculated in (III.2.2.1) and 29 y2 Therefore, f (a., P.) = e 2 + Gc - De 2 y l I 2 2 2 P P _ il yl _ y2 w = — s2 e 2 (e 2 + (K - l)e 2 ) IT O (III.2.4.1) (III.2.4.2) The autocorrelation function R (T) is given by (III.2.2.3) and III.2.2.4) and Py2 -\ Pv1"P(Ox e ^erfcQ-^ )- erfc( ^ ) "yi A-P2(T) P +P(T)X > A-P2(T ) P 2+p(x)x + erfc(-^ /l-p2(t) 2 P 2-P(T)X ) - erfc( y ))dx +2K Py2 - x 2/,„J,/Py2"P (t)x e (erfc(-yi 7I-P2(T) P"(T) P +p(r)x ) - erfc( y ))dx X-p2(x ) /•CO X + Pv2-p(x)x (erfc(-^— ) - erfc( y2 /I-P2(T) / P.o+P (x)x 2 JZ2Z ))dx] (x) and Rq (0) = 2(erfc(Pyl) + (K -l)erfc(Py2)) (III.2.4.3) (III.2.4.4) Substituting the above results into equation (III. 1.6.2)., we get KBT-1 D3x5(2) 'I /|(2 z [(l-f|^)Ra a)>4erfc(P).(erfc(P 1)+(K/-^erfc(P ))! IT i=l x qy y y 2 2 2 P P e 2 (e 2 + (K - l)e 2 ) (III.2.4.5) 30 Sampled at Nyquist-rate, D^^(2) becomes D3x5(2) = // (erfc(P) (erfc(Pp + (K -l)erfc(P 2>) o 2 «2 P , P\ _ i_ _yJL y_2 e 2 (e 2 + (K - l)e 2 ) (III.2.4.6) and as K goes to infinity D„ _(K) takes on the limit A B / R (T)R (x)dr o qx qy _ 2 P2 _ 11 111 e 2 (e 2 +(K-l)e 2 ) 'il (III.2.4.7) P, Py^ and P^ can be optimized as shown in Chapter V.l.4. At Nyquist-rate P = 0.612, P . = 0.422 and P „ = 1.266. opt yl,opt y2<Jopt III.2.5 4x4 Level Correlator Fig. III. 2.5.1 shows the quantizer used in the x- and in the y-channel. ' -P -K *,1 Fig. III.2.5.1 4-level quantizer The formulae (III.1.2.13), (III.1.2.36) and (III.1.2.37) are easily applied to quantizers with an even number of levels by letting P^ = 0, since a n-level quantizer (n even) is equal to a n+1 level 31 quantizer with the first decision level set to zero. For a 4 level quantizer we set n=5 and take the .a^'s and P^'s as shown below: n=5 pr° ai = 1 P2=P a2 = K P3=*> Using III.1.2.12 we obtain P2 fv(a-!' Pi> = f„(a-i> P,) = 1 + (K - De 2 (III.2.5.1) and therefore w = - s2 (1 + (K - l)e 2)2 (III.2.5.2) IT O The autocorrelation functions R (T) and R (T) are the same and given by qy (III.1.2. 36) and (III.1.2. 37) .: 2 r P - — R (T) - R (T) = /![/ e 2(erfc(p(T)x -) - erfc(p(T)x ) x y 17 Z 2~rr z 277 vl-p (T ) /1-p (x) + erfc(P^^L.) - erfc(ZzL^»dx p -x 2 c + 2K /* e 2 (erfc(P-p(T)x ) - erf c (?+p (t )x-) )dx /l-p2(x) A-p2(x) 2 o o" P-p(x)x v .P+P(T)X NVJ T . 2 .°° 2, _ ,——)-erfc( . ))dx] p <erfcte^ (III.2.5.3) 32 Note that . erf(«) = l-erfc(«) (III.2.5.4) and R (0) = R (0) = 1 + 2(K2-l)erfc(P) (III.2.5.5) qx qy above results used in (III.1,6.2) yield' KBT-1 | , (2 E (1- =|=-)R Z(i)+(1+2(K -l)erfc(P))V TT I? i=l ^ qx D4x4(K) = lyfc — ^-J2 (IH.2.5.6) (1 + (K -l)e 2 )2 Sampled at Nyquist-rate^ ^ • then becomes DAx4(2) = \ 1+2(K2-Derfc(P) (III.2.5.7) (l+(K-l)e 2 )2 and as K goes to infinity. D. 
, takes on the limit (B r R 2(T)dx)2 o q D4x4(00),= 71 ~ 2 (III.2.5.8) (l+(K-l)e 2)2 The optimized values of P vs. sampling-rate are calculated in Chapter V.1.3. At Nyquist-rate, P ^ = 0.995 and increases about 17.5% at an J opt infinite sampling-rate. III.2.6 Conclusions For long integration times, D is a function of the quantizers and the sampling-rate only. Two functions characterize a quantizer, f (a.,P.) and R (T). XXI Q x Referring to (ill.1.2.12), the normalized, averaged output of one quantizer is 33 VV V " If/! (III.2.6.1) The autocorrelation function of the quantizer output, R (T), qx was obtained in (III.1.2.36) and (III.1.2.37). These two functions are valid for any symmetric quantizer. Knowing f (a., P.) (f (a., P.)) xiiyii and R (T) (R (T)) for the x- and the y-channels, we are able to q q x y compute D^xm(K) for any combination of two quantizers. For sampling at the -Nyquist-rate, the dual-channel degradation factor is simply the product of two single-channel degradation factors. It will be shown in Chapter V.2. that the single-channel degradation factor is the D obtained for a correlator with only one quantizer in one channel, the other channel left unquantized, i.e. D (2) = D (2) D. (2) (III.2.6.2) nxm nxoo ' mxoo ^ D (<») is the limiting value for D as K-*», and is the minimum achievable nxm ° ' degradation factor for a correlator receiver with nxm level quantization. For an unquantized correlator receiver, sampling faster than at the Nyquist-rate does not change the degradation factor. For a quantized correlator, however, we obtain a lower degradation by "over-sampling" (K > 2). Figure III.2.1, is a graph of the degradation factor vs. sampling rate K for the five combinations of quantizers considered in this chapter. As an example, it can be seen from that figure that a 4x4 level correlator sampled at Nyquist-rate has approximately the same degradation factor as a 3x3 level correlator at twice the Nyquist-rate. Decision levels as well as stepwidths, a^, can be optimized to minimize the degradation factor. Choosing the optimum quantization levels for minimization of the degradation results in impractical logic 12 complications . Choosing the quantization levels as integral multiples D decision levels: optimized values at K = 2 taken 2x2 2x3 3*3 3*5 1.2S2 7.755 — • 1.075 ' ^1.03i 1 2 3 4 5 6 7 8 9 W U 12 13 14 15 16 17 Figure III.2 . i Degradation factor,D versus sampling rate K 35 of one another, preferably powers of two, yields a near optimum per formance in terms of the degradation factor. As the decision levels can be set continuously on the analog-digital converters, non-integral size of P. creates no additional difficulties. A method to optimize 1 the P^'s is given in Chapter V.l. In all previous calculations the x- and the y-channel were sampled at the same rate. The next section investigates the degradation in one case where the two channels are sampled at different rates. III.3 3x5 Level Correlator at Unequal Sampling-Rate A hardware construction of a 3x5 level correlator for Nyquist-12 sampling has demonstrated that under certain circumstances the 5-level channel can be sampled at a faster rate with little increase in complexity. This chapter investigates whether there is anything to be gained by "oversampling" the 5-level channel. A general model of the scheme under consideration is given in Fig. III.3. 1. Assume that every sample of the 3-level x-channel is multiplied with n samples of the 5-level y-channel. 
If the sampling rate of the x-channel is the Nyquist rate, 2B, then that of the y-channel will be f = 2n«B (III.3.1) sy If the averaging is done over N x-samples, it will include Nn products. The multiplication of a given x-samples.with y-samples at a variety of time-intervals will attenuate any high-frequency components in the correlated signal. Hence, for the purposes of this calculation, it is no longer legitimate to calculate degradation factors by replacing 36 -B sn (f) x x(t) -/-- f sx x-quantizer (3 levels) w V') -B sn (f) y "y(t) '—f sy y-quantizer (5 levels) Fig. III.3.1 Model of correlator with, unequal sampling rates both signals with a d-c value. We can still simplify the calculations by assuming that sx(t) = s (t) = s(t) (III.3.2) but we need to make some assumption about the spectral characteristics of s(t), by specifying its power density spectrum Sg(f). Signals of interest in radio-astronomy will, in general, not have a flat spectrum, but may contain spectral lines within the bandwidth B. If after translation to baseband, such spectral lines occur near the origin, the attenuation of the signal due to the time-displacement of the samples will be negligible, and we would find a rather small amount of degradation. If, on the other hand, there is much spectral power near 37 the upper end of the band, the degradation will be severe. To obtain typical and realistic values of the degradation factor, we assume that the power density spectrum is not concentrated at either end. In fact, the calculations are carried out assuming white Gaussian noise for s(t). It must be remembered that the results so obtained are merely a representative value of the degradation factor, and that in a practical case the degradation could be better or worse, depending on the nature of the signal to be correlated. III.3.1 Asymmetric Sampling Each x-channel sample is taken to be synchronized with the first of a group of y-channel samples, as shown in Figure III.3.3.1.1. t Fig. III.3.1.1 Signals x(t) and y(t), asymmetrically sampled The output w of the correlator is then found to be ^ n N w = ~rr E E q ((k-l)n+l)q ((k-l)n + i) (III. 3.1.1) i=l k=l x 7 Therefore the expected value of w is given by equation (III.3.1.2). 38 n N w = -^r Z Z q ((k-l)n+l)q ((k-l)n+i) . 1=1 k=l X y - n N . n-1 = Z Z R (1-1) = ± Z Rn (i) (III.3.1.2) nN ...... q q n. qq i=l k=l x i=o x y where R (i) is defined as the expected value of the product of the qxqy two ergodic random processes <lx(t) and q (t + lTg) Rn n (i) = qv(t)qv(t + IT ) (III.3.1.3) qxqy x y s where Tg is the sampling interval T = A (III.3.1.4) s -^nB Since the joint probability-density function of the signals x and y is given by n ~ y- \ (x2-2r(x)xy + y2) Pw(x,y) = r=~T— e l-r (T) (III.3.1.5) xy 27r/l-r (x) and r(x) = P (T)« O2 « 1 (III.3.1.6) where R (T) ps(T) = R70T (III.3.1.7 we find RXy(T) as tne expected value of the signals x(t) and y(t + T) or, equivalently as the cross-correlation-function of x(t) and y(t). Therefore, R (x) = x(t) y(t +T) 00 CO . . L L xy Pxy(x'y) dxdy = r(x) = p (x)a2 (III.3.1.8) s s 39 R (T ) is a linear function of R (T ) if a <<1 and depends only on qq xy s r J x y J the quantizers and R (x ) . xy In equation III.1.2.13 we used a d-c signal with the normalized autocorrelation function p (T) = 1 =const. (III.3.1.9) and therefore R (T) = const. = R (0) = q q . (III.3.U0) q q„ q q ^xny x y MxHy J Since R (x ) is proportional to p (x) it follows that qxqy s R (T) = - f (a., P.) f (a., P.) 
p (x)o2 (III.3.1.11) q q IT xi I y i' l Ksv s Mxny J and for the 3x5 level correlator it follows from III.2.4.2 that 2 2 2 P P _ El _ yi _ y2 R (x) =-a2 e 2 (e 2 + (ic-l.)e 2 )p (x) (III.3.1.12) q q ir s s x y Letting T = iT and denoting iT shortly by "i" we obtain > s s after substituting (III.3.1.12) into (III.3.1.2), 2 2 2 P P w = i-2-G2e2(e 2 +(K-l)e 2 )Z p (i) (III.3.1.13) n n s . _ s i=0 1 ~2 Since D is proportional to -zr and w is independent of s for small signals, it can be seen that the lowest degradation factor is obtained for a d-c signal where Ps(i) = !• The more high-frequency components s(t) contains, the higher the expected degradation factor. If s(t) = cos 2-rrBt, then p (i) = cos — and Hs n 1 N_1 ITT w is proportional to — 1 cos(—) n i=0 n 40 In the limiting case as m<», w is proportional to — ^ cos(x)dx = 0 and TT O therefore the degradation factor goes to infinity. A low degradation factor can be expected for an unequal sampling-rate if the signal s(t) has most of its spectral power at low frequencies. To compute the 2 variance a we assume x(t) ~ n (t) and y(t) ~ n (t), i.e. w x J y —2 -, n n N N •w = -A" Z Z Z E q ((k-l)n+l)q ((X-l)n+l) q ((k-l)n+i)q ((X-l)n+j) n NT 1=1 j=l k=l A=l x x y y 2„2 1=1 j=l k=l A=l Hx Hy n N J J Using the identity N-1 N N Z (N-i) (f(k+i) + f(k-i)) + Nf(k) = Z Z f(k+i-j) 1=1 1=1 j=l (III.3.1.15) we obtain -j . N N n-1 w = —^ Z Z R ((k-X)n){ Z (l-^-)(R ((k-X)n+i) + R ((k-X )n-i) )+R (.(k-l)rijr nN2 k=l A =1 qx 1=1 N qy qy qy (III.3.1.16) After using equation (III.3.1.15) again, we find —x 9 N-1 n-1 v = ~ { Z (1- (nk)Rr] (nk)+ E (1- i)R (0)Rn (i) nN k=l n qx qy 1=1 n qx qy. n-1 N-1 , . + E Z (1--)(1-£)R (nk)(R (nk+i)+R (nk-DH^R (0)R (0)} (III.3.1.17) •Tii n Nq q q 2 q q 1=1 k=l x ny ny x ^y We assume in what follows that the x-channel is sampled at Nyquist-rate. Since R (nk)=R (nk) = 0, ([II. 3.1.16) becomes qx qy w2 = {2 Z (1- i)R (i) + R (0)} R (0) (III.3.1.18) nN . , n q q q i=l y y nx 41 and as for u-*» (equivalent to x-channel sampled at Nyquist-rate, y-channel unsampled), — T w2 = N^f~ Ro (0) o 5 (1" (T)dT (III.3.1.19) s x s y where Tg = is the sampling-interval R (T) is given by (III.2.4.4) qy and R (0) by III.2.2.4. The asymmetric degradation factor Da 3x5 (w) is therefore found n_1 i 2 . (Rq (0)[2 E (1- ^)Rq (i) + Rq (0)J)Z Da (n)=2-/F x 2 2 (III.3.1.20) 12 12 12 - — P - — P _ipz n_i 2 / 2 yl .v 2 y2 " e - (e J + (K-l)e ' ) -E p d) i=0 S For n 1_ 1_ (R (0)B /2B(1-2BT-)R (T)dT)2 Da,v, •(«) = 2- - y = = (III. 3.1. 21) jxi 1212 1 Z — - -=- P „ ,2B e 2 (e 2 yl +(,c-l)e 2 Y2 )B 7 p(x)dT o s Since R (T) and p (x) are functions of 2irBx, Da,vC- (°°) becomes q s Jxj-y independent of B. The stepwidths P, P^ and P 2 c&n be optimized. Since the x-channel is sampled at Nyquist-rate, Da.jxc;(K) can be expressed as the product of an x and a y-part. P for the x-channel J opt (3-level-side) is equal to p0pt for a 3-level quantizer at Nyquist-rate. (=0.612). The values of P , ^ and P „ are slightly higher yljopt y2<)opt 6 thanP i ^ and P 0 ^ at Nyquist-rate. In Figure III.3.3.1 and III.3.3.2 however yljOpt .y2jOpt J 6 the Nyquist-rate values P .. ^ = 0.422 and P „ - 1.266 are used, JH yl opt y2 opt ' (see Chapter V.1.4) since the error of D is small enough to be neglected. 42 III.3.2 Symmetric Sampling The x-samples are taken to be synchronized with the y-channel samples. Further we consider the center of a group of y-channel samples to be coincident with an x-channel sample (see Fig. III.3.2.1). Fig. 
III.3.2.1 Signals x(t) and y(t), symmetrically sampled The output w of the correlator is found to be _1_ E Z q (<2k-^n+1) q ((k _ !)n +i) (HI. 3.2.1) W nN i=l k=l X 1 y Therefore the expected value of w is given by equation (III.3.2.2) n N w = — Z Z q x nN . . . . ^x 2 i=l k=l ((2k-l)n+1)qy((k _ + ±) — I R (i n . - q q 1=1 ^xny n+1, (III.3.2.2) The cross-correlation function of the signals and qy(t), R (T) was obtained in (III.3.1.11) x^y 2 0 Therefore, 2 P P - 1 2 2 - — - - -2?- n n+l w = - - a e 2(e 2 + (ff-l)e 2 ) Z p (i - (III.3.2.3) n TT s . , s 2 i=l 2 The variance is again calculated for noise-inputs only. Therefore, w2 = a2 (III.3.2.4) w 2 1 * » ,2. N ,((2k-l)n+l), ,((2^-l)n+l), w = "V 2. ? 2, I qx( 2 )qx( 2 ) n N i=l j=l k=l ^=1 qy((k-l)n+;)qy((A-l)n+j) 1 n n N N = = 2 2 ESS Z R ((k-A)n) • R ((k-A)n + i - j) (III.3.2.5) i=l j=l k=l X=l qx qy The expression (III.3.2.5) is the same as (III.2.1.13) for asymmetric sampling. Therefore w is given by (III.3.1.17) and the symmetric degrada tion factor Ds„ (n) becomes then 3x5 n-1 -W (0)-[2S (1- i)Rn (i) + R (0)]j2 I q " n q q J Ds (n) = y /n" — 1 1 (III.3.2.6) p2 _ 1 p 2 _ 1 p 2 e--[e"2 yl +(K-l)e"2 y2 ) £ o (i- ^) 1=1 Using (III.3.1.18) and taking the limit of (III.3.2.2) as n-x*> we find for an infinite sampling-rate on the y-channel 1 f 2B 1 2 JR (O)-BJ (1-2BT) R (T) dx) I qx 0 qy I Ds, •(») = £ *• " X (III.3.2.7) 35 1212 12 f — e 2 [e 2 YL + (K-l)e 2 Y2 JB p (x) dx _ 11 S 4B Again R (x) and p (x) are functions of 2TTBX, i.e. DSjx^ (ro) is independent qy s of B. A high degradation factor can be expected where the signal s has 44 most of its spectral power near the bandlimit B. For symmetric sampling Ds „ .(n) is proportional to r1 ^ ,. n+1. ,-1 [n JPS(1-T)] i=l and, since PG(T) = Pg(-T) 1S a monotonically decreasing function between 0 and -~ for any power density spectrum Sg(f), it can be seen that symmetric sampling always results in a lower degradation than asymmetric sampling. The largest degradation factor is obtained where Sg(f) has only one spectral line at f = B, i.e. Pg(T) = cos 2TTBT. As n-*30 Ds .-x^ (°°) becomes proportional to I cos x dx] 1 = — 2 and does not go to infinity as in the asSymmetric case. It is of interest to note, however, that the optimum values of P, P^ and P^ are the same for the as$ymmetric and the symmetric case. Comparison of asymmetric/symmetric sampling for a white signal s(t): Under the assumption of s(t) having a flat power density spectrum over the bandwidth B, its autocorrelation function p (T) becomes s p (x) = Sa(27rB-r) (III.3.2.8) s r2B" For asymmetric sampling J p (x)dx is the integral over CT s the sampling-function from the origin to the first zero-crossing, (see Figure (III.3.2.2). 45 Fig. III.3.2.2 Autocorrelation function of s(t) For symmetric sampling the integral is the shaded area illustrated in Figure (III.3.2.3). fsCT) 26 4B Fig. III.3.2.3 Autocorrelation function of s(t) It is easy to see that 4B _1_ J_ 4B f 2B p (x)dT > . P ( T) dt s Is 0 (III.3.2.9) From (III.3.1.20) and (III.3.2.7) it follows that the symmetric case will result in a lower degradation factor than in the asymmetric case. III.3.3 Conclusions Da3x^(n) and Ds3x^(n) have been computed and plotted in Figures 46 III.3.3.1 and III.3.3.2 respectively, for a flat and for a triangular spectrum which we arbitrarily assumed. As expected, asymmetric sampling results in a higher degradation factor than symmetric sampling for both shapes of Sg(f). 
For the two shapes considered, asymmetric sampling is not a good method to use in determining the autocorrelation function of s(t). Considering F,^. HL3.3,2 » we see that for the triangular shape the degradation factor D is smaller than the D at Nyquist-rate at only one particular rate, namely twice the Nyquist-rate (n=2). Symmetric sampling results in about a 2% smaller degradation factor beyond n=4 for the flat spectrum, and about an 8.5% reduction for the triangular spectrum. Generally speaking, unequal, symmetric sampling is advantageous only if Sg(f) has most of its spectral power at lower frequencies. But it •may be preferable, in this case, to neglect the high frequency components and operate with a smaller bandwidth, considering only the relevant spectral lines of Sg(f). III.4 Degradation for Overquantized Correlators We have, seen in Chapter III. 2 that, at a given sampling-rate, the degradation factor D becomes smaller as the number of levels in the quantizers increases. The lower bound (D=l) corresponds to an infinitely fine quantization or no quantization at all. However, more levels means a greater variety of products, q^, to be handled by the averager, and this results in a greater complexity of the radiometer. In practice, therefore, there is a limit to the number of different products one is willing to handle. Given this limit, the question arises whether it might be advantageous to quantize each signal to many more levels, but to Da,., W Decision levels: optimized values K = 2 taken IS • <»7a 75 + U 9 V / / a = asymmetric case s = symmetric case / / / .7s I '6 ' i 10 Figure III.3.3.1 Degradation factor D versus oversampling factor n for a flat spectrum a Decision levels: optimized, values at K = 2 taken / \ • o — o. 2d 2* a = asymmetric s = symmetric -2s— t ' 75 ' 22 ' 2"T Figure III.3.3.2 Degradation factor D versus oversampling factor n for a triangular spectrum 49 merge some of the resulting "products" to obtain only the desired number of different outputs. for a 4x4 level correlator where the least significant products were neglected. The present chapter is based on Cooper's technique and extends this technique to the 5x5 level correlator. Later, allowing only three different products, q^, we consider, in section III.4.2, a 5x5 level correlator with the number of products reduced to 3 and, in section III.4.3, an analog correlator, where the quantization to three levels is done after the multiplication. This last case is interesting, as it represents a limiting case and tells us what we could gain for a given number of pro ducts by "overquantization". III.4.1 Multiplication Using Four Possible "Products" One case of 'incomplete multiplication" was investigated by Cooper , (a) Cooper's Scheme , 4x4 levels The function f(a., P.) for a 4-level quantizer has been found in III.2.5.1 : Pj £(.a±* P±) = 1 + (K-D e ~ 2 Therefore w for a 4x4 level quantizer is given by or, multiplied out, w = — 2 s o 2 2 (III.4.1.2) The following 6 different products have to be handled by the averager: ±1, ±K and ±K 2 50 2 Therefore the expression (III.4.1.2) has terms in 1, K and K where: 2 2 only the products ±K contribute to the terms in K , only the products ±K contribute to the terms in K and only the products ±1 contribute to the terms in 1. 
Deleting a product pair cancels the corresponding term in equation (III.4.1.2). Omitting the terms in ±1, which are the least significant terms, (III.4.1.2) becomes

\bar{w}_e = \frac{2}{\pi} s_o^2\, \kappa\, e^{-P^2/2} \left[ 2\left(1 - e^{-P^2/2}\right) + \kappa\, e^{-P^2/2} \right]        (III.4.1.3)

where the subscript e refers to a correlator with the least significant products eliminated. The variance of w with the least significant products deleted is denoted (\sigma_w^2)_e and is calculated for sampling at Nyquist rate. Using equation (III.1.2.21), letting N = 2BT and recognizing that R_{q_z}(i) = 0 for i \neq 0, we find

(\sigma_w^2)_e = \frac{[R_{q_z}(0)]_e}{2BT}        (III.4.1.4)

where

[R_{q_z}(0)]_e = [R_{q_x}^2(0)]_e        (III.4.1.5)

R_{q_x}^2(0) is found by multiplying out equation (III.2.5.5),

R_{q_x}^2(0) = \big(1 - 2\,\mathrm{erfc}(P)\big)^2 + 4\kappa^2 \big(1 - 2\,\mathrm{erfc}(P)\big)\mathrm{erfc}(P) + 4\kappa^4\, \mathrm{erfc}^2(P)        (III.4.1.6)

and deleting the least significant term yields

[R_{q_x}^2(0)]_e = 4\kappa^2 \left[ \big(1 - 2\,\mathrm{erfc}(P)\big)\mathrm{erfc}(P) + \kappa^2\, \mathrm{erfc}^2(P) \right]        (III.4.1.7)

Substituting (III.4.1.7) into (III.4.1.4) gives

(\sigma_w^2)_e = \frac{4\kappa^2}{2BT} \left\{ \mathrm{erfc}(P)\big(1 - 2\,\mathrm{erfc}(P)\big) + \kappa^2\, \mathrm{erfc}^2(P) \right\}        (III.4.1.8)

The output signal-to-noise ratio (S/N)_{o,e} is then found, using (II.2.3), as

\left(\frac{S}{N}\right)_{o,e} = \frac{\bar{w}_e}{\sigma_{w,e}} = \frac{\sqrt{2BT}}{\pi}\, s_o^2\, \frac{ e^{-P^2/2}\left[ 2\left(1 - e^{-P^2/2}\right) + \kappa\, e^{-P^2/2} \right] }{ \sqrt{ \mathrm{erfc}(P)\big(1 - 2\,\mathrm{erfc}(P)\big) + \kappa^2\, \mathrm{erfc}^2(P) } }        (III.4.1.9)

(S/N)_o for the analog correlator is s_o^2 \sqrt{2BT}. The degradation factor with the lowest-order term deleted then becomes

[D_{4\times4}(2)]_e = \pi\, \frac{ \left\{ \mathrm{erfc}(P)\big(1 - 2\,\mathrm{erfc}(P)\big) + \kappa^2\, \mathrm{erfc}^2(P) \right\}^{1/2} }{ e^{-P^2/2}\left[ 2\left(1 - e^{-P^2/2}\right) + \kappa\, e^{-P^2/2} \right] }        (III.4.1.10)

Deleting the least significant products leaves only those in ±\kappa and ±\kappa^2, i.e. the averager has to handle only four different products instead of six for the usual 4x4 level correlator. Figure III.4.1.1 shows [D_{4\times4}(2)]_e versus the decision level P for different values of the step width \kappa; for comparison, D_{4\times4}(2) for the regular 4x4 level correlator is plotted in the same figure. [D_{4\times4}(2)]_e at the optimum decision level is about 0.97% higher than D_{4\times4}(2), which is a small price to pay for the advantage of having four instead of six different products q_z entered into the averager.

(b) 5x5 level correlator yielding four "products"

The function f(a_i, P_i) for a 5-level quantizer was found in equation (III.2.4.1) as

f(a_i, P_i) = e^{-P_1^2/2} + (\kappa - 1)\, e^{-P_2^2/2}        (III.4.1.11)

and the expected value of w is given by (III.1.2.14):

\bar{w} = \frac{2}{\pi} s_o^2 \left[ \left(e^{-P_1^2/2} - e^{-P_2^2/2}\right)^2 + 2\kappa\, e^{-P_2^2/2}\left(e^{-P_1^2/2} - e^{-P_2^2/2}\right) + \kappa^2 e^{-P_2^2} \right]        (III.4.1.12)

After deleting the terms in ±1 we obtain

\bar{w}_e = \frac{2}{\pi} s_o^2\, \kappa\, e^{-P_2^2/2} \left[ 2\left(e^{-P_1^2/2} - e^{-P_2^2/2}\right) + \kappa\, e^{-P_2^2/2} \right]        (III.4.1.13)

Assuming sampling at Nyquist rate, the variance (\sigma_w^2)_e with the least significant terms deleted can be obtained using (III.4.1.4) and (III.4.1.5), where R_{q_x}^2(0) is found by multiplying out equation (III.2.4.4):

R_{q_x}^2(0) = 4\left[ \kappa^4\, \mathrm{erfc}^2(P_2) + 2\kappa^2\, \mathrm{erfc}(P_2)\big(\mathrm{erfc}(P_1) - \mathrm{erfc}(P_2)\big) + \big(\mathrm{erfc}(P_1) - \mathrm{erfc}(P_2)\big)^2 \right]        (III.4.1.14)

and deleting the least significant term. The variance is then obtained as

\sigma_{w,e}^2 = \frac{4\kappa^2}{2BT} \left[ \kappa^2\, \mathrm{erfc}^2(P_2) + 2\, \mathrm{erfc}(P_2)\big(\mathrm{erfc}(P_1) - \mathrm{erfc}(P_2)\big) \right]        (III.4.1.15)

Therefore the degradation factor [D_{5\times5}(2)]_e is given, using (II.2.3) and (III.1.6.1), by

[D_{5\times5}(2)]_e = \pi\, \frac{ \left\{ \kappa^2\, \mathrm{erfc}^2(P_2) + 2\, \mathrm{erfc}(P_2)\big(\mathrm{erfc}(P_1) - \mathrm{erfc}(P_2)\big) \right\}^{1/2} }{ e^{-P_2^2/2}\left[ 2\left(e^{-P_1^2/2} - e^{-P_2^2/2}\right) + \kappa\, e^{-P_2^2/2} \right] }        (III.4.1.16)

The 5x5 level correlator with the least significant term deleted also has four different products (±\kappa and ±\kappa^2) to handle. From Figure III.4.1.2 we see that [D_{5\times5}(2)]_e is 2.76% higher than D_{5\times5}(2) at optimized decision levels but, compared with the 4x4 level correlator, still 2.46% lower than [D_{4\times4}(2)]_e, which also has four different values of the quantizer products q_z.
The prices we pay for this lower degradation are an increase in the complexity of both the quantizers and the "multiplier".

III.4.2 Overquantized 3-Product Correlator

Under the general assumptions stated in Chapter II we consider a sampled correlator with two 5-level quantizers, as shown in Figure III.4.2.1. Seven products q_x q_y can emerge from the multiplier: -\kappa^2, -\kappa, -1, 0, +1, +\kappa, +\kappa^2.

Fig. III.4.2.1  Correlator with product-merger

Using the product-merger shown in Figure III.4.2.1, with the logic scheme of Figure III.4.2.2 and the probability chart of Figure III.4.2.3, the signal q_z(i) retains only the three products -1, 0 and +1.

Fig. III.4.2.2  Logic scheme of the product merger (q_x across, q_y down):

           q_x = \kappa    1    0   -1   -\kappa
 q_y = \kappa        1     1    0   -1   -1
 q_y = 1             1     0    0    0   -1
 q_y = 0             0     0    0    0    0
 q_y = -1           -1     0    0    0    1
 q_y = -\kappa      -1    -1    0    1    1

Fig. III.4.2.3  Probability chart of the product merger

The signal q_z equals 0 if the signal pair (x, y) falls in the shaded area of the probability chart, equals +1 if (x, y) lies in area (1) or (3), and equals -1 if (x, y) lies in area (2) or (4).
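The merger logic can be sketched in a few lines (this is an illustration, not the thesis hardware; \kappa = 2 and the decision levels are placeholders): both channels are quantized to the five levels 0, ±1, ±\kappa, and the merged output keeps only the sign of the raw product whenever its magnitude is at least \kappa.

```python
def quantize5(x, P1, P2, kappa):
    """5-level quantizer: 0 for |x| < P1, +-1 for P1 <= |x| < P2, +-kappa for |x| >= P2."""
    a = abs(x)
    if a < P1:
        return 0.0
    mag = 1.0 if a < P2 else kappa
    return mag if x > 0 else -mag

def merge(qx, qy, kappa):
    """Product merger of Fig. III.4.2.2: q_z = sign(qx*qy) if |qx*qy| >= kappa, else 0."""
    z = qx * qy
    if abs(z) >= kappa:
        return 1.0 if z > 0 else -1.0
    return 0.0

# print the merger table for the five possible quantizer outputs (kappa = 2 assumed)
kappa = 2.0
levels = [kappa, 1.0, 0.0, -1.0, -kappa]
for qy in levels:
    print([merge(qx, qy, kappa) for qx in levels])
```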
Therefore the expected value of the output w is

\bar{w} = \bar{q_z} = \Pr(1) + \Pr(3) - \Pr(2) - \Pr(4)        (III.4.2.1)

According to the assumptions made in Chapter III.1.1, the Gaussian input signal s(t) can be replaced by a d-c signal s(t) = s_o, and since the signals x and y are then statistically independent, their joint probability density function is

p_{xy}(x, y) = p_x(x)\, p_y(y)        (III.4.2.2)

where

p_x(x) = \frac{1}{\sqrt{2\pi}}\, e^{-(x - s_o)^2/2}        (III.4.2.3)

and similarly for p_y(y). Using the probability chart of Figure III.4.2.3, the probability of finding (x, y) in area (1) is

\Pr(1) = \int_{P_1}^{\infty}\!\!\int_{P_1}^{\infty} p_x p_y\, dx\, dy - \int_{P_1}^{P_2}\!\!\int_{P_1}^{P_2} p_x p_y\, dx\, dy
       = \big[\mathrm{erfc}(P_1 - s_o)\big]^2 - \big[\mathrm{erfc}(P_1 - s_o) - \mathrm{erfc}(P_2 - s_o)\big]^2
       = \mathrm{erfc}(P_2 - s_o)\left[ 2\,\mathrm{erfc}(P_1 - s_o) - \mathrm{erfc}(P_2 - s_o) \right]        (III.4.2.4)

Similarly,

\Pr(3) = \mathrm{erfc}(P_2 + s_o)\left[ 2\,\mathrm{erfc}(P_1 + s_o) - \mathrm{erfc}(P_2 + s_o) \right]        (III.4.2.5)

For area (2) we have

\Pr(2) = \mathrm{erfc}(P_1 - s_o)\,\mathrm{erfc}(P_2 + s_o) + \mathrm{erfc}(P_2 - s_o)\,\mathrm{erfc}(P_1 + s_o) - \mathrm{erfc}(P_2 + s_o)\,\mathrm{erfc}(P_2 - s_o)        (III.4.2.6)

and exactly the same result is obtained for area (4):

\Pr(4) = \Pr(2)        (III.4.2.7)

Since s_o \ll 1, the error functions can be linearized using the first two terms of their Taylor series around s_o = 0:

\Pr(1) + \Pr(3) = 2\,\mathrm{erfc}(P_2)\big(2\,\mathrm{erfc}(P_1) - \mathrm{erfc}(P_2)\big) + \frac{s_o^2}{\pi}\, e^{-P_2^2/2}\left( 2 e^{-P_1^2/2} - e^{-P_2^2/2} \right)        (III.4.2.8)

\Pr(2) + \Pr(4) = 2\,\mathrm{erfc}(P_2)\big(2\,\mathrm{erfc}(P_1) - \mathrm{erfc}(P_2)\big) - \frac{s_o^2}{\pi}\, e^{-P_2^2/2}\left( 2 e^{-P_1^2/2} - e^{-P_2^2/2} \right)        (III.4.2.9)

Therefore,

\bar{w} = \frac{2}{\pi}\, s_o^2\, e^{-P_2^2/2}\left( 2 e^{-P_1^2/2} - e^{-P_2^2/2} \right)        (III.4.2.10)

The variance \sigma_w^2 is obtained from the general equation (III.1.2.1). Considering sampling at Nyquist rate, the autocorrelation function R_{q_z}(i) differs from zero only at i = 0, and, using N = 2BT,

\sigma_w^2 = \frac{R_{q_z}(0)}{2BT}        (III.4.2.11)

Since q_z takes only the values 0 and ±1,

R_{q_z}(0) = \overline{q_z^2} = \Pr(q_z = 1) + \Pr(q_z = -1)        (III.4.2.12)

Since we consider noise inputs only for the evaluation of \sigma_w^2, the signals x and y are statistically independent, and \Pr(q_z = 1) and \Pr(q_z = -1) can be obtained by setting s_o = 0 in equations (III.4.2.4) to (III.4.2.7). Therefore,

\Pr(q_z = 1) = \big[\Pr(1) + \Pr(3)\big]_{s_o = 0}        (III.4.2.13)

\Pr(q_z = -1) = \big[\Pr(2) + \Pr(4)\big]_{s_o = 0}        (III.4.2.14)

and

\overline{q_z^2} = \big[\Pr(1) + \Pr(2) + \Pr(3) + \Pr(4)\big]_{s_o = 0} = 4\,\mathrm{erfc}(P_2)\big(2\,\mathrm{erfc}(P_1) - \mathrm{erfc}(P_2)\big)        (III.4.2.15)

Using equation (III.4.2.11), the variance is then found as

\sigma_w^2 = \frac{2}{BT}\, \mathrm{erfc}(P_2)\big(2\,\mathrm{erfc}(P_1) - \mathrm{erfc}(P_2)\big)        (III.4.2.16)

We denote by [D_{5\times5}(2)]_3 the degradation factor for this correlator, where the q_z values have been merged down to three different values:

[D_{5\times5}(2)]_3 = \frac{\sqrt{2BT}\, s_o^2\, \sigma_w}{\bar{w}} = \pi\, \frac{ \left\{ \mathrm{erfc}(P_2)\big(2\,\mathrm{erfc}(P_1) - \mathrm{erfc}(P_2)\big) \right\}^{1/2} }{ e^{-P_2^2/2}\left( 2 e^{-P_1^2/2} - e^{-P_2^2/2} \right) }        (III.4.2.17)

Figure III.4.2.4 is a plot of this degradation factor versus P_1 with the decision level P_2 optimized.

Figure III.4.2.4  Degradation factor versus decision level P_1 for the merged-product correlator

III.4.3 3-Level Quantization After Analog Multiplication

We now assume that in the correlator described in Chapter II the signals x(t) and y(t) are sampled but not otherwise processed before multiplication, and that the resulting products z(i) are then quantized into three levels as shown in Figure III.4.3.1.

Fig. III.4.3.1  Quantization after multiplication

This arrangement is not very practical, but it is considered as a limiting case of the process described in the preceding subsections, where we have "overquantization" with subsequent "merging" of products. The expected value of w is easily found as

\bar{w} = \bar{q_z} = \Pr(z \geq P) - \Pr(z \leq -P)        (III.4.3.1)

The signals x and y have normal distributions with unit variance and normalized correlation coefficient

r = s_o^2        (III.4.3.2)

Their joint probability density is

p_{xy}(x, y) = \frac{1}{2\pi\sqrt{1 - s_o^4}}\, \exp\!\left[ -\frac{x^2 - 2 s_o^2 x y + y^2}{2(1 - s_o^4)} \right]        (III.4.3.3)

Then

\Pr(z \geq P) = 2 \int_0^{\infty}\!\!\int_{P/x}^{\infty} p_{xy}(x, y)\, dy\, dx = \sqrt{\frac{2}{\pi}} \int_0^{\infty} e^{-x^2/2}\, \mathrm{erfc}\!\left( \frac{P/x - s_o^2 x}{\sqrt{1 - s_o^4}} \right) dx        (III.4.3.4)

Using the first two terms of a Taylor series around s_o = 0, we obtain

\Pr(z \geq P) \approx \sqrt{\frac{2}{\pi}} \int_0^{\infty} e^{-x^2/2}\, \mathrm{erfc}\!\left(\frac{P}{x}\right) dx + \frac{s_o^2}{\pi} \int_0^{\infty} x\, e^{-\frac{1}{2}\left(x^2 + (P/x)^2\right)} dx        (III.4.3.5)

and

\Pr(z \leq -P) \approx \sqrt{\frac{2}{\pi}} \int_0^{\infty} e^{-x^2/2}\, \mathrm{erfc}\!\left(\frac{P}{x}\right) dx - \frac{s_o^2}{\pi} \int_0^{\infty} x\, e^{-\frac{1}{2}\left(x^2 + (P/x)^2\right)} dx        (III.4.3.6)

The variance \sigma_w^2 is found using (III.1.2.24),

\sigma_w^2 = \frac{1}{KBT} \left[ 2 \sum_{i=1}^{KBT-1} \left(1 - \frac{i}{KBT}\right) R_{q_z}(i) + R_{q_z}(0) \right]        (III.4.3.7)

and the expected value of w is

\bar{w} = \frac{2 s_o^2}{\pi} \int_0^{\infty} x\, e^{-\frac{1}{2}\left(x^2 + (P/x)^2\right)} dx        (III.4.3.8)

The autocorrelation function

R_{q_z}(\tau) = \overline{q_z(t)\, q_z(t+\tau)}        (III.4.3.9)

can be found as follows. Let

u(t) = z(t + \tau)        (III.4.3.10)

Then q_z(t) q_z(t+\tau) = 1 if uz \geq P, = -1 if uz \leq -P, and = 0 if -P < uz < P (III.4.3.11). Therefore,

R_{q_z}(\tau) = \Pr(uz \geq P) - \Pr(uz \leq -P)        (III.4.3.12)

Considering sampling at Nyquist rate, we find that R_{q_z}(i) = 0 for i \neq 0 and

R_{q_z}(0) = \overline{q_z^2}        (III.4.3.13)

The squared signal q_z^2 takes the value 1 with probability \Pr(|xy| \geq P) and 0 with probability \Pr(|xy| < P). Therefore,

\Pr(|xy| \geq P) = \Pr(xy \geq P) + \Pr(xy \leq -P)        (III.4.3.14)

The probabilities \Pr(xy \geq P) and \Pr(xy \leq -P) can be found by putting s_o = 0 in (III.4.3.4) and (III.4.3.6):

\Pr(xy \geq P) = \sqrt{\frac{2}{\pi}} \int_0^{\infty} e^{-x^2/2}\, \mathrm{erfc}\!\left(\frac{P}{x}\right) dx        (III.4.3.15)

\Pr(xy \leq -P) = \sqrt{\frac{2}{\pi}} \int_0^{\infty} e^{-x^2/2}\, \mathrm{erfc}\!\left(\frac{P}{x}\right) dx        (III.4.3.16)

Therefore, from (III.4.3.13),

\overline{q_z^2} = R_{q_z}(0) = 2\sqrt{\frac{2}{\pi}} \int_0^{\infty} e^{-x^2/2}\, \mathrm{erfc}\!\left(\frac{P}{x}\right) dx        (III.4.3.17)

and, using (III.1.2.22) for N = 2BT,

\sigma_w^2 = \frac{1}{2BT}\, 2\sqrt{\frac{2}{\pi}} \int_0^{\infty} e^{-x^2/2}\, \mathrm{erfc}\!\left(\frac{P}{x}\right) dx        (III.4.3.18)

Let [Da(2)]_3 be the degradation factor of the analog correlator with three-level quantization after multiplication of the signals x and y. Then, for sampling at Nyquist rate,

[Da(2)]_3 = \sqrt{2}\left(\frac{\pi}{2}\right)^{3/4} \frac{ \left( \displaystyle\int_0^{\infty} e^{-x^2/2}\, \mathrm{erfc}\!\left(\frac{P}{x}\right) dx \right)^{1/2} }{ \displaystyle\int_0^{\infty} x\, e^{-\frac{1}{2}\left(x^2 + (P/x)^2\right)} dx }        (III.4.3.19)

[Da(2)]_3 is plotted in Figure III.4.3.2 as a function of the decision level P.

Figure III.4.3.2  Degradation factor versus decision level P for a 3-level quantizer after the "multiplier"
III.4.4 Conclusions

Figure III.4.4.1 combines the results on correlators using three values of the products q_z which are to be averaged. It shows degradation factors D plotted against decision level P in three cases: (1) the regular 3x3 level correlator, employing no "overquantizing" of the signals; (2) the 5x5 level correlator with products merged to three values, as described in Section III.4.2; and (3) the correlator studied in Section III.4.3, which uses an infinite number of levels before quantization and reduces the possible number of products to three afterwards. It is seen that some improvement of the degradation factor can result from "overquantization", but that this improvement is limited to about 4%. At optimum decision levels, the degradation for case (3) is 4% less than that of the regular 3x3 level correlator, case (1).

Figure III.4.4.1  Degradation factor versus decision level P for 3-product correlators

IV. SIMULATION WITH RANDOM NUMBERS

The simulation is a software model of the actual correlator operating under the assumptions stated in II.1 and has two main purposes: (a) to verify the theoretical results found in III.2, and (b) to evaluate the degradation factor for complicated quantizers where a theoretical computation of D would be too difficult. The assumption s_o^2 \ll 1 would make it necessary, as for a real radiometer, to correlate over extremely long intervals of time, so a direct simulation of the whole system is impractical. Since the expected value of w is relatively easy to compute and does not depend on the sampling rate, we restrict ourselves to a simulation that determines the variance \sigma_w^2. According to our original theoretical assumptions we again neglect the signal s(t) and consider only the two independent Gaussian white-noise sources as inputs. The noise samples are generated by a subroutine, available on the IBM 360, which produces random numbers with normal distribution. It was found that 2^{14} = 16384 samples are needed to determine \sigma_w^2 with sufficient accuracy.

IV.1 Creation of Correlated Samples

The random numbers generated by the computer program can represent samples of band-limited Gaussian noise taken at the Nyquist rate 2B. For some calculations we need to simulate samples taken at a higher rate KB, where K > 2; such numbers will show some autocorrelation. They were generated as illustrated in Figure IV.1.1. Part (a) of that figure shows the situation to be simulated: a noise source of bandwidth B is to be sampled at rate KB, where K > 2.

Figure IV.1.1  Generation of correlated noise samples

Part (b) shows an equivalent situation, where the noise source is thought of as having initially a wider bandwidth B' = (K/2)B, and the signal is then reduced to bandwidth B by an ideal low-pass filter. Note that the average power of the hypothetical source x'(t) should be (B'/B) times that of the real source x(t). Part (c) is equivalent to (b): here a sampling switch at rate 2B' is inserted, with the samples passed through an ideal filter of bandwidth B', which allows the signal to be recovered completely. Of course, a filter of bandwidth B' preceding one of lesser bandwidth B is redundant and can be removed, as shown in part (d). Finally, part (e) shows how the situation in (d) is simulated. Random numbers with Gaussian distribution are generated to represent samples of the hypothetical signal x'(t) at intervals 1/(KB); they are multiplied by \sqrt{K/2} to give them the required variance. Band-limiting is achieved by taking a fast Fourier transform of a sequence of N = KBT such samples, rejecting components above frequency B, and performing an inverse Fourier transform on the remainder to recover the required time samples.
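The band-limiting step of part (e) can be sketched as follows (a minimal Python/NumPy version, not the original FORTRAN program; the values of K, B and T are arbitrary): Gaussian numbers of variance K/2 are generated at rate KB, Fourier transformed, components above B are set to zero, and the inverse transform returns the correlated time samples.

```python
import numpy as np

def correlated_noise_samples(K=4, B=1.0, T=256.0, seed=0):
    """Samples of Gaussian noise band-limited to B, taken at rate K*B (K >= 2)."""
    rng = np.random.default_rng(seed)
    N = int(K * B * T)                              # number of samples per channel
    x = rng.standard_normal(N) * np.sqrt(K / 2.0)   # hypothetical wide-band source, variance K/2
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(N, d=1.0 / (K * B))     # sample spacing is 1/(K*B)
    X[freqs > B] = 0.0                              # ideal low-pass filter of bandwidth B
    return np.fft.irfft(X, n=N)                     # required time samples at rate K*B

x = correlated_noise_samples()
# quick check: at lag 1 the normalized autocorrelation should be about Sa(2*pi/K)
print(np.mean(x[:-1] * x[1:]) / np.mean(x * x))
```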
IV.2 Simulation of the Variance

Two independent sets of noise samples n(i) are used as inputs on the x- and y-channels, as shown in Figure IV.2.1. The samples within each set are correlated so as to represent samples of band-limited white noise, as discussed in IV.1. The samples for the x-channel are called x(1), ..., x(NM) and for the y-channel y(1), ..., y(NM).

Figure IV.2.1  Simulation model of a quantized, sampled correlator: q_x(i) = a_\ell if P_\ell \leq x(i) < P_{\ell+1}, q_x(i) = -a_\ell if -P_{\ell+1} < x(i) \leq -P_\ell, and similarly for q_y(i)

The expected value of w is zero, since x and y are statistically independent zero-mean samples and the quantizers are symmetric. Therefore the variance is equal to the expected value of w^2 and, denoting by W^2 the average over M values of w^2, the expected value of W^2 is

\overline{W^2} = \overline{w^2} = \sigma_w^2        (IV.2.1)

We are interested in \sigma(W^2), the standard deviation of W^2,

\sigma^2(W^2) = \overline{W^4} - \left(\overline{W^2}\right)^2        (IV.2.2)

in order to estimate the accuracy of \sigma_w^2 as found by the simulation. The products q_z(i) are zero-mean random numbers with a non-Gaussian probability density; in fact, p_{q_z}(z) is a set of Dirac delta functions symmetric about q_z = 0. However, the central limit theorem states that the probability density of the sum of a large number of random variables with arbitrary probability densities tends to become Gaussian in the central region. Applying this to our case, we see that w(k) has a normal probability density in the central region, assuming that N is large enough. The expected value of W^4 is given by

\overline{W^4} = \frac{1}{M^2} \sum_{i=1}^{M} \sum_{j=1}^{M} \overline{w^2(i)\, w^2(j)}        (IV.2.3)

For i = j we get

\overline{w^2(i)\, w^2(j)} = \overline{w^4} = 3\sigma_w^4   (see [1], p. 148)        (IV.2.4)

and for i \neq j, where the values of w^2 are assumed independent,

\overline{w^2(i)\, w^2(j)} = \overline{w^2(i)}\; \overline{w^2(j)} = \left(\overline{w^2}\right)^2        (IV.2.5)

Therefore,

\overline{W^4} = \frac{3}{M}\sigma_w^4 + \frac{M-1}{M}\sigma_w^4        (IV.2.6)

\sigma^2(W^2) = \frac{3}{M}\sigma_w^4 + \frac{M-1}{M}\sigma_w^4 - \sigma_w^4 = \frac{2}{M}\sigma_w^4        (IV.2.7)

Therefore the standard deviation of W^2 is obtained as

\sigma(W^2) = \sqrt{\frac{2}{M}}\, \sigma_w^2        (IV.2.8)
IV.3 Results of Simulation Runs

In Chapter III it was seen that the degradation factor D is proportional to the standard deviation \sigma_w of the output of the quantized correlator. Specifically, it follows from equations (III.1.2), (III.1.2.14) and (III.1.6.1) that

D = \frac{\pi}{2}\, \frac{\sqrt{2BT}}{ f_x(a_i, P_i)\, f_y(a_i, P_i) }\, \sigma_w        (IV.3.1)

A series of M simulation runs, each using N = KBT samples in each channel, gives a result W^2 whose expected value \overline{W^2} equals \sigma_w^2. Hence we can rewrite equation (IV.3.1) in terms of N and W^2 as

D^2 = \frac{\pi^2 N}{2K}\, \left\{ f_x(a_i, P_i)\, f_y(a_i, P_i) \right\}^{-2} \overline{W^2}        (IV.3.2)

For any given correlator, all terms on the right-hand side except W^2 can easily be calculated. The quantity W^2 is found from the M simulation runs with an uncertainty given by equation (IV.2.8) as

\frac{\sigma(W^2)}{\overline{W^2}} = \sqrt{\frac{2}{M}}        (IV.3.3)

Hence the simulation runs give a value of D^2 with the same relative uncertainty,

\frac{\sigma(D^2)}{D^2} = \sqrt{\frac{2}{M}}        (IV.3.4)

or a value of D with uncertainty

\varepsilon = \frac{\sigma(D)}{D} = \frac{1}{2}\sqrt{\frac{2}{M}} = \frac{1}{\sqrt{2M}}        (IV.3.5)

To get an accurate value of D therefore requires a very large number M of simulation runs. Each run also requires a large number N of samples if it is to represent a practical application where BT \gg 1; it was found that N = 180 is a reasonable number for this purpose. The combination requires NM samples in each channel, and this can soon produce exorbitant computing times. For the actual computer runs, 2^{14} = 16384 samples were used in each channel. If these are "band-limited" samples, it takes about 32 seconds of CPU time on the IBM 360 to generate the 2 x 2^{14} samples and a further 8 seconds to execute the simulation program, for a total of 40 seconds. If the samples are then regarded as batches of N = 180 samples each, the number of batches is

M = \frac{16384}{180} \approx 91        (IV.3.6)

resulting in an uncertainty in the value of D, from equation (IV.3.5), of

\varepsilon = \frac{1}{\sqrt{2 \cdot 91}} = 7.4\%        (IV.3.7)

This is too large an error for practical purposes. Reducing it by a factor of four requires an increase by a factor of 16 in the number of samples and hence in the computing time, which would rise to about 640 seconds.

Another, more efficient method, which has proved very useful, is described here. The samples x_i and x_j are correlated, with normalized autocorrelation coefficient \rho(i-j) given by

\overline{x_i x_j} = \overline{x^2}\, \rho(i-j) = \overline{x^2}\, \mathrm{Sa}\!\left(\frac{2\pi(i-j)}{K}\right)        (IV.3.8)

Therefore

|\rho(i-j)| \leq \frac{K}{2\pi |i-j|}        (IV.3.9)

Hence, taking two samples x_i and x_j far enough apart, their correlation coefficient is so small that the samples can be considered uncorrelated (and, being Gaussian, independent). We now rotate the samples on one channel by k steps and repeat the process, obtaining again 91 independent results for w^2. This rotation can be repeated 16384/k times, giving 91(16384/k) results w^2 which can be considered independent, assuming k is large enough. For our program k = 1000 was used, which results in a correlation coefficient for x_i x_{i+k} of

|\rho(k)| \leq \frac{K}{2\pi \cdot 1000}        (IV.3.10)

The highest sampling rate used was K = 16, and even in this case the correlation coefficient is less than 2.5 x 10^{-3}, which is small enough to be neglected. With this method we obtain M = 1456 independent values of w^2, and from equation (IV.3.5),

\varepsilon = \frac{1}{\sqrt{2 \cdot 1456}} = 1.84\%        (IV.3.11)

Therefore the error in D is 1.84%, which is more tolerable. This second method gives the same error \varepsilon as taking 16 times more samples, but only the execution part of the program runs longer: production of the 2 x 2^{14} band-limited samples takes 32 seconds and execution of the simulation program 120 seconds, for a total of 152 seconds of CPU time, amounting to about $23 at a charge of $560/hr. This second scheme is therefore roughly four times less expensive than taking 16 times more samples.
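The batching-plus-rotation bookkeeping can be sketched as follows (an illustrative Python version, not the thesis program; the 3-level quantizer, the small number of rotations and all parameter values are placeholders, and the relative uncertainty is the \varepsilon = 1/\sqrt{2M} of equation (IV.3.5)). It would be fed with the correlated samples produced by the generator sketched in Section IV.1.

```python
import numpy as np

def quantize3(x, P=0.612):
    """3-level quantizer: +-1 outside +-P, 0 inside."""
    return np.where(np.abs(x) < P, 0.0, np.sign(x))

def estimate_variance(x, y, N=180, k=1000, rotations=4):
    """Average of w^2 over batches of N products, repeated with the y-channel
    rotated by multiples of k samples to obtain more (nearly) independent batches."""
    qx = quantize3(x)
    w2 = []
    for r in range(rotations):
        qz = qx * quantize3(np.roll(y, r * k))
        nbatch = len(qz) // N
        w = qz[:nbatch * N].reshape(nbatch, N).mean(axis=1)   # one w per batch
        w2.extend(w ** 2)
    M = len(w2)
    eps = 1.0 / np.sqrt(2.0 * M)          # relative uncertainty of D, eq. (IV.3.5)
    return np.mean(w2), M, eps

rng = np.random.default_rng(1)
x = rng.standard_normal(16384)            # Nyquist-rate (uncorrelated) samples for illustration
y = rng.standard_normal(16384)
W2, M, eps = estimate_variance(x, y)
print(W2, M, eps)                         # W2 estimates sigma_w^2; eps is its effect on D
```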
Figure IV.3.1 shows a comparison of calculated and simulated results for the degradation factor D at various normalized sampling rates K. The solid lines are the calculated results for five different quantized correlators. The shaded areas show the domain of D within 1.84% of the calculated values for the 2x2, 2x3 and 3x3 level correlators; the error domains for the other two correlators are not drawn, to avoid confusion. The dots are the results obtained from simulation. It is seen that the simulation runs verify the calculated results to the anticipated accuracy. The same simulation technique could therefore be relied upon to provide approximate degradation factors for other schemes where calculations are impractical.

Figure IV.3.1  Simulated degradation factor versus sampling rate K (decision levels: optimized values at rate K = 2)

V. RESULTS OF MORE THEORETICAL INTEREST

V.1 Optimization of Decision Levels

In Chapter III.1 we stated that the decision levels P_i of a quantized correlator can be optimized. If, in a 3x3 level correlator, P is allowed to go to zero, we get a 2x2 level correlator, with a higher degradation factor than for 3x3 levels at any K, as shown in Figure III.2.1. As P goes to infinity, q_x and q_y tend to zero for all values of x and y, and the correlator gives no information about the signal s(t); therefore D goes to infinity. It follows that an optimum P_{opt} must exist. Plotting D versus P as in Figure V.1.1, we see that in every case D(P) has only one P_{opt}. The 2x3, 3x3 and 4x4 level correlators have only one parameter P to optimize. For higher-level correlators several parameters P_i have to be optimized simultaneously, which leads to a nonlinear optimization problem in several parameters. One method of solving this problem was found for the 3x5 level correlator and is discussed in Section V.1.4.

Figure V.1.1  Degradation factor versus decision level P for a 3x3 level correlator with sampling rate as a parameter

V.1.1 Optimum Decision Level for the 3x3 Level Correlator

The degradation factor D_{3\times3}(P) is a convex function of P on 0 \leq P < \infty, or equivalently,

\frac{d^2 D}{dP^2} > 0        (V.1.1.1)

Therefore the minimum degradation factor occurs for an optimum value of P, denoted P_{opt}, where

\left.\frac{dD}{dP}\right|_{P_{opt}} = 0        (V.1.1.2)

Using (III.1.2) in (III.1.6.1) we find

D = \sqrt{2BT}\, s_o^2\, \frac{\sigma_w}{\bar{w}}        (V.1.1.3)

where only \bar{w} and \sigma_w are functions of P. We define the characteristic function f(P) as

f(P) = \frac{1}{\sigma_w^2}\frac{d\sigma_w^2}{dP} - \frac{2}{\bar{w}}\frac{d\bar{w}}{dP}        (V.1.1.4)

where

f(P_{opt}) = 0        (V.1.1.5)

or, equivalently,

\left.\frac{1}{\sigma_w^2}\frac{d\sigma_w^2}{dP}\right|_{P_{opt}} = \left.\frac{2}{\bar{w}}\frac{d\bar{w}}{dP}\right|_{P_{opt}}        (V.1.1.6)

The expected value \bar{w} of a 3x3 level correlator was found in (III.2.2.2); therefore

\frac{1}{\bar{w}}\frac{d\bar{w}}{dP} = -2P        (V.1.1.7)

The variance \sigma_w^2 was found in (III.1.2.21), where the autocorrelation function

R_{q_z}(i) = R_{q_x}^2(i)        (V.1.1.8)

since both quantizers are equal. R_{q_x}(i) was found in (III.2.2.3) and (III.2.2.4). Taking the derivative of \sigma_w^2,

\frac{d\sigma_w^2}{dP} = \frac{2}{N}\left[ 2 \sum_{i=1}^{N-1}\left(1 - \frac{i}{N}\right) R_{q_x}(i)\, \frac{dR_{q_x}(i)}{dP} + R_{q_x}(0)\, \frac{dR_{q_x}(0)}{dP} \right]        (V.1.1.9)

and using Leibniz's rule,

\frac{d}{dP}\int_P^{\infty} f(P, x)\, dx = \int_P^{\infty} \frac{\partial f(P, x)}{\partial P}\, dx - f(P, P)        (V.1.1.10)

to calculate the derivative dR_{q_x}(i)/dP for i \neq 0 (equation (V.1.1.11)), together with

\frac{dR_{q_x}(0)}{dP} = -\sqrt{\frac{2}{\pi}}\, e^{-P^2/2}        (V.1.1.12)

The characteristic function f(P), defined in (V.1.1.4), is then found as

f(P) = \frac{1}{\sigma_w^2}\frac{d\sigma_w^2}{dP} + 4P        (V.1.1.13)

Substituting (III.1.2.21), (V.1.1.3) and (V.1.1.9) into (V.1.1.13) yields

f(P) = \frac{ 2\left[ 2\sum_{i=1}^{N-1}\left(1 - \frac{i}{N}\right) R_{q_x}(i)\,\frac{dR_{q_x}(i)}{dP} + R_{q_x}(0)\,\frac{dR_{q_x}(0)}{dP} \right] }{ 2\sum_{i=1}^{N-1}\left(1 - \frac{i}{N}\right) R_{q_x}^2(i) + R_{q_x}^2(0) } + 4P        (V.1.1.14)

This nonlinear equation was solved numerically using the subroutine RTWI in the SSP library (IBM 360). Figure V.1.1.1 shows P_{opt} versus the sampling rate K for the 3x3 level correlator. P_{opt} increases by about 16% as K grows from 2 to 12, with the fastest increase occurring between K = 3 and K = 5. The variation of the optimum decision level with sampling rate is an interesting and unexpected phenomenon, and there seems to be no obvious qualitative explanation for it.
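For the Nyquist-rate case the quantities entering f(P) reduce to R_{q_x}(0) = 2 erfc(P) and \bar{w} \propto e^{-P^2}, so D_{3\times3}(2) \propto erfc(P) e^{P^2} and P_{opt} can be found with a few lines of numerical minimization. The sketch below is not from the thesis (which used the SSP routine RTWI); it uses a golden-section search instead, and "erfc" is the Gaussian tail probability as in the thesis. It reproduces P_{opt} \approx 0.612 and D_{3\times3}(2) \approx 1.235.

```python
import math

def Q(p):
    """Gaussian tail probability; this is what the thesis denotes erfc(P)."""
    return 0.5 * math.erfc(p / math.sqrt(2.0))

def D_3x3_nyquist(P):
    """Degradation factor of the 3x3 level correlator at Nyquist rate."""
    return math.pi * Q(P) * math.exp(P * P)

def minimize(f, a, b, tol=1e-8):
    """Golden-section search for the single minimum of f on [a, b]."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return 0.5 * (a + b)

P_opt = minimize(D_3x3_nyquist, 0.0, 2.0)
print(P_opt, D_3x3_nyquist(P_opt))   # approximately 0.612 and 1.235
```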
V.1.2 Optimum Decision Level for the 2x3 Level Correlator

The expected value \bar{w} was found in (III.2.3.1); therefore

\frac{1}{\bar{w}}\frac{d\bar{w}}{dP} = -P        (V.1.2.1)

The variance \sigma_w^2 was found in (III.1.2.21), where

R_{q_z}(i) = R_{q_x}(i)\, R_{q_y}(i)        (V.1.2.2)

R_{q_x}(i) is the autocorrelation function of the 3-level quantized signal, given in (III.2.2.3) and (III.2.2.4) for \tau = i T_s; its derivative with respect to P was found in (V.1.1.11) and (V.1.1.12). The autocorrelation function of the 2-level quantized signal, R_{q_y}(i), was found in (III.2.1.8) to (III.2.1.10). Since only R_{q_x}(i) is a function of P, the derivative of \sigma_w^2 with respect to P becomes

\frac{d\sigma_w^2}{dP} = \frac{1}{N}\left[ 2\sum_{i=1}^{N-1}\left(1 - \frac{i}{N}\right) R_{q_y}(i)\,\frac{dR_{q_x}(i)}{dP} + R_{q_y}(0)\,\frac{dR_{q_x}(0)}{dP} \right]        (V.1.2.3)

The function f(P) was defined in (V.1.1.4). Therefore,

f(P) = \frac{1}{\sigma_w^2}\frac{d\sigma_w^2}{dP} + 2P        (V.1.2.4)

and using (III.1.2.21), (V.1.2.2) and (V.1.2.3) in (V.1.2.4) we obtain

f(P) = \frac{ 2\sum_{i=1}^{N-1}\left(1 - \frac{i}{N}\right) R_{q_y}(i)\,\frac{dR_{q_x}(i)}{dP} + R_{q_y}(0)\,\frac{dR_{q_x}(0)}{dP} }{ 2\sum_{i=1}^{N-1}\left(1 - \frac{i}{N}\right) R_{q_x}(i) R_{q_y}(i) + R_{q_x}(0) R_{q_y}(0) } + 2P        (V.1.2.5)

Again the solution of f(P) = 0 was found for different values of the sampling rate K and plotted in Figure V.1.2.1.

V.1.3 4x4 Level Correlator

The expected value \bar{w} was found in (III.2.5.2). Therefore,

\frac{1}{\bar{w}}\frac{d\bar{w}}{dP} = \frac{-2P(\kappa - 1)}{ e^{P^2/2} + (\kappa - 1) }        (V.1.3.1)

Using (III.1.2.21) with R_{q_z}(i) = R_{q_x}^2(i) yields

\frac{d\sigma_w^2}{dP} = \frac{2}{N}\left[ 2\sum_{i=1}^{N-1}\left(1 - \frac{i}{N}\right) R_{q_x}(i)\,\frac{dR_{q_x}(i)}{dP} + R_{q_x}(0)\,\frac{dR_{q_x}(0)}{dP} \right]        (V.1.3.2)

where the autocorrelation function R_{q_x}(i) is given by (III.2.5.3) and (III.2.5.5) for \tau = i T_s. Taking derivatives gives dR_{q_x}(i)/dP in closed form in terms of erfc(.) and \rho(i) (equation (V.1.3.3)), together with

\frac{dR_{q_x}(0)}{dP} = -(\kappa^2 - 1)\sqrt{\frac{2}{\pi}}\, e^{-P^2/2}        (V.1.3.4)

The function f(P), defined in (V.1.1.4), then becomes

f(P) = \frac{ 2\left[ 2\sum_{i=1}^{N-1}\left(1 - \frac{i}{N}\right) R_{q_x}(i)\,\frac{dR_{q_x}(i)}{dP} + R_{q_x}(0)\,\frac{dR_{q_x}(0)}{dP} \right] }{ 2\sum_{i=1}^{N-1}\left(1 - \frac{i}{N}\right) R_{q_x}^2(i) + R_{q_x}^2(0) } + \frac{4P(\kappa - 1)}{ e^{P^2/2} + (\kappa - 1) }        (V.1.3.5)

The equation f(P) = 0 was solved and P_{opt} plotted versus the sampling rate K in Figure V.1.3.1.

V.1.4 3x5 Level Correlator

The quantizers for the x- and y-channels are shown in Figures V.1.4.1 and V.1.4.2.

Figure V.1.4.1  3-level quantizer
Figure V.1.4.2  5-level quantizer

In this case the decision levels P, P_1 and P_2 have to be optimized simultaneously. It can be shown that D(P, P_1, P_2) is analytic for 0 \leq P < \infty, 0 \leq P_1 < \infty, P_1 \leq P_2 < \infty, and that

\frac{\partial^2 D}{\partial P^2} > 0, \quad \frac{\partial^2 D}{\partial P_1^2} > 0, \quad \frac{\partial^2 D}{\partial P_2^2} > 0        (V.1.4.1)

Defining \vec{P} as an array of the variables P, P_1 and P_2, it can be shown that

\vec{P}_{opt} = (P_{opt}, P_{1\,opt}, P_{2\,opt})        (V.1.4.2)

exists such that

\left.\frac{\partial D}{\partial P}\right|_{\vec{P}_{opt}} = 0, \quad \left.\frac{\partial D}{\partial P_1}\right|_{\vec{P}_{opt}} = 0, \quad \left.\frac{\partial D}{\partial P_2}\right|_{\vec{P}_{opt}} = 0        (V.1.4.3)

From (V.1.4.1) it follows that there is only one minimum, at \vec{P}_{opt}. We define the partial derivatives of D with respect to P, P_1 and P_2 as

f_1(\vec{P}) = \frac{\partial D}{\partial P}        (V.1.4.4)

f_2(\vec{P}) = \frac{\partial D}{\partial P_1}        (V.1.4.5)

f_3(\vec{P}) = \frac{\partial D}{\partial P_2}        (V.1.4.6)

where \vec{P}_{opt} has to be found such that

f_1(\vec{P}_{opt}) = f_2(\vec{P}_{opt}) = f_3(\vec{P}_{opt}) = 0        (V.1.4.7)

We consider first the special case where we sample at Nyquist rate. This case can easily be calculated, since \vec{P}_{opt} does not change much for higher sampling rates. The solution \vec{P}_0 at Nyquist rate can then be used as the initial vector for an iterative method, which is discussed later in this chapter.
The degradation factor at Nyquist rate (equation III.1.6.3) can be expressed as a product of two functions which depend only on the x- and y-channels respectively:

D_{3\times5}(2) = \frac{\pi}{2}\; \frac{\sqrt{R_{q_x}(0)}}{f_x(a_i, P_i)} \cdot \frac{\sqrt{R_{q_y}(0)}}{f_y(a_i, P_i)}        (V.1.4.8)

As R_{q_x}(0) is not a function of P_1 and P_2, and R_{q_y}(0) does not depend on P, we see that

f_1 = f_1(P)        (V.1.4.9)

f_2 = f_2(P_1, P_2)        (V.1.4.10)

f_3 = f_3(P_1, P_2)        (V.1.4.11)

and therefore that the two channels can be optimized separately. The optimization with respect to P has already been done for the 3x3 level correlator in Section V.1.1, where P_{opt} was found to be P_{opt} = 0.612.

Evaluating (V.1.4.5) and (V.1.4.6) we obtain

f_2(P_1, P_2) = P_1 + \frac{ \dfrac{dR_{q_y}(0)}{dP_1}\, e^{P_1^2/2}\, f_y(a_i, P_i) }{ 2 R_{q_y}(0) }        (V.1.4.12)

f_3(P_1, P_2) = P_2 + \frac{ \dfrac{dR_{q_y}(0)}{dP_2}\, e^{P_2^2/2}\, f_y(a_i, P_i) }{ 2(\kappa - 1) R_{q_y}(0) }        (V.1.4.13)

From (III.2.4.4) we obtain

R_{q_y}(0) = 2\left[ \mathrm{erfc}(P_1) + (\kappa^2 - 1)\, \mathrm{erfc}(P_2) \right]        (V.1.4.14)

Therefore,

\frac{dR_{q_y}(0)}{dP_1} = -\sqrt{\frac{2}{\pi}}\, e^{-P_1^2/2}        (V.1.4.15)

and

\frac{dR_{q_y}(0)}{dP_2} = -(\kappa^2 - 1)\sqrt{\frac{2}{\pi}}\, e^{-P_2^2/2}        (V.1.4.16)

A necessary condition for f_2(P_1, P_2) and f_3(P_1, P_2) to vanish simultaneously is found by dividing (V.1.4.12) by (V.1.4.13):

\frac{P_1}{P_2} = (\kappa - 1)\, e^{-(P_2^2 - P_1^2)/2}\; \frac{dR_{q_y}(0)/dP_1}{dR_{q_y}(0)/dP_2}        (V.1.4.17)

Using (V.1.4.15) and (V.1.4.16) in (V.1.4.17), we obtain

P_1 = \frac{P_2}{\kappa + 1}        (V.1.4.18)

This simple relationship allows us to reduce the joint equations f_2 = 0 and f_3 = 0 to one equation in one variable only. Solving the remaining equation, we finally obtain P_{1\,opt} = 0.422 and P_{2\,opt} = 1.266.

In the general case, where K > 2, f_1, f_2 and f_3 are functions of all three variables P, P_1 and P_2, and the three equations f_1 = 0, f_2 = 0 and f_3 = 0 have to be solved simultaneously. However, it has been found that \vec{P}_{opt} changes by only a few per cent for K > 2, so \vec{P}_{opt} at Nyquist rate can be used as a first approximation for an iterative solution of f_1 = f_2 = f_3 = 0. A gradient method has proved useful for finding \vec{P}_{opt}(K). Given the degradation factor as a function of P, P_1 and P_2, we have to find \vec{P}_{opt} = (P_{opt}, P_{1\,opt}, P_{2\,opt}) such that the partial derivatives defined in (V.1.4.4) to (V.1.4.6) vanish at \vec{P} = \vec{P}_{opt}. Defining \vec{P}_0 as the optimum decision vector at Nyquist rate, the degradation factor in the neighbourhood of \vec{P}_0 is

D(\vec{P}_0 - \Delta\vec{P}) = D(\vec{P}_0) - f_1\, \Delta P - f_2\, \Delta P_1 - f_3\, \Delta P_2        (V.1.4.19)

We choose \Delta P = \Delta P_1 = \Delta P_2 = h, the step width by which we move in the direction of the gradient. The best value of h has to be found by trial and error: if h is too large, the sequence \vec{P}_{(0)}, \vec{P}_{(1)}, \vec{P}_{(2)}, ..., \vec{P}_{(i)} does not converge to \vec{P}_{opt}; if h is too small, the convergence is slow. We normalize

h = \frac{s}{P^2 + P_1^2 + P_2^2}        (V.1.4.20)

and choose the desired accuracy

f_1^2 + f_2^2 + f_3^2 \leq 10^{-5}        (V.1.4.21)

A suitable step width s was found to be 0.65; for s = 1 no convergence occurred and for s = 0.5 the convergence was too slow. The iteration produces the sequence \vec{P}_{(1)}, \vec{P}_{(2)}, ..., \vec{P}_{(n)}, where \vec{P}_{(n)} can be made as close to \vec{P}_{opt} as desired. Starting from the initial array \vec{P}_0, the sequence is computed using

P_{(i)} = P_{(i-1)} - f_1\big(P_{(i-1)}, P_{1(i-1)}, P_{2(i-1)}\big)\, h        (V.1.4.22)

P_{1(i)} = P_{1(i-1)} - f_2\big(P_{(i-1)}, P_{1(i-1)}, P_{2(i-1)}\big)\, h        (V.1.4.23)

P_{2(i)} = P_{2(i-1)} - f_3\big(P_{(i-1)}, P_{1(i-1)}, P_{2(i-1)}\big)\, h        (V.1.4.24)

The results P_{opt}, P_{1\,opt} and P_{2\,opt} were computed and are plotted in Figure V.1.4.3 versus the sampling rate K. All three curves P_{opt}(K), P_{1\,opt}(K) and P_{2\,opt}(K) show the same characteristics, i.e. the optimum levels increase by 1.65% to 1.8% between Nyquist rate and rate K = 6.
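At Nyquist rate the decomposed form (V.1.4.8) makes the iteration easy to try out. The sketch below is not the thesis program: it uses finite-difference gradients in place of the analytic f_1, f_2, f_3, a fixed step instead of the normalized h, and assumes \kappa = 2. Started from a rough initial point it settles near the values quoted above, P \approx 0.61, P_1 \approx 0.42, P_2 \approx 1.27.

```python
import math

def Q(p):
    """Gaussian tail probability (the thesis's erfc)."""
    return 0.5 * math.erfc(p / math.sqrt(2.0))

KAPPA = 2.0  # step width of the 5-level quantizer, as in the thesis figures

def D_3x5_nyquist(P, P1, P2):
    """3x5 level degradation factor at Nyquist rate, in the product form (V.1.4.8)."""
    Rx = 2.0 * Q(P)
    Ry = 2.0 * (Q(P1) + (KAPPA**2 - 1.0) * Q(P2))
    fx = math.exp(-P * P / 2.0)
    fy = math.exp(-P1 * P1 / 2.0) + (KAPPA - 1.0) * math.exp(-P2 * P2 / 2.0)
    return (math.pi / 2.0) * math.sqrt(Rx * Ry) / (fx * fy)

def gradient_descent(p, step=0.02, iters=5000, h=1e-5):
    """Descend on D using finite-difference gradients (a stand-in for f1, f2, f3)."""
    p = list(p)
    for _ in range(iters):
        grad = []
        for k in range(3):
            up = p[:]; up[k] += h
            dn = p[:]; dn[k] -= h
            grad.append((D_3x5_nyquist(*up) - D_3x5_nyquist(*dn)) / (2.0 * h))
        p = [pk - step * gk for pk, gk in zip(p, grad)]
    return p

P, P1, P2 = gradient_descent([0.5, 0.5, 1.0])
print(P, P1, P2, P1 * (KAPPA + 1.0) / P2, D_3x5_nyquist(P, P1, P2))
# expected: roughly 0.61, 0.42, 1.27; the fourth value checks P1 = P2/(kappa+1)
```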
It can be seen that \vec{P}_0 found at Nyquist rate is a good approximation for any K > 2.

V.2 Decomposition into Single Channel Correlators

Under certain circumstances D can be expressed as the product of two single-channel factors, D_x and D_y:

D = D_x\, D_y        (V.2.1)

D_x and D_y are the degradation factors of a correlator which has a quantizer in only the x- or only the y-channel respectively, the other channel carrying analog signals. We investigate in this section the circumstances under which such a decomposition is valid, and the errors that occur if the decomposition is assumed to be always true. It will be recalled that the degradation factor of a quantized correlator depends on its output signal-to-noise ratio:

D \propto \frac{\sigma_w}{\bar{w}}        (V.2.2)

Now it is shown in Appendix A2 that for small signals \bar{w} can always be decomposed into an x-component and a y-component:

\bar{w} = \bar{q_z} = \overline{q_x q_y} = \bar{q_x}\, \bar{q_y}        (V.2.3)

It remains to be seen whether \sigma_w can be written in the same way. For small signals and symmetrical quantizers \sigma_w^2 = \overline{w^2}, and, from equations (III.1.2.24) and (III.1.2.21), we can express this variance as

\overline{w^2} = \frac{1}{N} \sum_{i=-(N-1)}^{N-1} \left(1 - \frac{|i|}{N}\right) R_{q_x}(i)\, R_{q_y}(i)        (V.2.4)

and, since lim_{i \to \infty} R_{q_x}(i) = lim_{i \to \infty} R_{q_y}(i) = 0,

\overline{w^2} \approx \frac{1}{N} \sum_{i=-(N-1)}^{N-1} R_{q_x}(i)\, R_{q_y}(i)        (V.2.5)

for large N. This can be expressed as a product of a function of R_{q_x}(i) and a function of R_{q_y}(i) only if

\sum_{\substack{i=-(N-1) \\ i \neq 0}}^{N-1} R_{q_x}(i)\, R_{q_y}(i) = 0        (V.2.6)

i.e. if R_{q_x}(i) and R_{q_y}(i) are orthogonal. It can be seen that for any symmetric quantizer

\mathrm{sign}\big(R_{q_x}(\tau)\big) = \mathrm{sign}\big(\rho(\tau)\big)        (V.2.7)

where \rho(\tau) is the normalized autocorrelation function of the unprocessed signal x(t), so R_{q_x}(\tau) has the same zero crossings as \rho(\tau). Since x(t) and y(t) have the same autocorrelation function,

\sum_{i=1}^{N-1} \left(1 - \frac{i}{N}\right) R_{q_x}(i)\, R_{q_y}(i) \geq 0        (V.2.8)

and equality holds only if \rho(i) = 0 for all i \neq 0, i.e. if the samples are uncorrelated. For a band-limited signal this is the case only if we sample at Nyquist rate, where

\overline{w^2} = \frac{1}{N}\, R_{q_x}(0)\, R_{q_y}(0)        (V.2.9)

Therefore D can be expressed as the product of two single-channel degradation factors only if we sample at Nyquist rate.

V.2.1 Single-Channel Correlation Factors and Decomposition Error

Define D_{n\times\infty}(K) to be the "single-channel degradation factor", i.e. the degradation factor of a correlator with an n-level quantizer in the x-channel and no quantizer in the y-channel. The expected value of the correlator output w is given by equations (III.1.2.3) and (III.1.5.4) as

\bar{w} = s_o\, \overline{q_x(i)}        (V.2.1.1)

and the expected value of w^2 is obtained using equations (III.1.2.21), (III.1.2.36), (III.1.2.37) and (III.1.5.7). Using these in (III.1.6.1) and (II.2.3) we find

D_{n\times\infty}(K) = \frac{\sqrt{\pi}}{\sqrt{K}\, f_x(a_i, P_i)}\, \left[ \sum_{i=-(N-1)}^{N-1} \left(1 - \frac{|i|}{N}\right) R_{q_x}(i)\, \rho(i) \right]^{1/2}        (V.2.1.2)

For Nyquist-rate sampling we define

D_x = D_{n\times\infty}(2)        (V.2.1.3)

Using (V.2.1.2) for K = 2 we obtain

D_x = \sqrt{\frac{\pi}{2}}\; \frac{\sqrt{R_{q_x}(0)}}{f_x(a_i, P_i)}        (V.2.1.4)

Similarly, the single-channel degradation factor for the y-channel containing an m-level quantizer is

D_y = D_{\infty\times m}(2) = \sqrt{\frac{\pi}{2}}\; \frac{\sqrt{R_{q_y}(0)}}{f_y(a_i, P_i)}        (V.2.1.5)

An n x m level correlator sampled at Nyquist rate then has the degradation factor

D_{n\times m}(2) = D_x\, D_y = \frac{\pi}{2}\; \frac{\sqrt{R_{q_x}(0)\, R_{q_y}(0)}}{ f_x(a_i, P_i)\, f_y(a_i, P_i) }        (V.2.1.6)

Equation (V.2.1.6) agrees with (III.1.6.3) for sampling at Nyquist rate. We define a "decomposition error" \xi_{nm}(K) as

\xi_{nm}(K) = \frac{ D_{n\times m}(K) - D_{n\times\infty}(K)\, D_{\infty\times m}(K) }{ D_{n\times m}(K) }        (V.2.1.7)

This value measures the relative error made by decomposing the degradation factor D_{n\times m}(K) into a product of two single-channel degradation factors, D_{n\times\infty}(K) D_{\infty\times m}(K).
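A quick numerical check of (V.2.1.6) can be made with the Nyquist-rate quantities above (a sketch, not from the thesis; it assumes \kappa = 2 and the optimized decision levels quoted earlier, and "erfc" is again the Gaussian tail probability).

```python
import math

def Q(p):
    return 0.5 * math.erfc(p / math.sqrt(2.0))   # the thesis's erfc

def Dx_3level(P):
    """Single-channel factor D_x of eq. (V.2.1.4) for a 3-level quantizer."""
    return math.sqrt(math.pi / 2.0) * math.sqrt(2.0 * Q(P)) / math.exp(-P * P / 2.0)

def Dy_5level(P1, P2, kappa=2.0):
    """Single-channel factor of eq. (V.2.1.5) for a 5-level quantizer."""
    Ry = 2.0 * (Q(P1) + (kappa**2 - 1.0) * Q(P2))
    fy = math.exp(-P1**2 / 2.0) + (kappa - 1.0) * math.exp(-P2**2 / 2.0)
    return math.sqrt(math.pi / 2.0) * math.sqrt(Ry) / fy

# decomposition is exact at Nyquist rate: 3x3 gives Dx*Dx, 3x5 gives Dx*Dy
print(Dx_3level(0.612) ** 2)                        # ~1.235
print(Dx_3level(0.612) * Dy_5level(0.422, 1.266))   # ~1.16
```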
This "decomposition" error is plotted in Figure v .2.1.1 nxoo mxoo ' * against the normalized sampling frequency, K. There is no error at K=2. For higher values, the error soon settles down to a constant value ofsj.3%.. Decomposing the degradation factor under these circumstances leods to a small error, but is still 1. 3X3 levels 2. 3*5 levels decision levels: optimized values for K = 2 4 , f , 1 . h 1 • 1 .——i . 1 . 1 -* 22 ^> 8 70 12 1C 16 Figure V.2.1.1 Decomposition error £^ versus sampling rate K for 3x3 and 3*5 level correlator 0.7 0.6 .. 0.5 OJr .. 0.3 0.2 .. 0.1 useful as a first approximation. V.3 Degradation for Strong Signals In all previous calculations the signal power was assumed much less than the noise power. This is true for most cases of practical inter est. However, it seemed advisable to consider in at least one case how the degradation factor changes -when the signal power is no longer small. This is done for a 2 x 2 level correlator at infinite sampling rate, i.e. a polarity coincidence detector. The following two assumptions, made for small signals, do not hold for strong signals: (a) The Gaussian signal s(t) can be replaced by a d-c signal s = a . o s (b) The variance a can be computed in absence of the signal, s(t). For arbitrary signals, s (t), the general definition of D, given in equation (III.2.5) must be used. The degradation of a 2 x 2 level correlator for strong signals has been treated by Cheng'', with the difference that he used on RC-network for the integration. (His "degradation factor, r", is the square root of D defined in this work). Fig. V.3.1 shows the block diagram of a polarity-coincidence correlator as used for strong signals. n X ss(f) y +•: + x(t) y(t) -1 Figure V.3.1 2x2 level correlator for strong signals We assume the signals s(t), n (t) and n (t) are bandlimited, statiscally x y independent, Gaussian, zero-mean signals, and have a flat power-density spectrum within B. Therefore, the normalized autocorrelation function J°(T) is given by Rn (T) Rn (T) R (x) f(-r)=-^2— = —= -^r- (V.3.1) a 2 a 2 n s V.3.1 Unquantized Correlator for Strong Signals The expected value of w is found as dt = xy - 1 w = — T I x(t) y(t) o = n n +n s+n s + s^ x y x y (V.3.1.1) Since n (t) andn (t) are assumed to be zero-mean, « y The signals s(t), n (t) and n (t) are assumed to be ergodic. Therefore x y 2 the expected value of w can be found as the time-average w2 = — j f z(a) z(3) da d3 T X o o 2 T I (1~Y) RZ(t) dT (V.3.1.3) where the autocorrelation function of z(t), R (T) = z(t) z(t +x) z R (x) R (T) + R (T) R (T) + R (T) R (T) n. n n x n s x y x y + s2(t) s2(t + T) - og4 (V.3.1.4) 2 2 1 The expected value of s (t) s (t+x) was found in as s2(t) s2(t+T) = as4(l+2p2(x))' (V.3.1.5) Therefore, Rz(T) = p2(x) (as4 + (an2 + a/)2) (V.3.1.6) 2 = a 4 p2(x) (~~) (V.3.1.7) d where a 2 d = —~ j (V.3.1.8) a + a s n Using equation (V.3.1.7) in (V.3.1.3) we finally obtain w2 =| as4 • j (1-i) P2(x)dx + as4 (V.3.1.9) d o assuming the power density spectra of s(l), n (t) and n (t) as which x y bandlimited to B, we obtain P(t) = Sa (2TTBX) (V.3.1.10) The standard deviation of \<r is defined as o =^/w2 - (w)2 (V.3.1.11) w Therefore, 1 2n/l+? 0WA/T °s " d2 [ fd-f) P2(T) dt]2 (V.3.1.12) and using the definition (II.2.4) MDS . 0w (fd, . 0w Wf [ /(I- f) P2(X> dx]1 (V.3.1.13) assuming a large integration time T, we obtain, using (V.3.1.10) lira (l-±) pZ(x) dx = J Sa (2^Bx) dx = (V.3.1.14) Therefore, (MDS) . = a 2"^£±2^==i^ (V. 
V.3.2 Application to a 2x2 Level Correlator

The two signals x(t) and y(t) (see Figure V.3.1) have the joint probability density

p_{xy}(x, y) = \frac{1}{2\pi(\sigma_s^2 + \sigma_n^2)\sqrt{1 - r^2}}\, \exp\!\left[ -\frac{x^2 - 2rxy + y^2}{2(\sigma_s^2 + \sigma_n^2)(1 - r^2)} \right]        (V.3.2.1)

where

r = d = \frac{\sigma_s^2}{\sigma_s^2 + \sigma_n^2}        (V.3.2.2)

The expected value of w is given by

\bar{w} = \overline{q_x q_y} = \int_0^{\infty}\!\!\int_0^{\infty} p_{xy}\, dx\, dy + \int_{-\infty}^{0}\!\!\int_{-\infty}^{0} p_{xy}\, dx\, dy - 2\int_0^{\infty}\!\!\int_{-\infty}^{0} p_{xy}\, dx\, dy        (V.3.2.3)

Evaluating these integrals, we find

\int_0^{\infty}\!\!\int_0^{\infty} p_{xy}(x, y)\, dx\, dy = \frac{1}{2} - \frac{1}{2\pi}\arctan\frac{\sqrt{1 - d^2}}{d}        (V.3.2.4)

and

\int_{-\infty}^{0}\!\!\int_0^{\infty} p_{xy}(x, y)\, dx\, dy = \frac{1}{2\pi}\arctan\frac{\sqrt{1 - d^2}}{d}        (V.3.2.5)

Therefore,

\bar{w} = 1 - \frac{2}{\pi}\arctan\frac{\sqrt{1 - d^2}}{d} = \frac{2}{\pi}\arcsin d        (V.3.2.6)

The expected value of w^2 is given by the time average

\overline{w^2} = \overline{ \left( \frac{1}{T}\int_0^T q_z(t)\, dt \right)^2 } = \frac{2}{T}\int_0^T \left(1 - \frac{\tau}{T}\right) R_{q_z}(\tau)\, d\tau        (V.3.2.7)

and R_{q_z}(\tau) can be found as

R_{q_z}(\tau) = \frac{4}{\pi^2}\left[ \big(\arcsin\rho(\tau)\big)^2 - \big(\arcsin(\rho(\tau)\, d)\big)^2 \right]        (V.3.2.8)

Therefore,

\sigma_w = \frac{2}{\pi}\left( \frac{2}{T}\int_0^T \left(1 - \frac{\tau}{T}\right) \left\{ \big(\arcsin\rho(\tau)\big)^2 - \big(\arcsin(\rho(\tau) d)\big)^2 \right\} d\tau \right)^{1/2}        (V.3.2.9)

The derivative of \bar{w} with respect to \sigma_s^2 is given by

\frac{d\bar{w}}{d(\sigma_s^2)} = \frac{2}{\pi}\; \frac{1}{ (\sigma_s^2 + \sigma_n^2)\sqrt{1 + 2(\sigma_s/\sigma_n)^2} }        (V.3.2.10)

Using equation (II.2.4) with R = \sigma_s^2 we find

\mathrm{MDS} = (\sigma_s^2 + \sigma_n^2)\sqrt{1 + 2\left(\frac{\sigma_s}{\sigma_n}\right)^2} \left( \frac{2}{T}\int_0^T \left(1 - \frac{\tau}{T}\right) \left\{ \big(\arcsin\rho(\tau)\big)^2 - \big(\arcsin(\rho(\tau) d)\big)^2 \right\} d\tau \right)^{1/2}        (V.3.2.11)

Using the definition of the degradation factor in equation (II.2.5), the degradation factor of the strong-signal correlator, denoted D_{ss\,2\times2}(\infty), is obtained as

D_{ss\,2\times2}(\infty) = \frac{\mathrm{MDS}}{(\mathrm{MDS})_{analog}} = \left\{ \frac{1 + d}{(1 - d)(1 + d^2)}\; \frac{ \int_0^T \left(1 - \frac{\tau}{T}\right)\left[ (\arcsin\rho(\tau))^2 - (\arcsin(\rho(\tau) d))^2 \right] d\tau }{ \int_0^T \left(1 - \frac{\tau}{T}\right)\rho^2(\tau)\, d\tau } \right\}^{1/2}        (V.3.2.12)

For large T, using (V.3.1.14) and (V.3.2.11) in (II.2.5), we find

D_{ss\,2\times2}(\infty) = 2\left\{ \frac{1 + d}{(1 - d)(1 + d^2)}\; B \int_0^{\infty} \left[ (\arcsin\rho(\tau))^2 - (\arcsin(\rho(\tau) d))^2 \right] d\tau \right\}^{1/2}        (V.3.2.13)

Since \rho(\tau) = Sa(2\pi B\tau) (equation V.3.1.10), D_{ss\,2\times2}(\infty) is independent of B. Bounds for D_{ss\,2\times2}(\infty) were found by Cheng⁷:

1 \leq D_{ss\,2\times2}(\infty) \leq \left\{ \frac{1 + d}{(1 - d)(1 + d^2)} \left( \frac{\pi^2}{4} - (\arcsin d)^2 \right) \right\}^{1/2}        (V.3.2.14)

The lower bound shows only that D \geq 1 for any d, which is a very loose bound. Using equation (V.3.2.13) for small input signal-to-noise ratios (d \ll 1), we obtain

D_{2\times2}(\infty) = 2\sqrt{B}\left[ \int_0^{\infty} \big(\arcsin\rho(\tau)\big)^2 d\tau \right]^{1/2} = \sqrt{\frac{2}{\pi}} \left[ \int_0^{\infty} \big(\arcsin \mathrm{Sa}(x)\big)^2 dx \right]^{1/2}        (V.3.2.15)

Equation (V.3.2.15) agrees with the expression for D_{2\times2}(\infty) found in (III.2.1.13), which was evaluated to be D_{2\times2}(\infty) = 1.253. In Figure V.3.2.1, D_{ss\,2\times2}(\infty) is plotted versus the input signal-to-noise ratio (\sigma_s/\sigma_n)^2.

We notice that D goes to infinity for high input signal-to-noise ratios, which can be explained as follows. In the limit where (\sigma_s/\sigma_n)^2 goes to infinity, x(t) = s(t) and y(t) = s(t), so that x(t) and y(t) always have the same sign. Then q_z \equiv 1, the output w has zero variance, and the correlator gives no information about the signal s(t); obviously D_{ss\,2\times2}(\infty) goes to infinity as the input signal-to-noise ratio does. This is not true, however, if more than two quantizer levels are used, because even in the absence of noise the multiplier output is then a function of the signal amplitude. The strong-signal degradation factor for higher-level correlators has not been investigated, because the power of signals investigated in radio astronomy is always far below the noise power. D stays virtually constant up to (\sigma_s/\sigma_n)^2 \approx 0.25, i.e. our assumption \sigma_s^2 \ll \sigma_n^2 is valid for \sigma_s^2 < \sigma_n^2/4. This result was also found by Yerbury¹³.
Figure V.3.2.1  Strong-signal degradation factor versus input signal-to-noise ratio for a 2x2 level correlator

VI. OVERALL CONCLUSIONS

In designing a digital correlation spectrometer, various possible logic schemes can be considered. Those employing fine quantization with many digital levels degrade the measurements very little, but are costly and complicated to instrument. Simple schemes employing coarse quantization degrade the signal-to-noise ratio appreciably. A balance must be struck between excessive complexity and excessive degradation. The results in this thesis help the designer make such a choice by giving him the degradation factors for a variety of logic schemes, not only for the Nyquist sampling rate but also for higher sampling rates.

In the course of the calculations, some interesting theoretical results were found, particularly concerning the variation of the optimum decision levels with sampling rate, and the possibility of decomposing the degradation factor into components. The numerical results are of practical interest. For example, it is shown that a 3-level x 3-level logic scheme sampling at four times the bandwidth has a slightly lower degradation than the 3-level x 5-level scheme sampling at the Nyquist rate which is used by Whyte¹². Such a 3x3 level scheme is probably also easier to build. The numerical results are expected to fill all foreseen needs. As integrated-circuit technology advances, it becomes progressively more practical to use many-level multiplication and averaging, and the degradation due to sampling then becomes insignificant.

APPENDIX A1
Proof that D = 1 for all K \geq 2 for the Unquantized Correlator

Claim — An unquantized dual-channel correlator which is sampled at the Nyquist rate or faster has a constant degradation factor D = 1 for any sampling rate K. In particular, we claim that

2 \sum_{i=-\infty}^{\infty} \mathrm{Sa}^2\!\left(\frac{2\pi i}{K}\right) = K        (A1.1)

Proof — We pass white Gaussian noise n(t) through two ideal low-pass filters. The first filter has a bandwidth KB/2, the second filter has a bandwidth B. The output of the second filter, n''(t), is sampled at a rate KB, as shown in Figure A1.1.

Figure A1.1  White noise band-limited by two low-pass filters

The function n'(t) can be completely reconstructed from its samples taken 1/(KB) apart using the relation

n'(t) = \sum_{i=-\infty}^{\infty} a_i'\, \mathrm{Sa}\big(\pi(KBt - i)\big)        (A1.2)

a_i' = n'(i\tau)        (A1.3)

\tau = \frac{1}{KB}        (A1.4)

(for reference see [2], p. 49). We assume the average power of the signal n'(t) to be 1, i.e.

\int_{-KB/2}^{KB/2} S_{n'}(f)\, df = 1        (A1.5)

or

S_{n'}(f) = \frac{1}{KB} \quad \text{for } |f| \leq \frac{KB}{2}        (A1.6)

It follows that

\overline{n'(t)^2} = R_{n'}(0) = 1        (A1.7)

where the autocorrelation function of n'(t) is given by the Fourier transform of S_{n'}(f),

R_{n'}(\tau) = \mathcal{F}^{-1}\{ S_{n'}(f) \} = \mathrm{Sa}(KB\pi\tau)        (A1.8)

Since we sample at the rate KB, the Nyquist rate for the bandwidth KB/2, the a_i' are independent. We now process the signal n'(t) of equation (A1.2) through the second low-pass filter. The average power of n''(t) is proportional to the bandwidth of the second filter, since n'(t) is white. Therefore,

\overline{n''(t)^2} = \int_{-B}^{B} \frac{1}{KB}\, df = \frac{2}{K}        (A1.9)

The signal n''(t) can be reconstructed from its samples taken 1/(KB) apart, because n''(t) is band-limited to B and sampled at a rate K \geq 2. Therefore,

n''(t) = \frac{2}{K} \sum_{i=-\infty}^{\infty} a_i'\, \mathrm{Sa}\!\left(\pi\!\left(2Bt - \frac{2i}{K}\right)\right)        (A1.10)

and

n''(0) = \frac{2}{K} \sum_{i=-\infty}^{\infty} a_i'\, \mathrm{Sa}\!\left(\frac{2\pi i}{K}\right)        (A1.11)

where

\overline{n''(0)^2} = \frac{4}{K^2} \sum_{i=-\infty}^{\infty}\sum_{j=-\infty}^{\infty} \overline{a_i' a_j'}\; \mathrm{Sa}\!\left(\frac{2\pi i}{K}\right) \mathrm{Sa}\!\left(\frac{2\pi j}{K}\right)        (A1.12)

The samples a_i' and a_j' are taken at the Nyquist rate for the first filter output and are therefore independent, i.e.
\overline{a_i' a_j'} = 0 \text{ if } i \neq j, \quad = 1 \text{ if } i = j        (A1.13)

since

\overline{a_i'^2} = R_{n'}(0) = 1        (A1.14)

Therefore the expected value of n''(0)^2 is given by

\overline{n''^2(0)} = \frac{4}{K^2} \sum_{i=-\infty}^{\infty} \mathrm{Sa}^2\!\left(\frac{2\pi i}{K}\right)        (A1.15)

and, since n(t) is an ergodic process,

\overline{n''^2(t)} = \overline{n''^2(0)} = \overline{n''^2}        (A1.16)

Using equation (A1.9) in (A1.15) we finally get

\frac{4}{K^2} \sum_{i=-\infty}^{\infty} \mathrm{Sa}^2\!\left(\frac{2\pi i}{K}\right) = \frac{2}{K}        (A1.17)

or

\sum_{i=-\infty}^{\infty} \mathrm{Sa}^2\!\left(\frac{2\pi i}{K}\right) = \frac{K}{2}        (A1.18)

(A1.18) is equal to our claim (A1.1), which completes the proof. The above relation, used in (III.1.5.9), shows that the output signal-to-noise ratio of an analog correlator is independent of K, and therefore the degradation factor equals unity for any K \geq 2.
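A small numerical check of (A1.18) (a sketch, not part of the thesis; the truncation point of the sum is arbitrary):

```python
import math

def Sa(x):
    return 1.0 if x == 0.0 else math.sin(x) / x

def sum_Sa_squared(K, terms=100000):
    """Partial sum of Sa^2(2*pi*i/K) over i = -terms .. terms."""
    return 1.0 + 2.0 * sum(Sa(2.0 * math.pi * i / K) ** 2 for i in range(1, terms + 1))

for K in (2, 3, 4, 8):
    print(K, sum_Sa_squared(K), K / 2.0)   # the sum converges to K/2, as claimed in (A1.1)
```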
APPENDIX A2
Replacement of General Signals by d-c in the Calculation of Degradation Factors

The general model of a correlator shown in Figure II.2.1 contains two signals s_x(t) and s_y(t) with a cross-correlation factor R = \overline{s_x s_y}. No assumptions are made about the spectral or statistical nature of the signals, except that they are ergodic, small compared with the noise, independent of the noise sources, and limited in frequency to the range 0 to B. Calculations of degradation factors using such general signals are clumsy, and it is shown in this appendix that the degradation factors so found are identical with those obtained in a much simpler "d-c case", where it is assumed that

s_x(t) = s_y(t) = s_o = \text{constant}        (A2.1)

so that

R = \overline{s_x s_y} = s_o^2        (A2.2)

It will be recalled (equations II.2.5 and II.2.4) that the degradation factor D is defined as the ratio of two minimum detectable signals,

D = \frac{\mathrm{MDS}}{(\mathrm{MDS})_{analog}}

and that the MDS for any given system can be written as

\mathrm{MDS} = \frac{\sigma_w}{ d\bar{w}/dR }

Now, as long as the signal is small, the standard deviation of the output, \sigma_w, is determined entirely by the noise sources and is independent of the nature of the signals; it is usually evaluated assuming s_x = s_y = 0. It remains then to show that for every correlator the quantity d\bar{w}/dR is the same under these two assumptions: (1) s_x(t) = s_y(t) = s_o = constant — the d-c case; and (2) s_x(t) and s_y(t) have arbitrary spectra and statistics, subject to the limitations set out above — the general case.

A2.1 Evaluation of d\bar{w}/dR for the d-c Case

In the absence of the signal (when s_o = 0) the x-processor has at its input white Gaussian band-limited noise with mean value \bar{x} = 0. We assume, for simplicity, that the transfer function of the processor is symmetrical about zero volts; in that case its output q_x will also have mean value \bar{q}_x = 0. If now a small d-c signal s_o is present, the variance of x(t) will not change but its mean value will no longer be zero:

\overline{x(t)} = s_o        (A2.1.1)

Hence the output of the x-processor will no longer average exactly zero, but will have a mean value

\bar{q}_x = s_o \left( \frac{d\bar{q}_x}{d\bar{x}} \right)        (A2.1.2)

where the derivative is a function of the processor used and of the amount of noise at the input; since the signals are small, it can be evaluated at \bar{x} = 0. Similarly,

\bar{q}_y = s_o \left( \frac{d\bar{q}_y}{d\bar{y}} \right)        (A2.1.3)

Now the correlator output will have a mean value given by

\bar{w} = \bar{q}_z = \overline{q_x q_y}        (A2.1.4)

But q_x and q_y are statistically independent, since their fluctuations are due to statistically independent noise sources. Therefore,

\bar{w} = \bar{q}_x\, \bar{q}_y = s_o^2 \left( \frac{d\bar{q}_x}{d\bar{x}} \right)\left( \frac{d\bar{q}_y}{d\bar{y}} \right)        (A2.1.5)

and, since R = s_o^2, we have

\frac{d\bar{w}}{dR} = \left( \frac{d\bar{q}_x}{d\bar{x}} \right)\left( \frac{d\bar{q}_y}{d\bar{y}} \right)        (A2.1.6)

A2.2 Evaluation of d\bar{w}/dR for the General Case

In the general case we consider first the situation when the signal s_x happens to lie within an interval ds_x of value s_1, and simultaneously s_y lies within an interval ds_y of value s_2. The joint probability of this happening we can write as

\text{joint probability} = p_{s_x s_y}(s_1, s_2)\, ds_x\, ds_y        (A2.2.1)

Given these assigned values for s_x and s_y, what is now the expected value of w? We denote this as (\bar{w})_{1,2}. It is given by

(\bar{w})_{1,2} = (\bar{q}_z)_{1,2} = (\overline{q_x q_y})_{1,2}        (A2.2.2)

Again, for fixed values of s_x and s_y, the fluctuations in q_x and q_y are independent, so that we can decompose the last expression into

(\bar{w})_{1,2} = (\bar{q}_x)_1\, (\bar{q}_y)_2 = s_1 \left( \frac{d\bar{q}_x}{d\bar{x}} \right) s_2 \left( \frac{d\bar{q}_y}{d\bar{y}} \right)        (A2.2.3)

To find now the overall average value of w for all possible combinations of s_x and s_y, we multiply each expression like that above by the probability of its occurrence and integrate over all values of s_x, s_y. Thus

\bar{w} = \iint (\bar{w})_{x,y}\, p_{s_x s_y}(s_x, s_y)\, ds_x\, ds_y = \left( \frac{d\bar{q}_x}{d\bar{x}} \right)\left( \frac{d\bar{q}_y}{d\bar{y}} \right) \iint s_x s_y\, p_{s_x s_y}(s_x, s_y)\, ds_x\, ds_y = \left( \frac{d\bar{q}_x}{d\bar{x}} \right)\left( \frac{d\bar{q}_y}{d\bar{y}} \right) \overline{s_x s_y}        (A2.2.4)

But R = \overline{s_x s_y}, so that

\frac{d\bar{w}}{dR} = \left( \frac{d\bar{q}_x}{d\bar{x}} \right)\left( \frac{d\bar{q}_y}{d\bar{y}} \right)        (A2.2.5)

This is identical to equation (A2.1.6) above, which proves that the degradation factor of a given system with arbitrary signals s_x and s_y can be evaluated using the simpler d-c case where both signals are made equal to a small constant.

REFERENCES

1. Papoulis, A., Probability, Random Variables and Stochastic Processes, McGraw-Hill, 1965.
2. Lathi, B. P., Introduction to Random Signals and Communication Theory, International Textbook Co., 1968.
3. van Trees, H. L., Detection, Estimation and Modulation Theory, J. Wiley and Sons Inc., 1968.
4. Gradshteyn, I. S. and Ryzhik, I. M., Table of Integrals, Series and Products, Academic Press, 1971.
5. Bowers, F. K., "Multi-level Correlation Spectrometer for Radioastronomy", Convention Digest, I.E.E.E., March 1971.
6. Burns, W. R. and Rao, S., "Clipping Loss in the 1-Bit Correlation Spectral Line Receiver", Radio Science, Vol. 4, No. 5, pp. 431-436, May 1969.
7. Cheng, M. C., "The Clipping Loss in Correlation Detectors for Arbitrary Input Signal-to-Noise Ratios", I.E.E.E. Transactions on Information Theory, Vol. IT-14, No. 3, May 1968.
8. Cooper, B. F. C., "Correlators with 2-Bit Quantization", Aust. J. Phys., 23, pp. 521-527, 1970.
9. Ekre, H., "Polarity Coincidence Correlation Detection of a Weak Noise Source", I.E.E.E. Transactions on Information Theory, Vol. IT-9, pp. 18-23, January 1963.
10. Watts, D. G., "A General Theory of Amplitude Quantization with Applications to Correlation Determination", The Institution of Electrical Engineers, Monograph No. 481M, November 1961.
11. Weinreb, S., "A Digital Spectral Analysis Technique and its Applications to Radio Astronomy", M.I.T. Technical Report No. 412, Cambridge, Mass., August 1963.
12. Whyte, D. A., "A Multi-level Digital Correlation Spectrometer", M.A.Sc. Thesis, Department of Electrical Engineering, University of British Columbia, January 1972.
13. Yerbury, M. J., "Amplitude Limiting Applied to a Sensitive Correlation Detector", The Radio and Electronic Engineer, July 1967.
14. Brenner, N., "Cooley-Tukey Fast Fourier Transform, FOURT1", M.I.T. Dept. of Geophysics, IBM Contributed Program Library, No. 360D - 13.4.001.
