A Modification of OPM — A Signal-Independent Methodology for Single-Trial Signal Extraction

by Steven George Mason
B.E.Sc., The University of Western Ontario, London, 1987
B.Sc., The University of Western Ontario, London, 1987

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF ELECTRICAL ENGINEERING

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
November 1990
© Steven George Mason, 1990

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering
The University of British Columbia
Vancouver, Canada

Abstract

Initial investigations of the Outlier Processing Method (OPM), first introduced by Birch [1][2][3] in 1988, have demonstrated a promising ability to extract a special class of signals, called highly variable events (HVEs), from coloured noise processes. The term HVE is introduced in this thesis to identify a finite-duration signal whose shape and latency vary dramatically from trial to trial and which typically has a very low signal-to-noise ratio (SNR). This thesis presents a modified version of the original OPM algorithm, which can generate an estimate of the HVE with significantly less estimation noise than the original OPM algorithm.
Simulation experiments are used to identify the strengths and limitations of this modified OPM algorithm for linear and stationary processes and to compare the modified algorithm's performance to that of the original algorithm and to that of a minimum mean-square-error (MMSE) filter. The results of these experiments verify that the modified algorithm can extract an HVE with less estimation noise than the original algorithm. The results also show that the MMSE filter is unsuitable for extracting HVEs and that its performance is generally inferior to the modified algorithm's performance. The experiments indicate that the modified algorithm can extract HVEs from a linear and stationary process for SNR levels above -2.5dB and can work effectively above -7.5dB for HVEs with certain characteristics.

Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgments
1 Introduction
2 Review of Signal Extraction Methods
2.1 The Signal Extraction Problem
2.2 Common Signal Extraction Methods
2.2.1 Spectral Separation
2.2.2 Wiener Filters
2.2.3 Ensemble Averaging
2.2.4 Linear MMSE Filters
2.2.5 Autoregressive Modeling with Exogenous Inputs (ARX)
2.3 A Comparison of Methods
3 Theory of OPM
3.1 The Original OPM Algorithm
3.1.1 Noise Model Determination
3.1.2 The Cleaner
3.1.3 Comments on OPM
3.2 The Modified OPM Algorithm
3.2.1 Comments on the Modified OPM Algorithm
3.3 A Comparison of Methods
3.4 A Linear and Stationary OPM Implementation
3.4.1 Description of Implementation
3.4.2 Summary of Implementation Details
4 Experimental Description
4.1 Description of Experimental System
4.1.1 Design Details of the Modified and Original OPM Modules
4.1.2 Design of the MMSE Filter Module
4.1.3 Performance Measures
4.2 Experimental Results
4.2.1 The Performance of the Modified OPM Algorithm
4.2.2 The Relative Performance of the Modified Algorithm
5 Conclusions
5.1 Summary of Major Results and Related Conclusions
5.2 Significant Contributions
5.3 Areas for Future Research
6 Glossary
References
Appendix A Autoregressive Noise and Event Models
Appendix B Selecting the OPM System Parameters
Appendix C Simulation Results

List of Tables

2.1 A summary of the existing signal-extraction techniques
3.1 A summary of the existing signal-extraction techniques compared to the OPM techniques
4.1 Experimental system parameters for the modified OPM algorithm
4.2 Summary of performance limits at 0dB
4.3 Average error per sample for zero event experiments
A.1 Noise and event models

List of Figures

1.1 A typical highly variable event (HVE) observed during three trials
2.1 Typical signals
2.2 ARX model of y(t) generation
2.3 Model of x(t) estimation using calculated transfer functions A(z), B(z) and C(z)
3.1 The original OPM methodology
3.2 Example parameter estimator
3.3 The OPM Cleaner
3.4 An example influence function
3.5 Example influence functions for two different times: t1 and t2
3.6 The modified OPM algorithm
3.7 Example 3-part influence function
3.8 The time-varying IF with thresholds controlled by I(t)
3.9 I(t) as a function of W(t)
3.10 a) Cleaner's influence on a pure noise sequence as a function of the IF a threshold; b) the distribution of W(t) values for a threshold set at X. Note that W(t) is centered at 1.0 because the variance of the noise sequence was 1.0.
4.1 Experimental System
4.2 The power spectra of the three noise models used in the simulations: a) AR(8) model, b) AR(12) model and c) white noise model
4.3 The power spectra of the four event models used in the simulations: a) event #4; b) event #5; c) event #6; d) event #7
4.4 Defining primary and secondary regions of interest (ROIs)
4.5 Results of experiment with event #7, 30% length, in white noise at +5dB
4.6 Results of experiment with event #7, 20% length, in AR(8) noise at +5dB. Example of excessive estimation noise due to a non-optimal WPE setting.
4.7 Results of experiment with event #4, 20% length, in white noise at 0dB
4.8 Results of experiment with event #7, 10% length, in AR(8) noise at -5dB
4.9 Results of experiment with event #7, 10% length, in AR(8) noise at -10dB
4.10 Results of experiment with no event in AR(8) noise
4.11 Example performance measures for 20% signals in white noise
4.12 Modified algorithm performance at 0dB
4.13 Example of effects of suboptimal WPE setting
4.14 The utility thresholds for the three different noise models and events #4 and #7
4.15 Performance of all three methods for event #4 at a 10% signal length
4.16 Performance of all three methods for event #7 at a 10% signal length
B.1 Experimental system parameters for the OPM algorithms
B.2 a) Cleaner's influence on a white noise sequence as a function of the IF a threshold; b) the experimentally determined W(t) distribution when the a threshold was set to a0
B.3 a) Cleaner's influence on an AR(8) noise sequence as a function of the IF a threshold; b) the experimentally determined W(t) distribution when the a threshold was set to a0
B.4 a) Cleaner's influence on an AR(12) noise sequence as a function of the IF a threshold; b) the experimentally determined W(t) distribution when the a threshold was set to a0
B.5 a) Correlation performance as a function of the IF a threshold for various SNRs; b) absolute error performance as a function of the IF a threshold for various SNRs

Acknowledgments

First of all, I would like to thank my colleagues for the use of their ears once in a while and for their patience when I occasionally occupied two (or more) Sparc workstations. I want to express my appreciation to Rob Ross, Dave Gagne and the others who managed to hold the network together and keep the Suns up and running. Thanks guys.
I am grateful to Jian Liu from U.B.C.'s statistical consulting service, SCARP, for his advice on the statistical analysis of my experimental results. I must also thank Joe Jackson for his spiritual support throughout this thesis, especially after those early morning sessions.

I extend my deepest appreciation to both of my supervisors: Dr. G.E. Birch and Dr. M.R. Ito. I want to thank Dr. Ito for his continual support throughout my thesis and for his valuable advice, which really helped me define and present this work. I want to especially thank Dr. Birch for his enthusiastic guidance through the project and for his dedication and patience through our meetings, which usually stretched well beyond the scheduled times.

Finally, I want to thank my wife Sarah, who politely nodded every time I tried to explain my project to her.

Chapter 1 Introduction

The Outlier Processing Method (OPM) [1][2][3] is a general technique for extracting finite-duration signals (henceforth referred to as events) from a coloured noise process. The strength of OPM is that it can effectively extract a special class of events, referred to as highly-variable events (HVEs), from any coloured noise process. As illustrated in Figure 1.1, HVEs are events whose features and latency vary dramatically from trial to trial. These types of events are common in many applications, such as physiological signal analysis, and no known method exists for extracting them. (Chapter 2 reviews the common signal-extraction methods and explains why they are generally unsuitable for HVEs.)

The original OPM approach differs from conventional signal extraction techniques because it attempts to extract the noise sequence, instead of the signal, from the observed process. The estimate of the signal is then calculated as the difference between the observed process and the extracted noise estimate.
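The noise-subtraction idea just described can be sketched in a few lines. This is a toy illustration only: the 'noise estimate' below is simulated by corrupting the true noise, purely to show that whatever error remains in the noise estimate reappears directly as estimation noise in the event estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
n = rng.standard_normal(T)          # background noise (white here, for simplicity)
x = np.zeros(T)
x[200:240] = np.hanning(40)         # a short finite-duration event
y = x + n                           # observed signal: y(t) = x(t) + n(t)

# Stand-in for a noise-estimation procedure: an imperfect noise estimate
# (this corruption step is an illustrative assumption, not part of OPM).
n_hat = n + 0.1 * rng.standard_normal(T)

x_hat = y - n_hat                   # OPM-style event estimate
estimation_noise = x_hat - x        # algebraically equals n - n_hat
print(np.allclose(estimation_noise, n - n_hat))  # → True
```

The last line makes the key point explicit: x̂(t) − x(t) = n(t) − n̂(t), so the quality of the event estimate is exactly the quality of the noise estimate.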
In order to estimate the noise, OPM requires a model of the noise sequence and a method to robustly estimate the model parameters from the observed data. The heart of the OPM approach is a predictive filter, known as the Cleaner, that is used to generate the noise estimate from the observed process. This filter uses time-invariant influence functions to 'clean off' the parts of the observed sequence that do not fit the noise model.

Figure 1.1: A typical highly variable event (HVE) observed during three trials.

Since the OPM approach models the noise sequence, it is independent of the signal being extracted and, as a result, it can effectively extract HVEs.

The original OPM, however, can rarely generate the optimal event estimate because the Cleaner, with a constant level of sensitivity due to the time-invariant influence functions, cannot be set to be both a) sensitive enough to detect and remove the event, and b) insensitive enough that it does not erroneously remove parts of the noise sequence. Thus the performance of the Cleaner is compromised by these opposing conditions.

This work presents a modification of the original OPM. The modified OPM algorithm introduces adaptive time-variant influence functions which increase the Cleaner's sensitivity when the event is present. Since the Cleaner in the modified algorithm has an adaptive sensitivity level, it is able to satisfy both of the desired conditions mentioned above and so it is able to generate better event estimates than the original algorithm. The modified algorithm requires filtering the same signal twice. During the first pass, a time-invariant influence function is used to locate the general vicinity of the signal. During the second pass, a time-variant influence function (controlled by the location information obtained in the first pass) is used to extract the signal.
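The two-pass scheme can be sketched at a high level. Everything below is a simplified stand-in: the function names, the moving-power locator in pass one and the hard-threshold extraction in pass two are illustrative assumptions, not the thesis algorithm.

```python
import numpy as np

def locate_event(y, win=25, k=2.0):
    """Pass 1 (time-invariant rule): flag the general vicinity of the event
    as the region where the moving power of y exceeds k times the mean power."""
    power = np.convolve(y**2, np.ones(win) / win, mode="same")
    return power > k * np.mean(y**2)          # boolean location mask

def extract_event(y, mask, a=1.0):
    """Pass 2 (time-variant rule): outside the flagged region everything is
    treated as noise; inside it, samples larger than threshold 'a' are
    attributed to the event."""
    x_hat = np.zeros_like(y)
    inside = mask & (np.abs(y) > a)
    x_hat[inside] = y[inside]
    return x_hat

rng = np.random.default_rng(1)
y = rng.standard_normal(600)
y[300:350] += 4.0 * np.hanning(50)            # bury an event in the noise
x_hat = extract_event(y, locate_event(y))
print(int(np.argmax(np.abs(x_hat))))          # index of the largest extracted
                                              # sample, inside the 300-350 region
```

The point of the sketch is the structure, not the particular rules: pass one only has to be sensitive enough to localize the event coarsely, which then licenses a much more aggressive second pass inside that region.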
A review of the original OPM algorithm and a description of the modified OPM algorithm are presented in Chapter 3. Simulation studies were performed on a linear and stationary implementation of the modified OPM. The performance of the modified algorithm was evaluated and compared to the original OPM algorithm and to a minimum mean-square error (MMSE) filter. Chapter 4 describes the experimental setup and results.

The purpose of this work is to demonstrate that the modified OPM is a useful technique for extracting HVEs and that it performs better than the original OPM. Optimizing the performance of the technique is left for future research.

Chapter 2 Review of Signal Extraction Methods

This chapter reviews the most common signal extraction methods: spectral separation, Wiener filters, ensemble averaging, MMSE filters, and autoregressive modeling with exogenous inputs (ARX). The general strengths and limitations of each of these techniques will be discussed, with comments about how applicable each is for extracting HVEs. Before the details of any particular method are described, a review of the general signal-extraction problem is presented.

2.1 The Signal Extraction Problem

For the signals being studied, the observed signal is assumed to be modeled by the well-known additive noise model

    y(t) = x(t) + n(t)    (1)

where y(t) is the observed signal sequence, n(t) is the additive coloured noise sequence and x(t) is the event signal, which is independent of n(t). The first three signals of Figure 2.1 are examples of a typical noise sequence, an event and an observed signal. The signal extraction problem is how to estimate the event, x(t), given the observed signal, y(t), and some a priori information about the event and/or the noise. Many techniques have been developed in an attempt to solve this problem. The most common signal extraction techniques are reviewed in the next section.
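Equation (1) and the notion of an HVE can be made concrete with a short simulation: each trial below carries the same basic event shape, but with a randomly jittered latency, duration and amplitude, buried in coloured noise. The AR(1) noise model and the jitter ranges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
T, trials = 512, 3

def coloured_noise(T):
    """Simple coloured process: AR(1) noise n(t) = 0.9 n(t-1) + e(t)."""
    n = np.zeros(T)
    e = rng.standard_normal(T)
    for t in range(1, T):
        n[t] = 0.9 * n[t - 1] + e[t]
    return n

ys = []
for _ in range(trials):
    latency = rng.integers(150, 300)          # latency varies from trial to trial,
    width = rng.integers(30, 60)              # and so do shape and duration
    amp = rng.uniform(2.0, 4.0)
    x = np.zeros(T)
    x[latency:latency + width] = amp * np.hanning(width)
    ys.append(x + coloured_noise(T))          # y(t) = x(t) + n(t), equation (1)

y = np.stack(ys)
print(y.shape)  # → (3, 512)
```

Because the event's position and shape differ in every row of `y`, trial-aligned techniques have nothing stable to lock onto, which is exactly the difficulty the following sections return to.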
Note that none of these techniques was designed for HVEs, so they cannot effectively extract HVEs, although some are useful for applications where the event's latency is relatively constant.

The goal of a signal extraction technique is to generate an accurate estimate of the event with minimum noise in the event estimate. In this work, the noise in the event estimate is referred to as 'estimation noise'. Excessive estimation noise may lead to 'false positives', i.e., noise that is mistaken as part of the event.

Figure 2.1: Typical signals (the noise; the event; the observed signal (noise + event); an event estimate, with estimation noise indicated).

A typical event estimate is shown as the last signal in Figure 2.1. This event estimate x̂(t) can be compared to the actual event x(t) to see how well the technique performed. Typical performance measures are the cross-correlation between x̂(t) and x(t), which indicates how well the pattern of x(t) is extracted, and a measure of absolute error (e.g., error = Σ_t |x̂(t) − x(t)|), which indicates how accurate the estimate is at each sample point.

2.2 Common Signal Extraction Methods

2.2.1 Spectral Separation

For applications where the event and noise spectra are essentially nonoverlapping, it is possible to design a band-pass filter that passes the desired signal while attenuating the unwanted noise components. There are many well-known procedures for designing such filters; the reader is referred to the book by Rabiner and Gold [4] for examples.

Even though a band-pass filter cannot be used to extract the signal in applications with heavily overlapped spectra, the frequency band of interest is usually known, so a band-pass filter is commonly used as a data preprocessor to remove the unwanted frequency components.

2.2.2 Wiener Filters

The Wiener filter has been widely used by researchers to separate a stationary signal from stationary noise.
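The two performance measures introduced in Section 2.1 can be computed as follows. Normalising the zero-lag cross-correlation to the range [−1, 1] is an assumption on my part; the thesis's exact measures are defined later, in Section 4.1.3.

```python
import numpy as np

def correlation(x_hat, x):
    """Normalised cross-correlation at zero lag: 1.0 means the pattern of
    x(t) was extracted perfectly (up to a positive scale factor)."""
    return float(np.dot(x_hat, x) / (np.linalg.norm(x_hat) * np.linalg.norm(x)))

def absolute_error(x_hat, x):
    """Total absolute error, sum over t of |x_hat(t) - x(t)|."""
    return float(np.sum(np.abs(x_hat - x)))

x = np.hanning(64)
print(correlation(2.0 * x, x))      # ≈ 1.0 (same shape, different scale)
print(absolute_error(x, x))         # → 0.0
```

Note that the two measures answer different questions: the correlation is blind to amplitude errors (a doubled estimate still scores 1.0), while the absolute error penalises them directly.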
Although few researchers currently use it as an independent filter for extracting signals from noisy backgrounds, many use it as a data preprocessor to improve the signal-to-noise ratio. The underlying assumptions for this filter to be optimum, in the minimum mean-squared error sense, are that the signal and noise be additive and statistically independent and that each is a sample from a wide-sense stationary random process [5]. The general Wiener filter for a signal mixed in noise is given by

    H(ω) = Φxx(ω) / Φyy(ω) = Φxx(ω) / (Φxx(ω) + Φnn(ω))    (2)

where H(ω) is the frequency response of the filter being designed, Φyy(ω) is the power spectrum of the observed signal, Φxx(ω) is the power spectrum of the event and Φnn(ω) is the power spectrum of the noise. In one sense, the Wiener filter can be viewed as a sophisticated method of spectral separation.

McGillem et al. [5] provide a detailed review of how Wiener filters are used to extract signals, and they emphasize that a major source of difficulty is the ability to accurately estimate the spectral densities of the signal and the noise. Thus the effectiveness of these filters is directly related to how well the spectra can be estimated. As an example, one way of estimating these power spectra is to use the power spectrum of the ensemble average of y(t), denoted Φ_ȳȳ(ω), and the ensemble average of the N sample power spectra, denoted Φ̄yy(ω), in the following formulas to estimate Φxx(ω) and Φnn(ω) [6]:

    Φ̂xx(ω) = ( N Φ_ȳȳ(ω) − Φ̄yy(ω) ) / (N − 1)
    Φ̂nn(ω) = N ( Φ̄yy(ω) − Φ_ȳȳ(ω) ) / (N − 1)    (3)

Walter [7] and Nogawa et al. [8] used this approach to design Wiener filters for signal extraction applications, although they used an incorrect formulation of H(ω), which was corrected by Doyle [6].

2.2.3 Ensemble Averaging

One of the simplest and most popular methods of signal extraction is ensemble averaging. This technique averages together several (typically 30 to 100) observed segments.
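The averaging operation just introduced can be sketched directly. With zero-mean noise and a truly invariant signal, the residual noise standard deviation in the average falls roughly as 1/√N; the white noise and N = 100 below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)
N, T = 100, 256
x = np.hanning(T)                          # invariant signal, identical in every trial
trials = x + rng.standard_normal((N, T))   # N observed segments: y_i(t) = x(t) + n_i(t)

avg = trials.mean(axis=0)                  # the ensemble average
residual = avg - x                         # what is left of the noise

print(round(float(residual.std()), 2))     # ≈ 0.1, i.e. 1/sqrt(100) of the
                                           # original unit noise amplitude
```

The same experiment with a jittered latency in each trial (as in an HVE) would instead smear the event across the average, which is precisely the failure mode discussed next.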
Based on the assumptions that the noise is a random process with a mean of zero and that the signal is time-invariant, the averaging of several signals decreases the contribution of the noise and thus increases the signal-to-noise ratio. The noise will be reduced as a function of the square root of the number of trials [9]. In principle, averaging can extract an arbitrarily small signal relative to the background noise amplitude, if a large number of invariant trials are averaged.

Ensemble averaging generates a good estimate of the signal when the signal feature variations over the ensemble of signals are small, but when these variations, especially in latency, are significant, serious errors result [10]. Although methods have been proposed to deal with latency correction [11][12][13] and shape and latency variation [10][14][15][16][17], these only compensate for small variations.

A major disadvantage of ensemble averaging is that the result is just an average of a signal (the uniqueness of each signal is lost), so it cannot be used for extracting single-trial information. Another disadvantage is that a relatively large sample size is needed in order to increase the SNR to a reasonable level. Thus ensemble averaging is only applicable to applications where the latency is relatively stable, the uniqueness of each signal is not important and a large set of sample signals is available.

2.2.4 Linear MMSE Filters

Linear minimum mean-square error (MMSE) filters have been designed for signal extraction in many areas. As an example, McGillem et al. [5] and Yu and McGillem [18] have used these types of filters to extract evoked potentials (EPs) from EEG. These filters are designed to generate the optimal event estimate in the mean-square error sense. Both time-invariant and time-variant versions of these filters are available.
The use of the time-invariant version is restricted to applications where both the signal and noise are wide-sense stationary. The time-variant version was created [18] to deal with non-stationary signals. The designs of both types of filters are detailed below.

The design of these linear MMSE filters relies on knowledge of the auto-covariance matrix of the observed signal, y(t), and the cross-covariance matrix of the observed signal, y(t), and the event signal, x(t). These are usually estimated experimentally or are computed assuming some underlying model for the event. The performance of these filters is heavily dependent on the accuracy of the covariance matrix estimates [5].

2.2.4.1 Linear Time-Invariant MMSE Filters

Based on our assumed additive model of the observed signal (equation 1), a linear, time-invariant MMSE filter, with an impulse response h, can be designed so that

    x̂(t) = Σ_{i=−m}^{m} h(i) y(t − i) = hᵀy    (4)

where

    x = [x(t−m) ··· x(t) ··· x(t+m)]ᵀ
    h = [h(−m) ··· h(0) ··· h(m)]ᵀ    (5)
    y = [y(t−m) ··· y(t) ··· y(t+m)]ᵀ

and x̂(t) is the estimate of x(t), y(t) is the observed sequence and m is the order of the filter. The filter's impulse response vector h can be derived by minimizing the mean-square error under the stationarity assumption. The result is [19]

    h = K_yy⁻¹ g    (6)

where g, the cross-correlation vector of the event and observed signal, is calculated from the measured data via

    g(n) = E{ y(t) x(t − n) },   −m ≤ n ≤ m    (7)

and K_yy = E{ y yᵀ } is the auto-covariance matrix of the observed signal. The resulting mean square error of the estimate is given by

    e_min = E{ x²(t) } − gᵀ K_yy⁻¹ g    (8)

which provides a way to evaluate the theoretical performance of the filter.

Normally N observed signals are analyzed when designing these filters. If N is large, the expected value of the mean-squared error (MSE) is minimized instead of the error itself.
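The time-invariant design of equations (4) and (6) can be sketched numerically. Here the covariance quantities are estimated from sample data rather than known exactly; the sinusoid-plus-white-noise test signal and the filter order are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 8                                       # filter order (2m+1 taps)
T = 4000
x = np.sin(2 * np.pi * np.arange(T) / 40)   # a stationary 'event' process
y = x + rng.standard_normal(T)              # observed signal y(t) = x(t) + n(t)

# Build the lagged observation vectors y_t = [y(t-m) ... y(t+m)]^T of eq. (5).
Y = np.stack([y[t - m:t + m + 1] for t in range(m, T - m)])
xc = x[m:T - m]

Kyy = Y.T @ Y / len(Y)                      # sample auto-covariance matrix
g = Y.T @ xc / len(Y)                       # sample cross-correlation vector, eq. (7)
h = np.linalg.solve(Kyy, g)                 # h = Kyy^{-1} g, equation (6)

x_hat = Y @ h                               # x_hat(t) = h^T y, equation (4)
mse_filtered = np.mean((x_hat - xc) ** 2)
mse_raw = np.mean((y[m:T - m] - xc) ** 2)
print(mse_filtered < mse_raw)               # → True: the filter reduces the MSE
```

The sketch also shows why such a filter is poorly matched to HVEs: h is a fixed set of weights derived from average second-order statistics, so a signal whose shape and latency change every trial violates the premise of the design.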
In these situations, K_yy is calculated as the ensemble average of the N auto-correlation matrices and the vector g is calculated as the ensemble average of the N cross-correlation vectors between each observed signal, y(t), and the event, x(t). Since the expected value of the MSE is minimized, the g vector can be calculated as [5]

    g(n) = E{ x(t) x(t − n) }.    (9)

2.2.4.2 Linear Time-Variant MMSE Filters

Effectively, the time-variant MMSE filters are simply N time-invariant filters 'stacked' together, where N is the number of data points in the observed sequence. In this way these types of filters can accommodate non-stationarities in the signal. Following this approach, the filter, with impulse response vectors h_k, is designed so that the best estimate of the signal at sample k is generated by

    x̂(k) = Σ_{t=1}^{N} h_k(t) y(t) = h_kᵀ y    (10)

The filter's impulse response matrix H can be derived by minimizing the mean-square error. The result is [19]

    H = [h_1, h_2, ..., h_N] = K_yy⁻¹ K_yx    (11)

where K_yy is the auto-covariance matrix of the observed signal and K_yx is the cross-covariance matrix of the observed data and the event signal. This filter will in general be linear, time-varying and noncausal [18]. The minimum mean square error achieved is given by [18]

    e_min = tr( K_xx − K_yxᵀ K_yy⁻¹ K_yx )    (12)

2.2.5 Autoregressive Modeling with Exogenous Inputs (ARX)

For the ARX approach to signal extraction, the observed signal is assumed to be generated by the model in Figure 2.2. In this model, u(t) is an estimate of the event, x(t), e(t) is a white noise source, and y(t) is the observed signal. A(z) and B(z) are polynomial functions of order n and m respectively, with parameters a_1, a_2, ..., a_n and b_1, b_2, ..., b_m. The strength of the ARX technique is that it can account for one (or more) additional noise source(s). In Figure 2.2, s(t) represents a single additional noise source and C(z) is its corresponding transfer function of order p.
Figure 2.2: ARX model of y(t) generation.

This model can be represented with the following equation,

    y(t) = −Σ_{i=1}^{n} a_i y(t−i) + Σ_{j=1}^{m} b_j u(t−j) + Σ_{k=1}^{p} c_k s(t−k) + e(t)    (13)

The system functions A(z), B(z) and C(z) are determined in the following way. Using the ensemble average estimate of x(t) for u(t) (or some other estimate of the event, x(t)) and several measured s(t) and y(t) samples, the parameters of A(z) (namely, a_1, a_2, ..., a_n), B(z) (b_1, b_2, ..., b_m) and C(z) (c_1, c_2, ..., c_p) are tuned to make e(t) as white as possible.

With A(z), B(z) and C(z) identified, x(t) can be estimated by rearranging equation 13 as follows:

    x̂(t) = (1/B(z)) [ A(z) y(t) − C(z) s(t) − e(t) ]    (14)

This relationship can be illustrated as the block diagram shown in Figure 2.3.

Figure 2.3: Model of x(t) estimation using calculated transfer functions A(z), B(z) and C(z).

The main power of the ARX technique is its ability to remove the effects of secondary noise sources (known or measured). For example, Cerutti et al. [20] used an ARX model where the observed signal was composed of an evoked potential, x(t) (the signal of interest), background EEG activity, e(t) (the noise), and deterministic EOG activity, s(t) (the additional noise signal). By using an ARX model, they were able to handle the additional EOG 'noise' and successfully extract the evoked potential from the observed signal with superior accuracy compared to other systems which did not account for the EOG contamination.

ARX can effectively handle small variations in signal features and latency because the transfer functions, A(z), B(z) and C(z), can account for variations of the signal features and latency. The ability to handle these variations is limited by the size of the variations and the complexity of the transfer functions.
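The parameter-tuning step for equation (13) can be sketched as an ordinary least-squares fit. This is a simplification: the true orders are taken as known, the additional source s(t) is omitted (its C(z) block would simply add a third block of regressor columns), and the system coefficients below are made-up values for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(9)
T = 2000
a_true, b_true = [0.5, -0.3], [1.0, 0.4]    # illustrative ARX(2,2) coefficients

# Simulate y(t) = -a1 y(t-1) - a2 y(t-2) + b1 u(t-1) + b2 u(t-2) + e(t)
u = rng.standard_normal(T)
e = 0.1 * rng.standard_normal(T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = (-a_true[0] * y[t - 1] - a_true[1] * y[t - 2]
            + b_true[0] * u[t - 1] + b_true[1] * u[t - 2] + e[t])

# Least-squares ARX fit: regressors [-y(t-1), -y(t-2), u(t-1), u(t-2)]
Phi = np.column_stack([-y[1:-1], -y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(np.round(theta, 2))   # ≈ [0.5, -0.3, 1.0, 0.4]
```

With an accurate u(t) the recovered parameters are close to the generating ones; the text's caveat applies in reverse, since a poor event template u(t) would bias the fit and hence the event estimate.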
Generally the transfer functions cannot account for large variations without being extremely complex. The more complex the functions are, the harder it is to accurately estimate their orders and parameters, which in turn leads to degraded performance, so there is a limited amount of variation that ARX can withstand.

Notice that the performance of the ARX technique is dependent on the accuracy of u(t). Inaccuracies in estimating this signal template will directly cause inaccuracies in the system model, which in turn will decrease the accuracy of the generated event estimate, x̂(t).

2.3 A Comparison of Methods

The following table summarizes key features and dependencies of the methods described above.

Existing Signal Extraction Techniques
(SS = Spectral Separation, WF = Wiener Filtering, EA = Ensemble Averaging, TI = Time-invariant MMSE filters, TV = Time-variant MMSE filters)

                                            SS      WF      EA      TI      TV      ARX
Event-dependent knowledge:
  Requires a model or template              yes1    yes1    no      yes     yes     yes
  Requires power spectrum estimation        yes     yes     no      no      no      no
  Limited to stationary events              yes     yes     no      yes     no      yes
Noise-dependent knowledge:
  Requires a model or template              yes1    yes1    no      no      no      yes
  Requires power spectrum estimation        yes     yes     no      no      no      no
  Limited to stationary signals             yes     yes     no      yes     no      yes
Useful for more than one signal             yes2    yes2    yes     no      no      no
Expects a signal to be present              no      no      no      yes     yes     yes
Significant a priori information required   yes     yes     no      yes     yes     yes
Requires relatively constant latency        no      no      yes     yes     yes     yes
Can be used for single-trial extraction     yes     yes     no      yes     yes     yes
Design complexity                           moderate moderate simple moderate moderate complex
Applicable to HVEs                          no      no      no      no      no      no

1. Normally a model or template of the signal is required in order to determine the power spectrum estimate.
2. Only if the signals have the same power spectra.
Table 2.1: A summary of the existing signal-extraction techniques

Chapter 3 Theory of OPM

Theoretically, the OPM methodology is suitable for all types of signals (i.e., linear or non-linear, stationary or non-stationary signals) where the event is uncorrelated with the noise sequence. Because of this, the OPM algorithms are described at a functional level to maintain generality. The first two sections of this chapter provide the functional descriptions of the original OPM algorithm and the modified OPM algorithm. The last section provides the theory specific to an implementation for linear and stationary signals.

3.1 The Original OPM Algorithm

The general approach of OPM is to model the noise, n(t), and then, using this model, extract an estimate of the noise from the observed signal. The difference between the observed signal and the extracted noise estimate is used as the estimate of the event.

Figure 3.1: The original OPM methodology (determine the noise model; estimate the noise sequence, n̂(t), from y(t); calculate x̂(t) = y(t) − n̂(t)).

Figure 3.1 provides an overview of the original OPM methodology. As the figure shows, the method has three functional blocks: noise model determination, noise estimation and the event estimate calculation. The last block is simply the subtraction of the noise estimate from the observed signal. The first two blocks are more sophisticated, so their functionality is described in more detail below.

3.1.1 Noise Model Determination

The function of the noise model determination block is to identify the best model of the noise from the observed signal. Modeling includes selecting an appropriate model type, orders and parameter values. Several papers and books have been written on time-series modeling, and Section 3.1.1.1 has been included as a review of the most applicable modeling techniques.
Since the observed signal may contain an event, the noise modeling technique must be robust against the existence of events if it is to be useful. That is, the technique should ideally generate consistent models regardless of whether an event is present. The concept of robust modeling is elaborated in the next section.

Note that in certain applications the complete noise model, or parts of the noise model, are known. In these applications, the noise model determination step can be restricted to identifying the parts of the model that are unknown. For example, if the noise is known to fit a 10th order autoregressive (AR) model, only the model parameters need to be determined to specify the noise model. So the Noise Model Determination block can be reduced to the parameter estimation block shown in Figure 3.2.

Figure 3.2: Example parameter estimator

3.1.1.1 Review of Modeling Stochastic Time Series

The goal of time series modeling is to represent the structure of a stochastic time sequence with a mathematical model. For stochastic time series, only part of the signal has a structure that can be modeled; the rest (i.e., the residual) is purely random. So if the sequence could be modeled perfectly, the residual of the modeled sequence would be a purely random sequence. If the residuals from the model are not strictly random, there is further structure in the time series which has not been "explained" by the model.

Modeling requires two steps: a) selecting the appropriate model type and order(s) and b) estimating the model parameters. Many books and papers are available for identifying the appropriate model type and order(s), such as [21][19][22][23]. The reader is directed to two excellent books by Priestley [24][25] which provide a complete review of modeling linear and stationary signals and non-linear and non-stationary signals.
For each type of model, there are many methods for estimating the model parameters from a signal. For example, some common methods for linear autoregressive (AR) models are least squares (LS) regression, the maximum entropy method (MEM) and the maximum likelihood (ML) method. These techniques, as well as several others, are thoroughly described in [24]. Note that these techniques are optimal when applied to Gaussian random processes [1].

The performance of these techniques has been shown [1] to break down when the signal being modeled is contaminated with additive noise. For example, in the case of OPM, the signal used to estimate the parameters is the observed signal, which is the noise signal plus a (contaminating) event. Note that the event, in terms of the noise signal, appears as additive outlier content. Thus OPM requires a robust parameter estimation method that can provide good parameter estimates of the noise despite the additive effect of the event. Hogg [26] provides a good background tutorial on robust parameter estimation methods for AR models and indicates that the most promising techniques are those given by modifications to the ML estimator. The generalized maximum likelihood (GM) estimator, described in [1] and [27], is an example of such an estimator.

3.1.2 The Cleaner

The heart of the OPM algorithm is the Cleaner, a predictive filter which performs the noise estimation function; that is, it generates an estimate of the noise sequence from the observed signal. The term Cleaner is a simplified version of the term cleaner-filter originated by Martin and Thomson [27] to describe such a predictive filter. The Cleaner generates the noise estimate at time t (i.e., n̂(t)) via a two-step process.

1. First, the noise model (identified in the 'Noise Model Determination' step) is used to predict the next observed value, y(t), from the previous observed values, i.e., y(t−1), y(t−2) and so on. The difference between the predicted observed value, ŷ(t), and the actual observed value is the prediction error (PE).
The difference between the predicted observed value, ŷ(t), and the actual observed value, y(t), is the prediction error:

PE(t) = y(t) - ŷ(t)
      = (n(t) + x(t)) - (n̂(t) + d(t))
      = n(t) - n̂(t) + x(t) - d(t)     (15)

where d(t) is a difference term defined by

d(t) = ŷ(t) - n̂(t)     (16)

Note that when an event is not present, that is x(t) = 0 and y(t) = n(t), d(t) is zero and the PE sequence will simply be the residual of the noise model. When an event is present, the value of d(t) is relatively small compared to x(t) and hence the PE sequence is effectively the residual term, n(t) - n̂(t), plus the event. The fact that the presence of an event changes the magnitude of the PE sequence is used to identify and remove the event using influence functions as described below.

2. Second, the value of n(t) is predicted using the noise model, the past noise estimate values and the past prediction errors (stored in the PE sequence).

What makes OPM unique is that the Cleaner uses a time-invariant influence function to decide how much influence the prediction error values contribute to the prediction of n(t) in the second step. These functions effectively down-weight prediction error values which are larger than a predetermined value. For example, Figure 3.4 illustrates a typical influence function (IF) used in a Cleaner. This IF contains three linear regions defined by two thresholds. Above threshold b and below -b, the PE value is considered to be abnormally large and should be ignored, so the IF value is zero. Between thresholds -a and a, the PE value is considered to be acceptable, so the IF returns the PE value unchanged. Between thresholds a and b (and -b and -a) is the grey area, where increasing PE magnitudes are assigned a decreasing weight.
Note that when these thresholds are set too large, the Cleaner will 'accept' large PE(t) variations, including those caused by the presence of an event. In this case, the Cleaner is considered insensitive because it cannot distinguish the event from the noise signal. On the other hand, if these thresholds are too small, the Cleaner will reject medium and large PE(t) variations. In this case the Cleaner is considered to be very sensitive. Note that if the Cleaner is too sensitive, parts of the noise sequence will be removed along with the event, and if the Cleaner is not sensitive enough, the event will not be removed and the noise estimate will contain the event.

Figure 3.3: The OPM Cleaner

Figure 3.4: An example influence function

3.1.3 Comments on OPM

The following is a list of limitations, assumptions and dependencies associated with the original OPM:

1. The original OPM can rarely generate the optimal event estimate because the Cleaner, with a constant level of sensitivity due to the time-invariant influence functions, cannot be set to be both sensitive enough to detect and remove the event and insensitive enough so that it does not erroneously remove parts of the noise sequence. Thus the performance of the Cleaner is compromised due to these opposing conditions.

2. OPM is heavily dependent on the ability to accurately model the noise.

3. OPM assumes that a Cleaner is available for the type of noise. For example, if the noise is non-linear, a Cleaner that can handle non-linear signals is required.

3.2 The Modified OPM Algorithm

In the modified OPM algorithm the Cleaner was modified to use time-varying influence functions.
With these functions, the Cleaner is not limited to the compromised performance of the original OPM algorithm (mentioned in Section 3.1.3, item 1) because the Cleaner can become more sensitive when the event is present and less sensitive when no event is present. For example, consider a Cleaner at two different times, t1 and t2, and assume that the IFs at t1 and t2 are as illustrated in Figure 3.5. Because the size of the IF used at t1 (i.e., IF1) is smaller than the IF used at t2 (i.e., IF2), the Cleaner at t1 is more sensitive to parts of the signal that do not fit the model than the Cleaner is at t2. In other words, at t2 the Cleaner is more 'willing' to accept large PE values as part of the noise sequence.

Figure 3.5: Example influence functions for two different times: t1 and t2.

In order to use time-varying influence functions, the general location of the event, which is assumed to be unknown, has to be determined so that the sensitivity of the Cleaner can be increased at the appropriate times. This can be done as follows. Theoretically, the PE sequence will be purely random when there is no event present and less random when an event is present. Therefore, the information contained in the characteristics of the PE sequence can be used to locate the vicinity of an event. For example, if PEn is defined as the PE sequence produced by the Cleaner when the observed signal is strictly noise, i.e., y(t) = n(t), then the PE sequence produced by the Cleaner should have the same mean, variance and autocorrelation sequence (i.e., power spectrum) as PEn when there is no signal present, and it should have measurably different statistics when there is an event present. Note, this approach relies on the assumption that the noise is modeled adequately so that the PE sequence is purely random when no signal is present. Figure 3.6 provides an overview of the modified OPM algorithm.
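The statistical footprint an event leaves in the PE sequence can be demonstrated with a small simulation. The sketch below uses an assumed AR(1) noise model and a sinusoidal burst as the event (all coefficients are illustrative, not from the thesis); the power of the PE sequence is noticeably higher over the event region, which is exactly the property the modified algorithm exploits:

```python
import math
import random

def pe_sequence(y, a1):
    """One-step prediction errors PE(t) = y(t) - a1*y(t-1) for an AR(1) model."""
    return [y[t] - a1 * y[t - 1] for t in range(1, len(y))]

random.seed(2)
# Noise: n(t) = 0.8*n(t-1) + e(t), unit-variance residual
n = [0.0]
for _ in range(2000):
    n.append(0.8 * n[-1] + random.gauss(0.0, 1.0))

# Observed signal: noise plus a sinusoidal 'event' over samples 800..999
y = list(n)
for t in range(800, 1000):
    y[t] += 5.0 * math.sin(0.3 * t)

pe = pe_sequence(y, 0.8)
msq = lambda seg: sum(v * v for v in seg) / len(seg)
# PE power in the event region clearly exceeds the event-free region
print(msq(pe[800:1000]) > 1.3 * msq(pe[0:700]))  # True
```

Outside the event region the PE power stays near the residual variance; inside it, the extra term x(t) - d(t) inflates the variance, so a windowed variance measure can localize the event.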
Before this algorithm can be described, the non-whiteness function, W(t), and the influence function (IF) sensitivity control function, I(t), need to be introduced. W(t) is a control function which represents the difference between the PE sequence characteristics and the PEn characteristics. When W(t) is small, the characteristics of the PE sequence match the characteristics of PEn, and so no event is assumed to be present. When W(t) increases, the characteristics of PE differ from PEn and this difference is assumed to be caused by the presence of an event. Since W(t) implicitly represents the position of the event, it is used to control the sensitivity of the Cleaner. The specific formulation of W(t) is dependent on the implementation. See Section 3.4.1.5 for a description of W(t) for the linear and stationary implementation used in the simulation studies. The IF control function, I(t), defines the size of the influence function (see Section 3.4.1.3) and, hence, the Cleaner's sensitivity at each time instant. For example, if I(t) is large, the influence function at time t is large and hence the Cleaner is highly insensitive to additive events. Whereas when I(t) is small, the influence function at time t is small, which means that the Cleaner is more sensitive to additive events. I(t) is inversely related to W(t) and is calculated from W(t) before the second pass through the Cleaner. As in the case of W(t), the specific formulation of I(t) is dependent on the implementation. Refer to Section 3.4.1.6 for an example of I(t) for a linear and stationary implementation. The first and last blocks of the modified algorithm (see Figure 3.6) are identical to the original OPM. The main difference is that in the noise estimate stage, the observed signal is passed through the Cleaner twice. The first pass is used to locate the vicinity of the event and the second pass is
Prior to the first pass, the IF-control function, I(t), is set to a constant, initial value, which makes the influence function time-invariant for the first pass. After the first pass through the Geaner, the characteristics of the PE sequence are measured relative to PEn and this information is stored in W(t) as mentioned above. This information is then translated into the IF-control function, for the second pass. The Cleaner then processes the observed signal for a second time with the Cleaner's influence function size controlled by I(t). The noise estimate generated from the second pass is used as the final noise estimate. determine the noise model Noise Estimator set I(t) to initial value i Clean y(t): generate n(t)andPE(l) I calculate W(t) = g(PE(t)) calculate I(t) = fiCW(t)) A Clean y(t): generate S(t)andPE(t) calculate x(0 = y(t)-S(0 Figure 3.6: The modified OPM algorithm 3.2.1 Comments on the Modified OPM Algorithm As mentioned above, the modified OPM algorithm is not restricted to sub-optimal event estimates like the original algorithm. It is, however, as heavily dependent on the ability to accurate model the noise as in the original algorithm. 20 Chapter 3: Theory of OPM 3.3 A Comparison of Methods Table 3.1 displays the summary from the end of Chapter 2 with the OPM methods included. Note that spectral separation and Wiener filtering have been omitted due to lack of space. 21 Chapter 3: Theory of OPM Existing Signal Extraction Methods OPM Ensemble Averaging Time-invariant MMSE niters Time-variant MMSE niters ARX Orig. Mod. 
Table 3.1: A summary of the existing signal-extraction techniques compared to the OPM techniques.

                                      Ensemble   TI-MMSE   TV-MMSE            OPM      OPM
                                      Averaging  filters   filters    ARX     Orig.    Mod.
Event-dependent knowledge:
  Requires a model or template        no         yes       yes        yes     no       no
  Requires power spectrum estimation  no         no        no         no      no       no
  Limited to stationary events        no         yes       no         yes     no       no
Noise-dependent knowledge:
  Requires a model or template        no         no        no         yes     yes      yes
  Requires power spectrum estimation  no         no        no         no      no       no
  Limited to stationary signals       no         yes       no         yes     no       no
Useful for more than one signal       yes        no        no         no      yes      yes
Expects a signal to be present        no         yes       yes        yes     no       no
Significant a priori information
  required                            no         yes       yes        yes     no       no
Requires relatively constant latency  yes        yes       yes        yes     no       no
Can be used for single-trial
  extraction                          no         yes       yes        yes     yes      yes
Design complexity                     simple     moderate  moderate   complex complex  complex
Applicable to HVEs                    no         no        no         no      yes      yes

Notes: 1. Normally a model or template of the signal is required in order to determine the power spectrum estimate. 2. Only if the signals have the same power spectrum.

3.4 A Linear and Stationary OPM Implementation

The previous sections have described the functionality of the original OPM and the modified OPM algorithms. This section demonstrates an example of how these algorithms can be implemented for linear and stationary signals. In fact, the simulation experiments (described in Section 4.1) are based on the components described in this section.

3.4.1 Description of Implementation

Referring to Figure 3.1 and Figure 3.6, the following elements need to be defined to completely implement the OPM algorithms:

1. a noise modeling routine for linear and stationary noise
2. a linear and stationary time-invariant Cleaner (for the original OPM algorithm)
3. a linear and stationary time-variant Cleaner (for the modified OPM algorithm)
4. a formulation of W(t) (for the modified algorithm)
5.
a formulation of I(t) (for the modified algorithm)

This implementation assumes that the noise can be modeled by a pth order AR model, where p is known. Thus only the model parameters need to be determined from the observed signal, so the noise modeling routine can be reduced to a parameter estimation routine. The GM1 estimator (an extension of the robust generalized maximum-likelihood (GM) estimator) was selected for this purpose because of its superior performance to GM. The Cleaner used in this implementation was taken directly from Martin [27]. The modification only affects the influence functions used within the Cleaner and does not otherwise change the structure of Martin's filter. The following sections provide the detailed workings of the GM1 Estimator and Martin's Cleaner and the formulation of W(t) and I(t).

3.4.1.1 Martin's Cleaner

In this section, the internal workings of the Cleaner are described. Remember that the function of the Cleaner is to provide an estimate of the noise sequence from the observed signal. The cleaner-filter put forward by Martin and Thomson [27] was chosen for this implementation because it was the Cleaner used in Birch's implementation [1] of the original OPM algorithm. It relies on the additive-outlier (AO) model, which for convenience is repeated below:

y(t) = x(t) + n(t)     (17)

Martin's Cleaner is based on a pth order AR model for the noise, n(t). In this model, the noise is recursively defined as

n(t) = Σ_{k=1}^{p} a_k n(t-k) + e(t)     (18)

where the a_k are the p AR model parameters and e(t) is the residual error. Thus, the observed signal can be written as

y(t) = x(t) + Σ_{k=1}^{p} a_k n(t-k) + e(t) = x(t) + a(z)n(t) + e(t)     (19)

where

a(z) = Σ_{k=1}^{p} a_k z^{-k}     (20)

and the z operator is defined as

z^{-k} n(t) = n(t-k)     (21)
Martin's algorithm relies on the pth order autoregressive approximation to a noise process, n_t, represented in the following state-variable form:

n_t = A n_{t-1} + u_t     (22)

where

n_t = (n_t, n_{t-1}, ..., n_{t-p+1})^T
u_t = (e_t, 0, ..., 0)^T

and A is the p x p companion matrix

A = [ a_1  a_2  ...  a_{p-1}  a_p ]
    [  1    0   ...     0      0  ]
    [  0    1   ...     0      0  ]
    [ ...  ...  ...    ...    ... ]
    [  0    0   ...     1      0  ]     (23)

The estimate n̂_t of the vector n_t is calculated according to the following recursion:

n̂_t = A n̂_{t-1} + m̄_t Ψ((y_t - ŷ_t^{t-1}) / s_t)     (24)

where m̄_t = m_t / s_t, with m_t being the first column of the p x p prediction-error covariance matrix M_t. M_t is computed recursively as

M_{t+1} = A P_t A^T + Q     (25)

where the filtering-error covariance matrix P_t is defined as

P_t = M_t - w((y_t - ŷ_t^{t-1}) / s_t) m̄_t m̄_t^T     (26)

The psi function Ψ() and weight function w() are the influence functions described above, and they are selected to optimize the performance of the Cleaner. Both these functions should be bounded and continuous, and it is highly desirable that both Ψ() and w() have zero values outside a bounded, symmetric interval around the origin [27]. Boundedness insures that no single observation can have an arbitrarily large effect or influence on the filter. Continuity insures that many rounding or quantization errors will not have a major effect. Furthermore, Ψ(t) should look like t for small values of t. In fact, Martin, et al., [27] suggest that the weight function can be chosen so that w(t) = Ψ(t)/t. Note that since this relation is used in this implementation, only the psi function needs to be defined.

The matrix Q (defined in equation 25) is all zero except for the one-one element, whose value is equal to the variance of the residual error sequence, i.e., Q_11 = σ_e². The time-varying scale s_t, which is defined by Martin as s_t² = m_11,t (the one-one element of M_t), is important for keeping the Cleaner 'locked' onto the noise sequence.
The symbol ŷ_t^{t-1}, based on y^{t-1} = (y_1, ..., y_{t-1}), denotes a robust one-step prediction of y_t and is given by

ŷ_t^{t-1} = (A n̂_{t-1})_1     (27)

Under the additive noise model (see equation 19), with x_t and n_t independent, the best predictor of y_t is also the best predictor of n_t, and so the robust one-step predictor n̂_t^{t-1} of n_t satisfies n̂_t^{t-1} = ŷ_t^{t-1}. Finally, the cleaned data at time t is given by

n̂(t) = (n̂_t)_1     (28)

which is the first element of n̂_t.

It is important to recognize that equation 24 is effectively a robust version of a Kalman filter. That is, if Ψ() were the identity function (i.e., Ψ(t) = t), w were set equal to one, and s_t were redefined as s_t² = m_11,t + σ_0² (where σ_0² is the variance of x(t)), the above recursions would be those of the Kalman filter. Correspondingly, M_t would be the prediction-error covariance matrix and P_t would be the filtering-error covariance matrix. Unfortunately, the Kalman filter is not robust. That is, a single outlying observation y(t) can spoil not only the estimate at time t, but also the estimates after time t [27].

3.4.1.2 The Modified Cleaner Filter

In the modified algorithm the Cleaner's influence functions are changed to be time-dependent. This means equation 24 changes to

n̂_t = A n̂_{t-1} + m̄_t Ψ_t((y_t - ŷ_t^{t-1}) / s_t)     (29)

and equation 26 becomes

P_t = M_t - w_t((y_t - ŷ_t^{t-1}) / s_t) m̄_t m̄_t^T     (30)

These are the only changes required to Martin's algorithm described above.

3.4.1.3 Selecting the Cleaner's Influence Function

The IF selected for the implementation was a 3-part redescending function (equation 31). This IF, which is drawn in Figure 3.7, was taken from Birch [1]. It is worth noting that the shape of this IF is not necessarily optimal. The thresholds a, b and c were picked so that they maintained a set ratio of 1.0 : 1.2 : 1.8 (i.e., a:b:c). These were the approximate threshold ratios used by Birch [1]. Thus the shape of the IF function is constant and the size of the IF can be set by selecting the a threshold (or either one of the other two thresholds).
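A common choice for a 3-part redescending influence function is the Hampel-type form, sketched below with the a : b : c = 1.0 : 1.2 : 1.8 ratio. The exact shape between the thresholds is an assumption here (the thesis gives the function only graphically), so treat this as illustrative:

```python
def psi_3part(pe, a):
    """Hampel-style three-part redescending influence function with
    thresholds a : b : c in the fixed ratio 1.0 : 1.2 : 1.8."""
    b, c = 1.2 * a, 1.8 * a
    x = abs(pe)
    sign = 1.0 if pe >= 0 else -1.0
    if x <= a:
        return pe                            # small PE passed unchanged
    if x <= b:
        return sign * a                      # clipped at the a threshold
    if x <= c:
        return sign * a * (c - x) / (c - b)  # redescending linearly to zero
    return 0.0                               # large PE ignored entirely

assert psi_3part(0.5, 1.0) == 0.5    # inside [-a, a]: identity
assert psi_3part(-1.1, 1.0) == -1.0  # between a and b: clipped
assert psi_3part(2.0, 1.0) == 0.0    # beyond c: zero influence
```

Because the whole function scales with the single threshold a, setting a is enough to set the Cleaner's sensitivity, which is the property the IF-control function I(t) relies on.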
This concept of defining the function's size (and hence the Cleaner's sensitivity) with one parameter underlies the definition of the IF-control function, I(t), for this implementation. I(t) is defined as the a threshold at time t. Thus, for any time t, the size of the Cleaner's IF and the sensitivity of the Cleaner are defined by I(t). The I(t) function for this implementation is defined in Section 3.4.1.6.

Figure 3.7: Example 3-part influence function.

3.4.1.4 The GM1 Estimator

The GM1 AR parameter estimation method, a variation of the GM parameter estimation method (see Section 3.1.1.1), was used in this work to obtain robust parameter estimates for the noise model. The GM1 method was selected because Birch [1] has shown that the GM1 method can generate better parameter estimates from autoregressive noise contaminated with an additive signal, such as an HVE, than the GM method or the MEM method. Calculating a GM1 estimate, as described by Birch [1] and Martin, et al., [27], is a composite of three steps:

1. calculate a GM estimate of the parameters from the observed signal
2. 'clean' off the large outliers in the observed signal using a Cleaner filter and the noise model defined in step 1
3. recalculate the GM estimate of the parameters from the 'cleaned' signal (i.e., the output of the Cleaner).

So a GM1 estimator uses a GM estimator to estimate the parameters and a Cleaner filter to remove large outliers. To completely define a GM1 estimator, a GM estimator needs to be defined and the influence function used in the Cleaner needs to be defined. The GM estimator used in this implementation was taken directly from [1]. Martin's Cleaner filter was chosen and the IF used in the Cleaner was taken as the 3-part IF defined in Section 3.4.1.3.

3.4.1.5 The Formulation of W(t)

The role of the non-whiteness function W(t) is to indicate when an event is present.
The way W(t) is defined, a low value indicates that there is no event present and a larger value indicates that an event is present. This information is then mapped through I(t) to control the sensitivity of the Cleaner. Recall that W(t) is a measure of the difference between the PE characteristics and the PEn characteristics. Three obvious characteristics of the PE sequence are its mean, variance and power spectrum, so in the process of formulating W(t) each of these characteristics was experimentally evaluated. The autocorrelation series of the PE sequence, which is directly related to the sequence's power spectrum, was found to be too inconsistent to be used in W(t). The variations in the mean of the PE sequence were also too inconsistent to be useful. The variance, however, was found to be a good indicator of the difference between the PE sequence and the ideal PEn. The variance was calculated at time t as

var(t) = (1/n) Σ_{i=t-n+1}^{t} PE(i)²     (32)

where n is the window size. It was desirable to keep n small so that the variance represented localized changes in the PE sequence characteristics, but the smaller the window size, the less stable the variance was over time. To overcome this problem, the W(t) function was defined as the moving average of the variance, i.e.,

W(t) = (1/m) Σ_{j=t-m+1}^{t} var(j)     (33)

where m is the window size used to calculate the moving average. In this implementation, the two window sizes, n and m, were set equal to reduce the number of definable system parameters. As indicated in Chapter 5, more research is required in order to determine the optimal formulation of W(t).

3.4.1.6 I(t) as a Function of W(t)

As stated in Section 3.4.1.3, the IF-control function, I(t), defines the Cleaner's IF size, and hence the Cleaner's sensitivity, for all t. For example, Figure 3.8 illustrates how the time-varying IF, IF(t), defined in Section 3.4.1.3, is related to I(t).
Before the first pass through the Cleaner, I(t) is set to a constant level, a0, which makes the Cleaner's IF time-invariant. Before the second pass, I(t) is calculated from the non-whiteness function, W(t), defined in the previous section. Since the desired action is to increase the Cleaner's sensitivity when the event is present and decrease it when the event is not present, a step function is used to represent this desired binary functionality. The relationship between W(t) and I(t) is depicted in Figure 3.9. At time t, if the non-whiteness function rises above a certain threshold, WPE (theoretically indicating that an event is present), the IF-control function is set to a relatively low a threshold (i.e., a_inc), resulting in a small IF, which increases the sensitivity of the Cleaner. If W(t) is below WPE, then I(t) is set to a relatively high a threshold (i.e., a_dec), resulting in a large IF, which decreases the sensitivity of the Cleaner (see Figure 3.8).

Figure 3.8: The time-varying IF with thresholds controlled by I(t).

Figure 3.9: I(t) as a function of W(t).

The following describes how to select the four parameters: WPE, a0, a_inc and a_dec. The parameters WPE and a_dec are dependent on the value of a0, so the procedure to select a0 will be described first.

Selecting a0: Theoretically, a0 should be set as low as possible without influencing the PE sequence of a pure noise signal (i.e., when y(t) = n(t)). This way, when a pure noise signal (i.e., when no event is present) is observed, the Cleaner will not influence the PE sequence and the resulting noise estimate will be the observed signal, which means the event estimate is zero (as desired). But when the Cleaner is filtering a signal with an event added to the noise, the increased PE sequence caused by the presence of the event will be influenced by the Cleaner.
For example, Figure 3.10 a) displays the average value of W(t) as a function of the IF's a threshold for a pure noise signal. (Recall that the W(t) function indicates how much influence the Cleaner had on the PE sequence.) Notice how the W(t) value increases below the point marked X. This indicates that the PE sequence for the pure noise input was influenced by the Cleaner for a thresholds less than X. So a0 should be set equal to X.

Selecting WPE: Figure 3.10 b) displays the distribution of W(t) values when the Cleaner's IF a threshold was set to a0 (defined above). For this implementation, the WPE threshold was selected so that 95% of the W(t) values caused by white noise would not cause the Cleaner to apply any influence on the signal. A 5% error is tolerated because the 95% level was found through trial and error to be the best trade-off between signal extraction performance and suppression of estimation noise when the observed signal contained events.

Selecting a_dec: The a_dec threshold should be significantly greater than a0 so that when the algorithm 'decides' that no event is present, the Cleaner can be desensitized for that part of the signal to improve the estimation noise suppression. For this implementation, the a_dec threshold was arbitrarily set to four times a0.

Selecting a_inc: The a_inc threshold should be selected as the level where the Cleaner is best able to extract the event. The best way to determine the optimal a_inc threshold is to experimentally determine the performance of the modified algorithm for various a thresholds, then select a_inc as the a threshold which gives the best performance. Note, as demonstrated in Appendix B, the optimal a_inc is dependent on the SNR of the observed signal.
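Putting Sections 3.4.1.5 and 3.4.1.6 together, the W(t) computation, the step-function I(t), and the two-pass structure of Figure 3.6 can be sketched as follows. The Cleaner itself is stubbed out (a full implementation would follow Section 3.4.1.1), the a_inc/a_dec ratios follow Table 4.1 and the four-times-a0 rule above, and all function names are illustrative rather than taken from the thesis:

```python
def w_function(pe, n):
    """W(t): moving average (window m = n) of the windowed variance of the
    PE sequence; PE is assumed zero-mean, as for an adequate noise model."""
    var = [sum(v * v for v in pe[t - n + 1 : t + 1]) / n
           for t in range(n - 1, len(pe))]
    return [sum(var[t - n + 1 : t + 1]) / n for t in range(n - 1, len(var))]

def i_function(w, w_pe, a_inc, a_dec):
    """I(t): step function of W(t). Above the WPE threshold the Cleaner is
    sensitized (small IF, a_inc); below it, desensitized (large IF, a_dec)."""
    return [a_inc if wt > w_pe else a_dec for wt in w]

def modified_opm(y, clean, n, w_pe, a0):
    """Two-pass noise estimation following Figure 3.6.
    clean(y, I) -> (noise_estimate, pe_sequence), with I(t) the a threshold."""
    T = len(y)
    # Pass 1: time-invariant IF (I(t) = a0), used only to locate the event
    _, pe = clean(y, [a0] * T)
    w = w_function(pe, n)
    I = i_function(w, w_pe, a_inc=0.4 * a0, a_dec=4.0 * a0)
    I = [I[0]] * (T - len(I)) + I       # pad the windowed-out start samples
    # Pass 2: time-varying IF gives the final noise estimate
    n_hat, _ = clean(y, I)
    # The event estimate is the observed signal minus the noise estimate
    return [y[t] - n_hat[t] for t in range(T)]
```

With a real Cleaner in place of the stub, the second pass removes the event from the noise estimate, so that y(t) - n̂(t) recovers it; with a stub that returns the observed signal unchanged, the event estimate is identically zero.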
Figure 3.10: a) The Cleaner's influence on a pure noise sequence as a function of the IF a threshold. b) The distribution of W(t) values for the a threshold set at X. Note that W(t) is centered at 1.0 because the variance of the noise sequence was 1.0.

3.4.2 Summary of Implementation Details

The main components of the modified algorithm have been defined above for linear and stationary signals. The following list summarizes the system parameters that were not assigned specific numerical values because they are dependent on the noise signal:

• a0, a_inc, a_dec and WPE for I(t)
• n, the window size used in the calculation of W(t)
• the IF a threshold used in the GM1 estimator

These system parameters are identified in Appendix B for the noise signals used in the simulation experiments.

Chapter 4 Experimental Description

Simulation experiments were performed on the linear and stationary implementation of the modified OPM algorithm described in Section 3.4. The purpose of these simulation experiments was to evaluate the strengths and limitations of the modified OPM algorithm and to compare the modified OPM to a) the original OPM, and b) time-invariant MMSE filters. The goal of this study was to determine how the performance of the modified algorithm varied with event length, SNR level and the spectral similarity of the event to the noise sequence. This included experiments to determine how well the algorithm performed when no HVE was present. Linear and stationary signals were evaluated because the study has been restricted to one type of implementation (i.e., a linear and stationary implementation) in order to keep the evaluation manageable.
This implementation, however, is of considerable importance since many applications assume linear and stationary signals. Hence, there is a strong interest in this type of system. The time-invariant MMSE filter was chosen as the second method for the comparison for three reasons:

1. time-invariant MMSE filters are suitable for the simulated linear and stationary signals
2. MMSE filters have been used by other researchers [5][11][14][18] to extract events from coloured noise
3. the MMSE filter had a straightforward implementation that can be applied to HVEs.

Techniques such as ARX were avoided because they model variations of an event, and hence would require a complex simulation protocol just to systematically represent variations in the events. Since the focus of this work is to study the extraction of HVEs with a minimum of a priori information about the event, these techniques are not well suited for comparison with the OPM in this context. Also, note that a system designed with inaccurate event variations would contain a bias, which would be related to how the events were simulated.

Figure 4.1: Experimental System (an event is scaled by the SNR scaler, added to the output of the noise generator, and the resulting observed signal is passed to each extraction technique).

4.1 Description of Experimental System

The experimental system, illustrated in Figure 4.1, was used to evaluate the performance of the modified OPM and its performance relative to the original OPM and a time-invariant MMSE filter. In this system a predetermined event signal is scaled to a desired signal-to-noise ratio (SNR) and then added to a simulated noise sequence to create an observed signal (as shown in the diagram). This observed signal is then submitted to each extraction technique. The event estimates from each of the techniques are then individually compared to the original scaled event to determine how well each technique performed. In these experiments, 21 different event signals and three different types of noise were used.
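The 'SNR Scaler' block can be sketched as follows: the event is scaled so that its average power over its non-zero support, relative to the noise power, equals the target SNR in dB. This formulation is an assumption for illustration; the thesis does not spell out the scaler's exact definition:

```python
import math

def scale_event_to_snr(event, snr_db, noise_power=1.0):
    """Scale a finite-duration event so that its average power over its
    (non-zero) support gives the requested SNR relative to noise_power."""
    support = [v for v in event if v != 0.0]
    event_power = sum(v * v for v in support) / len(support)
    target_power = noise_power * 10.0 ** (snr_db / 10.0)
    k = math.sqrt(target_power / event_power)
    return [k * v for v in event]

# A 0 dB scaling leaves a unit-power event unchanged
ev = [0.0, 1.0, -1.0, 1.0, 0.0]
scaled = scale_event_to_snr(ev, 0.0)
print(scaled == ev)  # True
```

The scaled event is then added to a freshly generated noise sequence to form the observed signal for one run, as in Figure 4.1.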
The specific performance measures are described below. The detailed designs of the modified OPM, the original OPM and the MMSE filter modules are described in Section 4.1.1. The three different types of noise were specified by three AR models with different orders. An 8th and a 12th order AR model of human EEG were used because they represented real noise signals. A white noise model (i.e., a 0th order AR model) was selected as the third type of noise because of the wide range of applications with white noise. The parameters of these models are defined in Appendix A and the power spectrum of each is displayed in Figure 4.2.

Figure 4.2: The power spectra of the three noise models used in the simulations: a) AR(8) model, b) AR(12) model and c) white noise model.

The 21 event signals were chosen to test the limits of the techniques with respect to:

1. the length of the event,
2. the SNR with respect to the noise sequence, and
3. the spectral overlap of the event's power spectrum and the noise's power spectrum.

Each of these event signals represented a different combination of length and spectral overlap. The first event (of the 21) was an all-zero event. This special-case signal was used to test how much error the techniques would produce when no signal is present. The other twenty signals were generated from four AR models defined in Appendix A. Each model represents a different spectral composition, as seen in the power spectra shown in Figure 4.3. Each of these event spectra has a different amount of spectral overlap when compared to the noise model spectra shown in Figure 4.2. For example, event #4 has the most spectral overlap with the AR(8) and AR(12) noise models and event #7 has the least. For each event spectrum, there were five signals of different lengths: 10%, 20%, 30%, 40% and 50% of the length of the noise sequence.
Note, these lengths are measured relative to the length of the noise sequence for generality. Hence, this results in the set of 20 events: five different lengths times four different event models.

Figure 4.3: The power spectra of the four event models used in the simulations: a) event #4; b) event #5; c) event #6; d) event #7.

The size of each of the 20 non-zero events was varied between -10dB and +10dB to test the techniques at various SNRs. For the simulations, the following seven SNRs were found to be adequate for testing the SNR dependence: -10dB, -7dB, -5dB, -3dB, 0dB, 5dB and 10dB. To review, each experiment required:

1. one of the 21 event signals
2. one of the three noise models
3. one of the seven SNRs

In total, 423 experiments were required to test all of the combinations of noise model, event and SNR. Each experiment was composed of 121 runs; a run is effectively one pass through the system displayed in Figure 4.1. During each run, the scaled event was combined with a new noise sequence (randomly generated using the noise model) and the resulting observed signal was submitted to the three techniques. Thus, for each experiment, 121 different observed signals were tested and 121 sets of performance measures (one for each run) were generated. The results are presented below.

4.1.1 Design Details of the Modified and Original OPM Modules

The modified OPM algorithm and the original OPM algorithm were implemented as described in Section 3.4.1. As pointed out in Section 3.4.2, certain system parameters had to be defined for each of the three noise models. The parameter selection procedure was described in Section 3.4.1. The selection of these parameters is presented in Appendix B. The final system parameters for the modified and original OPM algorithms are summarized in Table 4.1.
Table 4.1: Experimental system parameters for the modified and original OPM algorithms.

System parameter                      White Noise        AR(8)              AR(12)
I(t): a0                              2.00               1.90               1.90
I(t): a_dec                           8.00               7.60               7.60
I(t): a_inc                           0.80               0.80               0.80
Mod OPM: WPE                          1.19               1.17               1.21
Mod OPM: W(t) definition              variance of PE(t)  variance of PE(t)  variance of PE(t)
Mod OPM: smoothing window size        24                 24                 24
Orig OPM: Cleaner's IF a threshold    0.80               0.80               0.80
Both: Estimator's IF a threshold      2.5                2.5                2.5

4.1.2 Design of the MMSE Filter Module

A unique filter was designed for each event and noise model combination, that is, one filter per experiment. Each filter was designed as described in Section 2.2.4.1, using 50 sample observed signals, each created from the event combined with 50 randomly generated noise sequences. Since these filters were created with the exact event and not an estimated template, it can be said that these MMSE filters were created with 'perfect' signal knowledge.

4.1.3 Performance Measures

The Performance Evaluator block of Figure 4.1 calculates four performance measures between each of the three event estimates and the original scaled event for each run. Two of these measures are calculated for the primary region of interest (ROI), that is, during the time period when the event is present (see Figure 4.4), and two are calculated for the secondary ROI, that is, during the time period when the event is not present.

Figure 4.4: Defining primary and secondary regions of interest (ROIs)

The four performance measures are:

1. Primary Correlation (Cp): The primary correlation is the normalized correlation between the event and the event estimate in the primary ROI. This measure indicates how well the pattern of the event was extracted by the signal extraction technique.

2.
Primary Correlation (Cp) minus Maximum Secondary Correlation (Cs): The maximum secondary correlation (denoted Cs) represents the maximum normalized correlation between the event and the noise in the secondary ROI of the event estimate. The difference between the primary correlation and the maximum secondary correlation (i.e., Cp-Cs) is one test of the quality of the event estimate. For example, if the Cp-Cs value is zero, then the event estimate in the primary ROI and part of the noise in the secondary ROI are indistinguishable in terms of correlation. Thus, even if the primary correlation is high, the event estimate is relatively poor because part of the estimation noise is indistinguishable from the event, i.e., a 'false positive'. Note, due to the method used to calculate Cs, the Cs values for the experiments with signal lengths of 50% were corrupted, so they were not included.

(Footnote: each correlation performance measure is normalized by the energy in the event. That is, correlations are calculated as

    r = [ sum_{i=1}^{T} x(i) x̂(i) ] / [ sum_{i=1}^{T} x(i)^2 ]        (34)

where r is the correlation measure, x(t) is the actual event signal and x̂(t) is an estimate of the event signal.)

3. Relative Primary Absolute Error (Ep): This measure is the average absolute error per sample in the primary ROI scaled by the square root of the SNR. The Ep measure indicates how accurate the event estimate is on a sample-by-sample basis relative to the size of the event. By scaling the error measure, the primary error can be compared to a standard event size of 1.4 units (i.e., the average event size at 0dB) regardless of the SNR. Thus, an Ep value equal to 1.4 indicates that the average absolute primary error is equal to the average event magnitude regardless of the SNR.

4. Secondary Absolute Error (Es): This measure is the average absolute error per sample in the secondary ROI of the event estimate.
Es indicates the amount of estimation noise present in the secondary ROI of the event estimate. Since the event signal is zero in the secondary region, the maximum Es value would occur if all of the noise sequence was included in the event estimate. Thus the maximum Es value would be 1.4, i.e., the average size of the noise sequence for all experiments.

For each experiment, the mean of each of these performance measures is calculated over the 121 runs. A good event estimate will have large Cp and Cp-Cs values (limited to 1.0) and low Ep and Es values (ideally, zero). A poor quality event estimate is indicated by low Cp or Cp-Cs values or large Ep or Es values.

4.2 Experimental Results

Figure 4.6 through Figure 4.10 are examples of the simulation results. The performance results for all 423 experiments are summarized in Appendix C. The major findings are summarized below. Each of these figures plots the observed signal y(t), the noise sequence n(t), the event x(t), W(t), I(t), and the event estimates produced by the modified algorithm, the original algorithm and the MMSE filter.

Figure 4.5: Results of experiment with event #7, 30% length, in white noise at +5dB.

Figure 4.7: Results of experiment with event #4, 20% length, in white noise at 0dB.

Figure 4.8: Results of experiment with event #7, 10% length, in AR(8) noise at -5dB.
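The four performance measures of Section 4.1.3 can be sketched as follows. This is a hypothetical implementation, assuming the event is zero outside a boolean `primary` mask; for simplicity it omits the square-root-of-SNR scaling of Ep and slides the event over the concatenated secondary samples when searching for the maximum secondary correlation (the thesis' exact windowing may differ).

```python
import numpy as np

def performance_measures(event, estimate, primary):
    """Compute Cp, Cp - Cs, Ep (unscaled) and Es for one run.
    `primary` is a boolean mask marking the primary ROI."""
    energy = np.sum(event[primary] ** 2)          # normalization, as in eq. (34)
    cp = np.sum(event[primary] * estimate[primary]) / energy
    # Maximum secondary correlation: slide the event across the secondary ROI.
    x = event[primary]
    sec = estimate[~primary]
    cs = max(np.sum(x * sec[i:i + len(x)]) / energy
             for i in range(len(sec) - len(x) + 1))
    ep = np.mean(np.abs(event[primary] - estimate[primary]))
    es = np.mean(np.abs(estimate[~primary]))
    return cp, cp - cs, ep, es
```

A perfect estimate gives Cp = 1, Cp - Cs = 1 and zero error in both ROIs.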
Figure 4.9: Results of experiment with event #7, 10% length, in AR(8) noise at -10dB.

Figure 4.10: Results of experiment with no event in AR(8) noise.

4.2.1 The Performance of the Modified OPM Algorithm

As an example of the observed trends in the performance measures for the modified algorithm, Figure 4.11 displays the performance measures for signals with a length of 20% in white noise. As discussed in Section 4.1, keep in mind that event #4 has the most spectral overlap with the coloured noise models and event #7 has the least spectral overlap. Each line in the four graphs represents one of the event models. The relationships displayed in the curves of Figure 4.11 are typical of all the experimental results.

It is important to note that there is not enough data to draw strong conclusions about an individual experimental result, since the results of each experiment (consisting of 121 runs) depend on the particular event-noise combination used in the experiment. Instead, the conclusions related to given event-noise combinations focus on the performance trends, since these trends are tested across many experiments.

One of the first results noticed was that the modified OPM algorithm worked consistently better for white noise than for coloured noise. This is expected, since extracting signals from coloured noise is more difficult than from white noise. Between the two coloured noise models, there was no consistent performance difference observed.
The modified algorithm's performance at 0dB, shown in Figure 4.12, is used as the basis for the discussion of the results at and above 0dB. To keep the figure manageable, only event #4 and event #7 are displayed, since they represent the extremes of the spectral overlap between the event and the noise. Below 0dB, the modified OPM algorithm encountered performance limits; these limits are discussed later.

One of the major goals of this study was to evaluate how sensitive the modified algorithm was to the amount of spectral overlap between the event and the noise. The first observation was that the results for white noise were constant for all experiments, which is expected since the overlap between the white noise spectrum and each of the four event spectra is relatively equal. For coloured noise, all the performance measures except Cs were dependent on the amount of spectral overlap. For example, the less the spectral overlap, the higher the primary correlation: the primary correlation values for event #7 were, on average, more than 0.1 greater than the values for event #4, and the primary error for event #7 was slightly higher (< 0.1) than for event #4.

Figure 4.11: Example performance measures (Cp, Cp-Cs, Ep and Es versus SNR) for 20% signals in white noise.

Figure 4.12: Modified algorithm performance at 0dB (Cp, Cp-Cs, Ep and Es versus signal length for events #4 and #7 in white, AR(8) and AR(12) noise).

The secondary error (Es) in the experiments with coloured noise was up to three times larger for experiments with event #7 than for experiments with event #4. Large Es values are usually caused by a non-optimal WPE threshold setting, which results in overestimating the size of the event, which in turn results in an erroneous extraction of part of the noise sequence as signal. As an example to illustrate the effects of a non-optimal WPE setting, consider the two experiments of Figure 4.13. Experiment A has an optimal WPE threshold, which can accurately estimate the size and position of the event. In experiment B, using the same WPE threshold, the Cleaner is more sensitive to the additive effect of the event and thus its non-whiteness function, W(t), is larger. The non-optimal WPE threshold in experiment B results in an oversized estimate of the event. Therefore, the large Es values for event #7 indicate that the WPE threshold used in the experiments with event #7 in coloured noise was less optimal than in the experiments with event #4 in coloured noise. This suggests that the modified algorithm's WPE threshold can be optimized for different amounts of spectral overlap.
These results also indicate that the size of W(t) is larger in the vicinity of the event for signals with less spectral overlap. Note that as the SNR decreases, the size of W(t) (defined in these experiments as the variance of the PE(t) sequence) decreases. So, as the SNR decreases, the W(t) for highly overlapped signals will 'flatten out' (and hence become useless as an event locator) at a higher SNR level than the W(t) for less overlapped signals. In other words, the algorithm should be able to operate at lower SNRs in situations with relatively low amounts of spectral overlap. This is confirmed by the utility threshold analysis described below for the cases with an SNR less than 0dB.

The results also show various amounts of dependence on signal length (see, for example, Figure 4.12). The two performance measures for the primary ROI showed no dependency on the signal lengths tested. In the secondary ROI, the difference between the primary correlation (Cp) and the maximum secondary correlation (Cs) appeared to increase as length increased. This is slightly deceptive, since Cs values were not calculated for 50% long signals, as was indicated in Section 4.1.3. For the shorter signals, however, the drop in Cp-Cs is reasonable, since shorter signals are more likely to correlate by chance with the estimation noise in the secondary ROI than longer signals.

Figure 4.13: Example of the effects of a suboptimal WPE setting (experiment A versus experiment B, showing error in the secondary ROI).

                 White noise         Coloured noise
Measure          events #4 and #7    event #7    event #4
Cp               > 0.87              > 0.80      > 0.70
Cp-Cs            > 0.75              > 0.70      > 0.60
Ep               < 0.6               < 1.0       < 0.9
Es               < 0.07              < 0.17      < 0.35

Table 4.2: Summary of performance limits at 0dB.

As for the secondary error (Es), it increased as the signal length increased, because the oversizing of the events due to the sub-optimal parameter WPE (described above) is more extreme for longer events. This indicates that the system parameters can also be optimized for signal length.
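The non-whiteness function W(t), defined in these experiments as the variance of the prediction-error sequence over a smoothing window of 24 samples (Table 4.1), can be sketched as follows. This is a hypothetical implementation; the thesis does not specify edge handling or window centring, so a centred window clipped at the sequence boundaries is assumed.

```python
import numpy as np

def non_whiteness(pe, window=24):
    """W(t): sliding-window variance of the prediction-error sequence PE(t).
    Large values flag regions where PE(t) departs from white residuals,
    i.e., the likely vicinity of an event."""
    half = window // 2
    w = np.empty(len(pe))
    for t in range(len(pe)):
        w[t] = np.var(pe[max(0, t - half):t + half])
    return w
```

Near an event the Cleaner's prediction errors grow, so W(t) rises there; as the SNR drops, this bump flattens, which is why W(t) eventually fails as an event locator.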
Table 4.2 summarizes the performance limits at 0dB. The large primary error values combined with the large correlation values indicate that the general shape of the event estimate is intact but that the estimate contains large erroneous peaks which bias the average error measure.

The modified algorithm's performance improved as the SNR was increased above 0dB. In general, as the SNR increased, Cp increased, Cp-Cs increased, Ep decreased and Es increased. Note that the increase in Es was primarily due to the exaggerated windowing discussed above.

Below 0dB, the performance of the modified algorithm dropped off. In order to compare the different results, a performance limit was defined, termed the utility threshold. This threshold is defined as the minimum SNR where Cp > 0.70 and Cp-Cs > 0.50. Above this threshold, the system extracts the event well enough that it would be reliably detected by a correlation classifier. Figure 4.14 displays the utility thresholds for event #4 and event #7 for the three different noise types. These thresholds do not appear to be very sensitive to the length of the signal, although there appears to be a dependence on noise type and the amount of spectral overlap between the noise and the event. For white noise, the average utility threshold was less than -3.5dB regardless of the amount of spectral overlap. For coloured noise, event #4 displayed an average utility threshold less than -2.5dB and event #7 displayed an average threshold less than -7.5dB. These results indicate that for coloured noise, there was a performance dependency on the amount of spectral overlap.
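The utility-threshold rule just defined can be expressed directly. A hypothetical helper, assuming per-SNR mean results for one event-noise combination are available as a mapping from SNR (in dB) to the pair (Cp, Cp - Cs):

```python
def utility_threshold(results):
    """Return the minimum tested SNR (dB) at which the estimate clears the
    utility threshold (Cp > 0.70 and Cp - Cs > 0.50), or None if no
    tested SNR qualifies."""
    passing = [snr for snr, (cp, diff) in results.items()
               if cp > 0.70 and diff > 0.50]
    return min(passing) if passing else None
```

For example, with results {-10: (0.40, 0.20), -5: (0.72, 0.55), 0: (0.88, 0.75)} this returns -5, matching how the thresholds in Figure 4.14 are read off.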
Near these utility thresholds, the amount of error in the primary ROI, Ep, was found to be 0.75 to 1.60 times the average event amplitude per sample. However, since the primary correlation at these thresholds was relatively large (i.e., greater than or equal to 0.70), it is reasonable to conclude that the event estimates contained large localized peaks and troughs which biased the average error per sample but did not distort the overall pattern of the event. The secondary error decreased as the SNR decreased below 0dB for all experiments. At the utility thresholds, the Es values were near the minimum Es levels observed.

4.2.2 The Relative Performance of the Modified Algorithm

Figures 4.15 and 4.16 show the results of all three methods for two 10% events in AR(12) noise. These plots are typical examples of how the performance of the modified OPM algorithm compared to the performance of the original OPM algorithm and the MMSE filter. The plots for the other experiments are in Appendix C.

Figure 4.14: The utility thresholds, as a function of signal length, for the three different noise models and events #4 and #7.

The relative performance between the three methods depended on the noise type, event length, the amount of spectral overlap and the SNR level. For example, with white noise, all performance measures of all three methods were constant for all signal lengths and spectral overlap combinations, except for the MMSE filter above 0dB. For the MMSE filter above 0dB, as signal length increased, Cp increased to a maximum of 80% of the OPM methods' values and Es increased to a maximum more than six times the modified algorithm's Es.

4.2.2.1 The Performance Compared to the Original OPM Algorithm

As illustrated in Figures 4.15 and 4.16, the performance in the primary ROI was nearly identical above the original algorithm's utility threshold for all experiments.
The secondary error (Es) for the modified algorithm was always less than the original algorithm's secondary error, averaging three to five times less than the original algorithm's Es. The difference between the two techniques' Es values decreased as signal length increased for SNRs above 0dB. This is not significant, since the Es of the modified algorithm can be lowered by optimizing the system parameter settings (e.g., WPE, as described above).

Figure 4.15: Performance of all three methods (Cp, Cp-Cs, Ep and Es versus SNR) for event #4 at a 10% signal length.

Figure 4.16: Performance of all three methods (Cp, Cp-Cs, Ep and Es versus SNR) for event #7 at a 10% signal length.

Noise     Original algorithm   Modified algorithm   Original/Modified
White     0.424                0.025                17.0
AR(8)     0.585                0.042                13.9
AR(12)    0.621                0.025                24.8

Table 4.3: Average error per sample for the zero-event experiments.

Table 4.3 presents the amount of error per sample for the special zero-event experiments. Notice that the average error per sample for the modified algorithm was at least 13.9 times less than for the original algorithm.

Computationally, the Cleaner required approximately five times the number of multiplications and additions that a GM estimator required.
Thus the modified algorithm, which uses a GM estimator (including two GM passes and a Cleaner pass) plus two additional passes through the Cleaner, required approximately 17 times the number of multiplications and additions of a single GM pass, while the original algorithm required approximately 12 times. Thus the modified algorithm requires approximately 42% (17/12 - 1) more computations than the original algorithm.

4.2.2.2 The Performance Compared to the MMSE Filter

The MMSE filter generally performed worse than either of the OPM methods, as seen in Figures 4.15 and 4.16. Even when the MMSE filter generated its highest Cp value (90% of the OPM methods' value), it was generating a larger Ep value than the OPM methods, and its Es value was larger than the original OPM algorithm's and at least two times larger than the modified algorithm's.

From Figures 4.15 and 4.16, the MMSE filter seems to have the best Ep and Es for SNRs less than +5dB. These results are deceiving, because each experiment used a unique MMSE filter that was designed specifically for the event and noise encountered, whereas the modified algorithm was designed to accommodate a wide variety of event characteristics. So the MMSE filter's Ep is better than the modified algorithm's because, in each experiment, the MMSE filter was specifically designed to minimize the MSE of the encountered event. Note that this is another indication that the modified algorithm's system parameters can be optimized if prior information about the event is available. In addition, as the SNR level decreased, the average amplitude of the MMSE filters' event estimates decreased. Thus, the Es values decreased because the magnitudes of the event estimates were smaller as the SNR decreased.
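For reference, the kind of ensemble least-squares (Wiener/MMSE) FIR design described in Section 4.1.2 can be sketched as follows. This is a generic construction, not the thesis' exact procedure: tap weights h minimizing the mean squared error between the filtered observed signals and the known event are found by solving the normal equations; the tap count and windowing are assumptions.

```python
import numpy as np

def design_mmse_fir(observed_trials, event, n_taps=8):
    """Solve R h = r, where R accumulates the autocorrelation of the
    observed trials (via lagged copies) and r their cross-correlation
    with the known event -- 'perfect' signal knowledge, as in the text."""
    R = np.zeros((n_taps, n_taps))
    r = np.zeros(n_taps)
    for y in observed_trials:
        Y = np.zeros((n_taps, len(y)))   # row k holds y delayed by k samples
        for k in range(n_taps):
            Y[k, k:] = y[:len(y) - k]
        R += Y @ Y.T
        r += Y @ event
    return np.linalg.solve(R, r)

def apply_fir(y, h):
    # Causal FIR filtering: x_hat(t) = sum_k h[k] * y(t - k)
    return np.convolve(y, h)[:len(y)]
```

With a single noiseless trial (y equal to the event itself), the design collapses to the identity filter h = (1, 0, ..., 0). Designing instead on 50 noisy trials containing a short event, as in Section 4.1.2, yields taps influenced by the many zero-amplitude samples surrounding the event.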
Chapter 5: Conclusions

5.1 Summary of Major Results and Related Conclusions

The simulation results detailed in the previous chapter indicate that the modified OPM algorithm can extract HVEs from linear, stationary noise as well as the original OPM algorithm can, but with much less estimation error. This is important, since there are no known single-trial techniques, other than the original OPM, that can effectively extract a relatively low SNR (i.e., less than +5dB) HVE from a coloured signal.

The experimental results have identified that, above its utility threshold, the modified OPM algorithm had nearly identical performance to the original OPM algorithm within the primary ROI. Within the secondary ROI, the modified algorithm performed better than the original algorithm, with at least 1.2 times less estimation error at +10dB (the worst case) to at least 5 times less estimation error around the utility threshold. This is significant, since it shows that the modified algorithm's secondary error is always less than the original algorithm's, even though the non-optimal setting of the WPE threshold used in the modified algorithm resulted in excessively large Es values, as discussed in Section 4.2.1. The procedures used to select the system parameters implicitly selected the optimal a0 and a nearly optimal WPE threshold for the zero-event experiments, so the results of the zero-event experiments provided insight into the achievable levels of secondary error reduction with nearly optimal system parameter settings. These experiments showed that, for the zero-event experiments, the modified algorithm had at least 13.9 times less error than the original algorithm. This is another indication that the modified OPM algorithm has notably superior estimation noise suppression and hence a much reduced chance of false positives in a signal detection application.
Given the superior performance of the modified algorithm over the original algorithm, the increased computational load of approximately 42% identified in Section 4.2.2.1 is well justified. Note that part of Section 5.3 discusses the possibility of reducing the modified OPM algorithm to a single pass through the Cleaner. This would reduce the modified algorithm's computational load to no more than 105% of the original algorithm's load.

The simulation studies have identified the general operating range of the modified OPM algorithm. Specifically, the modified OPM algorithm performed well for all noise and event combinations at and above 0dB. Below 0dB, the utility threshold, which was defined as the SNR level where Cp > 0.70 and Cp-Cs > 0.50, was dependent on noise type and the amount of spectral overlap between the noise and the event. For white noise, the utility threshold was -3.5dB. For coloured noise, the utility threshold was -2.5dB for heavily overlapped spectra and -7.5dB for the least amount of spectral overlap. Near these utility thresholds, the amount of error in the primary ROI was found to be 0.75 to 1.60 times the average event amplitude per sample; however, since the primary correlation at these thresholds was relatively large (i.e., greater than or equal to 0.70), it is reasonable to conclude that the event estimates contained large localized peaks and troughs which biased the average error per sample but did not distort the overall pattern of the event. These large errors may make the modified OPM algorithm unsuitable near the utility thresholds for applications which require overall accuracy in the event estimate (i.e., a low Ep).

The performance of the modified algorithm for events in white noise was relatively independent of event signal length and spectral overlap. The independence from spectral overlap is expected, since the spectra of the four event models are all equally different from the white noise spectrum.
Compared to the coloured noise results, the modified algorithm produced less error and higher correlation values for white noise. There was no noticeable performance difference between 8th and 12th order AR noise. This is not surprising, since the two AR noise models had fairly similar power spectra.

This study has shown that the modified OPM algorithm is sensitive to the amount of spectral overlap between a given event and noise power spectra. Specifically, the algorithm performed better as the amount of spectral overlap decreased. This was expected, since the more similar the event model is to the noise model, the harder it is for a predictive filter, like the Cleaner, to differentiate between the event and the noise.

Signal length had no effect on the algorithm's performance in the primary ROI. In the secondary ROI, the modified algorithm displayed various amounts of dependence on the length of the event, but these results were primarily due to sub-optimal system parameter values; in particular, Es increased as length increased, but this was due to a sub-optimal WPE parameter value.

When compared to the linear time-invariant MMSE filter, the modified OPM algorithm outperformed it in all experiments. In fact, the MMSE filter performed poorly in all experiments. This poor performance was surprising, since many other researchers have used MMSE filters on time-locked events at these SNR levels and reported better performance. However, those researchers were able to restrict the observed signal to the vicinity of the event because the event was time-locked. Thus, they had events that were longer relative to the observed signal length (from 70% to 100%) than those used in the simulation experiments.
Since the events used in these simulations were short relative to the observed signal length (i.e., they were all 50% or less of the observed signal length), the MMSE filter's design was biased by the large number of zero-amplitude samples before and after the event. The compromised filter designs are responsible for the extremely poor performance observed. This compromised performance indicates that the popular MMSE filtering technique is poorly suited to extracting HVEs.

It must be re-emphasized that the experimental results are not optimal for all event and noise combinations, since the modified OPM system was not optimized for any particular noise and event combination. Only three experimental OPM systems were configured (one for each of the three noise types), and each configuration was independent of the event length, latency, SNR and the amount of spectral overlap. The only a priori information assumed was that the events would be in the test range -10dB to +10dB and that the noise sequence fit an autoregressive model of known order. For most applications, however, some a priori information is known about the event. In such cases this extra information can and should be incorporated to improve the performance of the modified OPM. For example, if the SNR, the event's power spectrum or the signal length are known, the modified OPM algorithm's configuration can be optimized. As an example, the discussion in Section 4.2.1 identified situations where the modified algorithm's performance was compromised because the WPE was not optimized for SNR, signal length and spectral overlap.

5.2 Significant Contributions

The following list details the primary contributions of this research:

• A modification of Birch's OPM algorithm [1] was developed to improve the OPM algorithm's ability to extract HVEs from coloured noise processes.
• The strengths and limitations of this algorithm, i.e., the modified OPM algorithm, were identified for linear and stationary signals using simulation experiments.

• This work described and evaluated the first known implementation of adaptive IFs in time series applications.

• The results of the simulation analysis for the original algorithm have extended the knowledge of the original OPM algorithm's performance for various event lengths, SNRs and amounts of spectral overlap, and for white noise processes.

5.3 Areas for Future Research

There are many interesting aspects of the modified OPM algorithm not explored by this research. The following items are suggested areas where future research efforts should be focused:

1. Run more experiments (that is, more data is needed) around the utility thresholds to more accurately define the performance limits of the modified OPM algorithm. These should include enough experiments to adequately define the confidence limits at the utility thresholds (and at other points). To determine these confidence limits, each experimental run would have to be statistically independent from the others. This means each run would have to use a different event. This contrasts with the current study, which used only one event for all 121 runs. Another possible method to determine the confidence limits is to first determine the underlying distributions of the performance measures via a Monte Carlo simulation. The confidence limits can then be determined from the performance results and these distributions.

2. Perform a study to optimize the shape of the influence functions for signal extraction applications.

3. Perform a study to optimize the functions W(t) and I(t) and the corresponding system parameters.

4. Perform a study of how sensitive the modified algorithm's performance is to inaccuracies in the noise model.

5. Perform a study of the modified algorithm's performance on short data segments (i.e., approximately 100 data points).
This is of interest because one method of processing non-stationary noise signals is to assume that they are stationary for short periods of time.

6. Try to implement (and evaluate) a one-pass version of the modified OPM algorithm. At the end of this study, it seemed conceivable that the W(t) and I(t) functions could be generated and used during (not after) the first pass through the Cleaner. This would provide a smaller and faster algorithm.

7. Develop an OPM implementation for non-linear and non-stationary processes. This includes finding good non-linear and non-stationary modeling techniques, a robust parameter estimator, and a robust cleaner for non-linear and non-stationary signals.

Glossary

Estimation noise: Erroneous signals in the event estimate (see example in Figure 2.1).

Cleaner: The predictive filter used within the OPM methodology to estimate the noise sequence from the observed signal.

Estimator: Refers to the noise model parameter estimator. The function of the estimator is to robustly estimate the noise model parameters from the observed signal.

Event: A finite-duration signal.

Event estimate: The estimate of an event produced by a signal extraction technique such as OPM.

Highly variable event (HVE): An event with a highly variable latency and shape.

IF-control function (I(t)): A function used within the modified OPM algorithm to control the sensitivity of the Cleaner's influence function with respect to time. I(t) is calculated from the non-whiteness function W(t).

Influence function (IF(x)): A function which controls the amount of influence the prediction error sequence has on the next prediction generated by a predictive filter such as the Cleaner.

Latency: The time from the start of the observed signal to the first non-zero sample.

Maximum secondary correlation (Cs): One of the four performance measures used in the simulation experiments. Refer to Section 4.1.3 for a complete description.

Non-whiteness function (W(t)): A function used within the modified OPM algorithm. This function indicates how dissimilar the PE(t) sequence is compared to a random sequence. W(t) is used to generate I(t).

Outlier Processing Method (OPM): The signal extraction technique developed by Birch [1] to extract HVEs from coloured noise signals.

Prediction error sequence (PE(t)): The residual sequence from fitting the noise model to the noise.

Primary correlation (Cp): One of the four performance measures used in the simulation experiments. Refer to Section 4.1.3 for a complete description.

Primary ROI: The time period where the event signal is non-zero. (See Figure 4.4.)

Relative primary absolute error (Ep): One of the four performance measures used in the simulation experiments. Refer to Section 4.1.3 for a complete description.

Secondary ROI: The time period where the event signal is zero; in other words, all time outside the primary ROI. (See Figure 4.4.)

Secondary absolute error (Es): One of the four performance measures used in the simulation experiments. Refer to Section 4.1.3 for a complete description.

Utility threshold (UT): The minimum SNR level where Cp > 0.70 and Cp-Cs > 0.50. It represents a standard performance limit which can be used to compare results.

References

[1] G.E. Birch. Single Trial EEG Signal Analysis using Outlier Information. PhD thesis, The University of British Columbia, Canada, 1988.

[2] G.E. Birch, P.D. Lawrence, and R.D. Hare. Single-trial processing of event related potentials. Presented at the 28th Annual International Meeting of the Society for Physiological Research, San Francisco, California, U.S.A., July 1988; abstract published in the Journal of Psychophysiology.

[3] G.E. Birch, P.D. Lawrence, and R.D. Hare. Extraction of motor related activity from single trial EEG.
In Proceedings of the 10th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, New Orleans, Louisiana, U.S.A., Nov. 4-7, 1988.

[4] L.R. Rabiner and B. Gold. Theory and Application of Digital Signal Processing. Prentice-Hall, Englewood Cliffs, N.J., 1975.

[5] C.D. McGillem, J.I. Aunon, and D.G. Childers. Signal processing in evoked potential research: Applications of filtering and pattern recognition. CRC Crit. Rev. Bioeng., 6:225-265, 1981.

[6] D.J. Doyle. Some comments on the use of Wiener filtering for the estimation of evoked potentials. Electroencephalography and Clinical Neurophysiology, 38:533-534, 1975.

[7] D.O. Walter. A posteriori "Wiener filtering" of average evoked responses. Electroencephalography and Clinical Neurophysiology, 27:61-70, 1969.

[8] T. Nogawa, K. Katayama, Y. Tabata, T. Kawahara, and T. Ohshio. Visual evoked potentials estimated by Wiener filter. Electroencephalography and Clinical Neurophysiology, 35:375-392, 1973.

[9] M.G.H. Coles, G. Gratton, A.F. Kramer, and G.A. Miller. Principles of signal acquisition and analysis. In M.G.H. Coles, E. Donchin, and S.W. Porges, editors, Physiological Systems and their Assessment, pages 183-221. Guilford Press, New York, 1986.

[10] M.A.B. Brazier. Evoked potentials recorded from the depth of the human brain. Ann. N.Y. Acad. Sci., 112:33-59, 1964.

[11] C. McGillem, J. Aunon, and C.A. Pomalaza. Improved waveform estimation procedures for event related potentials. IEEE Trans. Biomed. Eng., BME-32(6):371-380, 1985.

[12] C. McGillem and J. Aunon. Measurement of signal components in single visually evoked brain potentials. IEEE Trans. Biomed. Eng., BME-24(3):232-241, 1977.

[13] C.D. Woody. Characterization of an adaptive filter for the analysis of variable latency neuroelectric signals. Med. Biol. Eng., 5:539-553, 1967.

[14] C.D. McGillem, J.I. Aunon, and D.G. Childers. Signal processing in evoked potential research: Averaging and modeling. CRC Crit. Rev. Bioeng., 5:323-367, 1981.

[15] R.
Halliday, E. Callaway, and R. Heming. A comparison of methods for measuring event related potentials. Electroenceph. Clin. Neurophysiol., 55:227-232, 1983.

[16] J.P.C. deWeerd. A posteriori time-varying filtering of averaged evoked potentials. I. Introduction and conceptual basis. Biol. Cybernetics, 41:211-222, 1981.

[17] J.P.C. deWeerd. A posteriori time-varying filtering of averaged evoked potentials. II. Mathematical and computational aspects. Biol. Cybernetics, 41:223-234, 1981.

[18] K. Yu and C.D. McGillem. Optimum filters for estimating evoked potential waveforms. IEEE Trans. Biomed. Eng., BME-30(11):730-737, 1983.

[19] R. Deutsch. Estimation Theory. Prentice-Hall, Inc., 1965.

[20] S. Cerutti, G. Chiarenza, D. Liberati, P. Mascellani, and G. Pavesi. A parametric method of identification of single-trial event-related potentials in the brain. IEEE Trans. Biomed. Eng., BME-35(9):701-711, 1988.

[21] G. Kitagawa. Non-Gaussian state-space modeling of nonstationary time series. J. Am. Stat. Assn., 82(400):1032-1041, 1987.

[22] R.L. Kashyap and A.R. Rao. Dynamic Stochastic Models from Empirical Data. Academic Press, New York, 1976.

[23] P. Newbold and T. Bos. Stochastic Parameter Regression Models. Sage Publications, 1985.

[24] M.B. Priestley. Spectral Analysis and Time Series, 2 vols. Academic Press, London and New York, 1981.

[25] M.B. Priestley. Non-linear and Non-stationary Time Series Analysis. Academic Press, London and New York, 1988.

[26] R.V. Hogg. An introduction to robust estimation. In R.L. Launer and G.N. Wilkinson, editors, Robustness in Statistics. Academic Press, 1979.

[27] R.D. Martin and D.J. Thomson. Robust-resistant spectrum estimation. Proceedings of the IEEE, 70(9):1097-1115, 1982.

Appendix A: Autoregressive Noise and Event Models

All the noise and event models were autoregressive models of the form

    n(t) = w(t) / (1 - SUM[k=1..p] a_k z^(-k))                          (35)

where w(t) is the white driving sequence and the z operator is defined as

    z^(-k) n(t) = n(t - k)
                                                                        (36)

Each of the parameters for each model is given in Table A.1.

| Parameter | White | AR(8)  | AR(12) | Event#4 | Event#5 | Event#6 | Event#7 |
| p         | 0     | 8      | 12     | 4       | 4       | 4       | 4       |
| a1        | -     | +0.838 | +0.971 | +0.720  | +0.700  | +0.600  | +0.300  |
| a2        | -     | -0.417 | -0.356 | -0.310  | -0.300  | -0.200  | -0.100  |
| a3        | -     | +0.638 | +0.349 | +0.150  | +0.200  | -0.200  | -0.300  |
| a4        | -     | -0.429 | -0.338 | -0.050  | -0.200  | +0.400  | +0.400  |
| a5        | -     | +0.518 | +0.178 | -       | -       | -       | -       |
| a6        | -     | -0.304 | -0.208 | -       | -       | -       | -       |
| a7        | -     | +0.182 | +0.337 | -       | -       | -       | -       |
| a8        | -     | -0.243 | -0.208 | -       | -       | -       | -       |
| a9        | -     | -      | +0.157 | -       | -       | -       | -       |
| a10       | -     | -      | -0.297 | -       | -       | -       | -       |
| a11       | -     | -      | +0.322 | -       | -       | -       | -       |
| a12       | -     | -      | -0.109 | -       | -       | -       | -       |

Table A.1: Noise and event models

Appendix B: Selecting the OPM System Parameters

This appendix details the selection of the following system parameters for the experimental implementation of the original and modified OPM algorithms:

• a0, ainc, adec and WPE for I(t)
• n, the window size used in the calculation of W(t)
• the IF a threshold used in the GM1 estimator

The final parameter selections, which are explained below, are summarized in Table B.1.

| Algorithm | System Parameter               | White Noise | AR(8) | AR(12) |
| Mod. OPM  | I(t): a0                       | 2.00        | 1.90  | 1.90   |
|           | I(t): adec                     | 8.00        | 7.60  | 7.60   |
|           | I(t): ainc                     | 0.80        | 0.80  | 0.80   |
|           | I(t): WPE                      | 1.19        | 1.17  | 1.21   |
|           | W(t) smoothing window size (n) | 24          | 24    | 24     |
| Orig. OPM | Cleaner's IF a threshold       | 0.80        | 0.80  | 0.80   |
| Both      | Estimator's IF a threshold     | 2.5         | 2.5   | 2.5    |

Table B.1: Experimental system parameters for the OPM algorithms.

Figures B.2 a), B.3 a) and B.4 a) show the experimentally determined relationship between the Cleaner's IF a threshold and the average W(t) value for white noise, AR(8) noise and AR(12) noise signals respectively. As discussed in Section 3.4.1.6, a0 should be set as low as possible without influencing the PE sequence of these noise signals. The selected a0 thresholds for the different noise types are represented on the graphs by vertical dotted lines. At these a0 thresholds, the distributions of W(t) were plotted for the different noise types. These are displayed in Figures B.2 b), B.3 b) and B.4 b).
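As a concrete illustration, the AR models of Table A.1 can be simulated directly from the listed coefficients. The sketch below assumes the recursion implied by equation (35), n(t) = a1 n(t-1) + ... + ap n(t-p) + w(t), with unit-variance Gaussian w(t); the sequence length, driving variance, and seed are illustrative choices rather than values taken from the thesis, and only a subset of the event models is shown.

```python
import numpy as np

# AR coefficients from Table A.1 (Appendix A); an empty list gives white noise (p = 0).
MODELS = {
    "white":  [],
    "ar8":    [+0.838, -0.417, +0.638, -0.429, +0.518, -0.304, +0.182, -0.243],
    "ar12":   [+0.971, -0.356, +0.349, -0.338, +0.178, -0.208, +0.337, -0.208,
                +0.157, -0.297, +0.322, -0.109],
    "event4": [+0.720, -0.310, +0.150, -0.050],
    "event7": [+0.300, -0.100, -0.300, +0.400],
}

def ar_sequence(coeffs, length, sigma=1.0, seed=0):
    """Simulate n(t) = sum_k a_k * n(t-k) + w(t) with w(t) ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, sigma, length)
    n = np.zeros(length)
    for t in range(length):
        # Sum over the available past samples (the sequence is zero before t = 0).
        past = sum(a * n[t - k] for k, a in enumerate(coeffs, start=1) if t - k >= 0)
        n[t] = past + w[t]
    return n

noise = ar_sequence(MODELS["ar8"], 512)
```

For the white-noise model (p = 0) the recursion reduces to n(t) = w(t), which matches the p = 0 column of Table A.1. The sign convention of the denominator in equation (35) is reconstructed from a garbled original, so treat it as an assumption when comparing against other implementations.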
The WPE thresholds were selected so that 95% of the W(t) values were less than WPE, as explained in Section 3.4.1.6. The selected WPE values for the different noise types are represented on the graphs by vertical dotted lines.

The adec thresholds for the three noise models have only one restriction: they must be larger than the a0 thresholds. They were therefore arbitrarily selected to be four times the a0 thresholds.

As explained in Section 3.4.1.6, the ainc thresholds should be selected to optimize the quality of the event estimate produced by the Cleaner. Figure B.5 a) shows the correlation of the event to the event estimate, and Figure B.5 b) illustrates the absolute error per sample, for a thresholds in the range 0.0 to 1.5. These example graphs indicate how the correlation and absolute error vary with SNR. Since no a priori SNR information is assumed, the thresholds were selected at 0.80 in an attempt to optimize the trade-off between a high correlation and a low error level. This 'optimum' threshold was also used for the Cleaner's IF a threshold in the original algorithm.

The IF a threshold used in the GM1 estimator was selected so that only extreme outliers would be removed by the Cleaner and the underlying noise signal would not be disturbed. Thus this value was selected slightly above the a0 threshold, since the a0 threshold represented the minimum level at which the noise signal is not disturbed. The selected thresholds are displayed in Table B.1.

From experience, a W(t) smoothing window size, n, of 24 samples was found to provide adequate smoothing of W(t) for all noise models, so this window size was used in the experiments.
Figure B.2: a) Cleaner's influence on the white noise sequence as a function of IF a threshold; b) the experimentally determined W(t) distribution when the a threshold was set to a0.

Figure B.3: a) Cleaner's influence on the AR(8) noise sequence as a function of IF a threshold; b) the experimentally determined W(t) distribution when the a threshold was set to a0.

Figure B.4: a) Cleaner's influence on the AR(12) noise sequence as a function of IF a threshold; b) the experimentally determined W(t) distribution when the a threshold was set to a0.

Figure B.5: a) Correlation performance as a function of IF a threshold for various SNRs; b) absolute error performance as a function of IF a threshold for various SNRs.

Appendix C: Simulation Results

The following pages contain the graphical results of the simulation experiments:

• results of the modified OPM algorithm's performance, in white, AR(8) and AR(12) noise
• the modified algorithm's performance compared to the original algorithm and the MMSE filter, in white, AR(8) and AR(12) noise
101 the MMSE filter - iz i 76 Performance of the Modified OPM Algorithm with - white noise - event length of 10% Legend: • M M 1 4 •vonttt •va i t tM •vent f 7 Primary Correlation minus -10dB -5dS OdB +5dB +10dB S N R 77 Performance of the Modified OPM Algorithm with - white noise - event length of 20 % Legend: • v » n t # 4 •vent 15 • v m t r t • v a m « 7 Primary Correlation minus Maximum Secondary Correlation Relative Primary Error (scaled abs. error per sample) Secondary Error (abs. error per sample) 78 Performance of the Modified OPM Algorithm with - white noise - event length of 30% Legend: • v » n t * 4 - " • w n t l S " • •writ 16 •v*nt<7 — Relative Prirnary Error (scaled abs. error per sample) -10dB aw a n aw 0.40 OM aw ato aoo] Secondary Error (abs. error per sample) OdB S N R + 10dB 79 Performance of the Modified OPM Algorithm with - white noise -eventlength of 40% Legend: •vent #4 •vent #5 •vent 16 •vent 17 Primary Correlation minus Maximum Secondary Correlation 80 Performance of the Modified OPM Algorithm with - white noise - event length of 50% Legend: imi •vwit ts • V W I t K •v*nt*7 Primary Correlation Primary Correlation minus Maximum Secondary Correlation -10dB -5dB • lOdB Relative Primary Error (scaled abs. error per sample) Secondary Error (abs. error per sample) 81 Performance of the Modified OPM Algorithm with - AR(8) noise -eventlength of 10% Legend: «v*ntf5 •vanttS •vent 17 Primary Correlation Primary Correlation minus Maximum Secondary Correlation 82 Performance of the Modified OPM Algorithm with - AR(8) noise - event length of 20% Legend: •vent #4 •vwitSS • v e n t M •vent 17 Primary Correlation -5dB + 10dB Primary Correlation minus Maximum Secondary Correlation SNR -SdB S N R Relative Primary Error 1 ahs. Secondary Error (abs. 
error per sample^ r 0.«9 0.70 0.60 0JO -lOdB -5dB OdB SNR +5dB tlOdB 83 Performance of the Modified OPM Algorithm with - AR(8) noise - event length of 30% L e g e n d : • v e n t M •vent#5 •v*m*6 event »7 Pr imary Corre la t ion minus -10dB -SdB OdB +5dB + 10dB SNR Relative Primary Error (scaled abs. error per sample) Secondary Error (abs. error per sample) 84 Performance of the Modified OPM Algorithm with - AR(8) noise - event length of 40% Legend: • w n t M •vantf5 • v » n t * 6 • v * m » 7 Primary Correlation Primary Correlation minus nimum Secondary Correlation Relative Primary Error (scaled abs. error per sample) •5dB + 10dB Secondary Error (abs. error per sample) 85 Performance of the Modified OPM Algorithm with - AR(8) noise - event length of 50% Legend: Mrr tM — • v » m » 5 - -•vwrtrt •••• •vantf7 ~ Primary Correlation minus -10dB -5dB OdB + 5dB tlOdB -lOdB -SdB OdB *SdB +10dB SNR 86 Performance of the Modified OPM Algorithm with - AR(12) noise - event length of 10% Legend: • v » n t « 4 • v * n t « 5 •vent *6 •vent • 7 Primary Correlation Primary Correlation minus Maximum Secondary Correlation -lOdB -5dB + 10dB Relative Primary Error (scaled abs. error per sample) Secondary Error (abs. error per sample) 87 Performance nf the Modified OPM Algorithm with - AR(12) noise - event length of 20% L e g e n d : • v e n t M •ventiS •ventte •vent 17 88 Performance of the Modified OPM Algorithm with - AR(12) noise -event length of 30% Legend: • w n t M • w n t f S •varnte •rant 17 Primary Correlation Primary Correlation minus Maximum Secondary Correlation Relative Primary Error (scaled abs. error per sample) -10dB -5dB +5dB 41OdB Secondary Error (abs. 
error per sample) -10dB +10dB 89 Performance of the Modified OPM Algorithm with - AR(12) noise - event length of 40% L e g e n d : H M M — •VWTtfS - ~ eventW •vent i 7 Pr imary Corre la t ion minus nirnum Secondary Corre lat ion 90 Performance of the Modified OPM Algorithm with - AR(12) noise - event length of 50% L e g e n d : • M M •vent 15 • v n n t M ' * v » n t « 7 Pr imary Corre la t ion Pr imary Corre la t ion minus M a x i m u m Secondary Corre la t ion 91 Performance Comparison of the Three Techniques for - event # 4 - white noise - event length of 10% I Legend: Mod. OPM -Orig. O P M ' MMSE filter " Pr imary Corre la t ion m i n u s M a x i m u m Secondary Corre la t ion Secondary Error (abs. error per sample) + 10dB 92 Performance Comparison of the Three Techniques for - event #7 - white noise - event length of 10% Legend: Mod. OPM ~ Orig. OPM * MMSE filter " 93 Performance Cnmnarison of the Three Techniques for - event #4 - white noise - event length of 20% Legend: Mod. OPM -Orig. OPM " M M S E filler " Primary Correlation minus Maximum Secondary Correlation + SdB S N R Relative Primary Error (scaled abs. error per sam] OdB S N R + 5dB + 10dB - l O d B Secondary Error (abs. error per sample) 94 Performance Cnmnarison of the Three Techniques for - event #7 - white noise - event length of 20% Legend: Mod. OPM — Orifl.OPM MMSE filter Pr imary Corre la t ion 1.00 k-Pr imary Corre la t ion minus M a x i m u m Secondary Corre lat ion S N R Relat ive Pr imary Er ro r (scaled abs. error per sample) -10dB - 5 d B OdB +5dB t l O d B S N R Secondary Er ro r (abs. error per sample) 95 Performance Comparison of the Three Techniques for - event #4 - white noise - event length of 30% Legend: Mod. O P M -Orig. OPM " MMSE filter " Primary Correlation "I — I Primary Correlation minus Maximum Secondary Correlation Relative Primary Error (scaled abs. error per sample) Secondary Error (abs. 
error per sample) + 1 OdB 96 Performance Comparison of the Three Techniques for - event #7 - white noise - event length of 30% Pr imary Corre la t ion minus Pr imary Corre la t ion M a x i m u m Secondary Corre la t ion - 1 0 d B - 5 d B OdB *5dB + 10dB S N R Legend: Mod. OPM ~ Orig. OPM -MMSE filter " Relat ive Pr imary E r r o r (scaled abs. error per samp - 1 0 d B U O d B Secondary E r r o r (abs. error per sample) 97 Performance Comparison of the Three Techniques for - event #4 - white noise - event length of 40% Legend: Mod. OPM ~ Orig. OPM " MMSE filter " - l O d B - S d B OdB +5dB + 10dB S N R Primary Correlation minus ^ M a x i m u m Secondary Corre la t ion I J O O -OJDO ' - 0 . 1 0 - -_1 I i i u - l O d B - S d B OdB » 5 d B + 10dB S N R Relative Primary Error (scaled ahs. error per san + 1 OdB Secondary Error (abs. error per sample^ ) + 10dB S N R 98 Performance Comparison of the Three Techniques for - event #7 - white noise - event length of 40% Legend: Mod. OPM ~ Olio.. OPM ' MMSE filter " Relative Primary Error , . Secondary Error 0K0 c — i » i (abs. error per samp - 1 0 d B - 5 d B OdB +5dB + 10dB - l O d B - S d B OdB +SdB +10dB S N R S N R 99 Performance Comparison of the Three Techniques for - event #4 - white noise - event length of 50% Legend: Mod. OPM -Orig. OPM -MMSE filter " Pr imary Corre la t ion Pr imary Corre la t ion minus t i m u m Secondary Corre lat ion Relat ive Pr imary E r r o r (scaled abs. error per sample) -10dB - S d B • SdB +10dB Secondary Error (abs. error per sample) • SdB + 10dB S N R 100 Performance Comparison of the Three Techniques for - event #7 - white noise - event length of 50% Legend: Mod. OPM ~ Orig. OPM -MMSE filter " 101 Performance Comparison of the Three Techniques for - event #4 - AR(8) noise - event length of 10% Legend: Mod. OPM ~ Orig. OPM " MMSE filter " 102 Performance Comparison of the Three Techniques for - event #7 - AR(8) noise - event length of 10% Legend: Mod. 
OPM — Orifl.OPM MMSE filter " ' Pr imary Corre la t ion minus - 1 0 d B - S d B OdB +5dB U O d B S N R 103 Performance Comparison of the Three Techniques for - event #4 - AR(8) noise - event length of 20% Legend: Mod. OPM -Orig. OPM " MMSE filler " Primary Correlation Primary Correlation minus Maximum Secondary Correlation S N R Relative Primary Error (scaled abs. error per same Secondary Error (abs. error per sample) 104 Performance Comparison of the Three Techninues for - event #7 - AR(8) noise - event length of 20% Legend: Mod. OPM ~ OriQ.OPM " MMSE filter " Pr imary Corre la t ion Pr imary Corre la t ion minus M a x i m u m Secondary Corre lat ion + 10dB 105 Performance Comparison of the Three Techniques for - event #4 - AR(8) noise - event length of 30% Legend: Mod. OPM -Orio.OPM " MMSE filter " Pr imary Corre la t ion Pr imary Corre la t ion minus M a x i m u m Secondary Corre lat ion n 1 "i 1 r • 10dB 106 Performance Comparison of the Three Techniques for - event #7 - AR(8) noise - event length of 30% Legend: Mod. OPM _ Orifl.OPM ' MMSE filler " Primary Correlation minus Maximum Secondary Correlation OdB + 5dB S N R Relative Primary Error L^bs^ rxor per sample) Secondary Error (abs. error per sample) -XOdB • lOdB 107 Performance Comparison of the Three Techniques for - event #4 - AR(8) noise - event length of 40% Legend: Mod. O P M -Orig.OPM • MMSE filter " Primary Correlation Primary Correlation minus Maximum Secondary Correlation Relative Primary Error (scaled abs. error per sample) + SdB +1 OdB Secondary Error (abs. error per sample) 108 Performance Comparison of the Three Techniques for - event #7 - AR(8) noise • - event length of 40% Legend: Mod. OPM ~ Orig. OPM -MMSE filter " Pr imary Corre la t ion minus - 1 0 d B - S d B OdB + 5dB t l O d B S N R Relat ive Pr imary E r r o r (scaled abs. error per sample') OdB » 5 d B +10dB S N R Secondary E r r o r (abs. 
error per sample) + 10dB 109 Performance Comparison of the Three Techniques for - event #4 - AR(8) noise - event length of 50% Legend: Mod. OPM ~ Orig. OPM " MMSE filter " Pr imary Corre la t ion "I 1 I r Pr imary Corre la t ion minus M a x i m u m Secondary Corre la t ion Relat ive Pr imary E r r o r (scaled abs. error per samp Secondary Er ro r (abs. error per sample) +5dB + 10dB 110 Performance Comparison of the Three Techniques for - event #7 - AR(8) noise - event length of 50% Legend: Mod. OPM ~ Orig, OPM • MMSE filter " Primary Correlation minus Maximum Secondary Correlation - S d B S N R Relative Primary Error (scaled abs. error per sample) - 1 0 d B Secondary Error (abs. error per sample) 111 Performance Comparison of the Three Techniques for - event #4 - AR(12) noise - event length of 10% Legend: Mod. OPM -Orifl. OPM -MMSE filter " Pr imary Corre la t ion minus - 1 0 d B - 5 d B OdB + 5dB +10dB S N R 112 Performance Comparison of the Three Techniques for - event #7 - AR(12) noise - event length of 10% Legend: Mod. OPM ~ Orig, OPM " MMSE filter " Pr imary Corre la t ion minus - 1 0 d B - S d B OdB +5dB +10dB S N R 113 Performance Comparison of the Three Techniques for - event #4 - AR(12) noise -event length of 20% Legend: Mod. OPM -Orig. OPM -MMSE filter " Pr imary Corre la t ion Pr imary Corre la t ion minus M a x i m u m Secondary Corre lat ion 114 Performance Comparison of the Three Techniques for - event #7 - AR(12) noise -event length of 20% Legend: Mod. OPM — Orig. OPM - • MMSE filter — Pr imary Corre la t ion minus M a x i m u m Secondary Corre lat ion Relat ive Pr imary E r r o r (scaled abs. error per sample) Secondary Er ror (abs. error per sample) - 1 0 d B 115 Performance Pnmnarison of the Three Techniques for - event #4 - AR(12) noise -event length of 30% Legend: Mod. OPM ~ Orifl.OPM • MMSE filler " Pr imary Corre la t ion minus M a x i m u m Secondary Corre la t ion Relat ive Pr imary Er ro r (scaled abs. error per samp Secondary Er ro r (abs. 
error per sample) S N R 116 Performance Comparison of the Three Techniques for - event #7 - AR(12) noise - event length of 30% [ Legend: Mod. OPM " Orig. OPM • MMSE filter " Pr imary Corre la t ion minus M a x i m u m Secondary Corre lat ion • 10dB Relat ive Pr imary E r r o r (scaled abs. error per sample) Secondary Er ro r (abs. error per sample) - 1 0 d B 117 Performance Cnmnarison nf the Three Techniques for - event #4 - AR(12) noise - event length of 40% [ Legend: Mod. O P M -Orig. OPM -MMSE filler " Pr imary Corre la t ion minus M a x i m u m Secondary Corre la t ion - l O d B - 5 d B OdB S N R Relat ive Pr imary E r r o r (scaled abs. error per sample) Secondary Er ro r (abs. error per sample) • SdB •10dB 118 Performance Comparison of the Three Techniques for - event #7 - AR(12) noise - event length of 40% Legend: Mod. O P M -Orig. OPM -MMSE filter " Pr imary Corre la t ion minus - 1 0 d B - 5 d B OdB +5dB +10dB 119 Performance Comparison of the Three Techniques for - event #4 - AR(12) noise - event length of 50% Legend: Mod. OPM ~ Orig. OPM " MMSE filter " Relat ive Pr imary E r r o r (scaled abs. error per sa + 1 OdB Secondary Er ror (abs. error per sample) + 1 OdB 120 Performance Cnmnarison of the Three Techniques for - event #7 - AR(12) noise - event length of 50% Legend: Mod. O P M -Orig. OPM -MMSE filter " Pr imary Corre la t ion minus M a x i m u m Secondary Corre lat ion Relat ive Pr imary E r r o r (scaled abs. error per sample) Secondary Er ro r (abs. error per sample) + 10dB 121
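The utility threshold used to summarize these performance curves is defined in the Glossary as the minimum SNR at which Cp > 0.70 and Cp - Cs > 0.50. Given performance measures sampled at the SNR levels used in the experiments, it can be computed mechanically; the curve values below are illustrative inputs, not results from the thesis.

```python
def utility_threshold(snr_db, cp, cs):
    """Return the minimum SNR (dB) where Cp > 0.70 and Cp - Cs > 0.50, or None."""
    qualifying = [s for s, p, sec in zip(snr_db, cp, cs)
                  if p > 0.70 and p - sec > 0.50]
    return min(qualifying) if qualifying else None

# Illustrative curves sampled at the experiments' SNR levels (-10dB to +10dB).
snr = [-10, -5, 0, 5, 10]
cp  = [0.40, 0.65, 0.80, 0.90, 0.95]
cs  = [0.30, 0.25, 0.20, 0.15, 0.10]
print(utility_threshold(snr, cp, cs))  # -> 0
```

This evaluates the threshold only at the sampled SNR points; the thesis's reported thresholds such as -2.5dB fall between sampled levels, so interpolation between points would be needed to reproduce them.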
Item Metadata
Title: A modification of OPM: a signal-independent methodology for single-trial signal extraction
Creator: Mason, Steven George
Publisher: University of British Columbia
Date Issued: 1990
Subject: Estimation theory; Signal processing; Electronic noise
Genre: Thesis/Dissertation
Language: English
DOI: 10.14288/1.0065453
URI: http://hdl.handle.net/2429/30024
Degree: Master of Applied Science - MASc
Program: Electrical and Computer Engineering
Degree Grantor: University of British Columbia
Rights: For non-commercial purposes only, such as research, private study and education. Additional conditions apply; see Terms of Use https://open.library.ubc.ca/terms_of_use.