Online Removal of Eye Movement and Blink Artifacts from EEG Signals Without EOG

by Borna Noureddin

B.Eng., University of Victoria, 1994
M.A.Sc., University of British Columbia, 2003

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate Studies (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August, 2010

© Borna Noureddin 2010

Abstract

In this thesis, two novel methods are presented for online removal of ocular artifacts (OA) from EEG without the need for EOG electrodes attached to the face. Both methods are fully automated and can remove the effects of both eye movements and blinks. The first method employs a high speed eye tracker and three frontal EEG electrodes as a reference to any nonlinear adaptive filter to remove OAs without any calibration. For the filters considered, at some frontal electrodes, using the eye tracker-based reference was shown to significantly (p < .05) improve the ability to remove OAs over using either EOG or only frontal EEG as a reference. Using an eye tracker provides the means for recording point-of-gaze and blink dynamics simultaneously with EEG, which is often desired or required in clinical studies and a variety of human computer interface applications. The second method uses a biophysical model of the head and movement of the eyes to remove OAs. It only requires a short once-per-subject calibration and does not require subject-specific MRI. It was compared to four existing methods, and was shown to perform consistently over a variety of tasks. In removing both saccades and blinks, it removed more than 4 times as much OA as the other methods. In terms of distortion, it was the only method that never removed more power than was present in the original EEG.

To carry out the above studies, several related original investigations and developments were needed. These included a novel algorithm to extract the blink time course from eye tracker images, a new measure of OA removal distortion, a high speed eye tracker recording system, a study to determine whether frontal EEG could be used to replace EOG for OA removal, and studies of the frequency content of blinks, the effects of an electromagnetic sensor on EEG, and the effects of varying mental states on OA removal methods. In summary, this thesis has helped pave the way towards a real-time EEG-based human interface that is free of OAs and does not require EOG electrodes in its operation.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgments
Dedication
Statement of Co-Authorship

1 Introduction
   1.1 Thesis Objectives
   1.2 Thesis Contributions
   1.3 Ocular Artifacts in EEG
   1.4 Ocular Artifact Removal Methods
   1.5 Evaluation Techniques
   1.6 Data Collection
   1.7 Chapter Summary
   References

2 Time-frequency Analysis of Eye Blinks and Saccades in EOG for EEG Artifact Removal
   2.1 Introduction
   2.2 Methods
   2.3 Results
   2.4 Conclusions
   References

3 Fixation Precision in High-Speed Noncontact Eye-Gaze Tracking
   3.1 Introduction
   3.2 Background
      3.2.1 Eye Movements
      3.2.2 Fixation Detection and Filtering
      3.2.3 Eye-gaze Tracking Systems
   3.3 Methods
      3.3.1 Point-Of-Gaze Estimation
      3.3.2 Image Processing
      3.3.3 POG Sampling Rate
      3.3.4 Hardware
   3.4 Experimental Design and Results
   3.5 Discussion
   3.6 Conclusions
   References

4 An Experimental Study to Investigate the Effects of an Electromagnetic Sensor for Motion Tracking on EEG
   4.1 Introduction
   4.2 Experimental System
      4.2.1 Electromagnetic Motion Tracking Sensor
      4.2.2 Experimental Paradigm
   4.3 Methods
   4.4 Results
      4.4.1 Calibration Experiments
      4.4.2 EEG Recording Experiments
   4.5 Conclusion
   References

5 Quantitative Evaluation of Ocular Artifact Removal Methods Based on Real and Estimated EOG Signals
   5.1 Introduction
   5.2 Methods
      5.2.1 Performance Metric
      5.2.2 EOG Approximation
      5.2.3 Data Collection
   5.3 Results
      5.3.1 Performance Metric
      5.3.2 EOG Estimation
   5.4 Discussion
   5.5 Conclusions
   References

6 Effects of Task and EEG-based Reference Signal on Performance of On-line Ocular Artifact Removal from Real EEG
   6.1 Introduction
   6.2 Methods
      6.2.1 EOG Approximation
      6.2.2 Data Collection
   6.3 Results
      6.3.1 Performance Metric
      6.3.2 Effects of Task and Choice of Reference Signal
   6.4 Discussion
   6.5 Conclusions
   References

7 Online Removal of Eye Movement and Blink EEG Artifacts Using a High Speed Eye Tracker
   7.1 Introduction
   7.2 Methods
      7.2.1 Data Collection
      7.2.2 Reference Inputs
      7.2.3 OA Removal Evaluation
   7.3 Results
   7.4 Discussion
   7.5 Conclusions
   References

8 Dipole modeling for low distortion, automated online ocular artifact removal from EEG without EOG
   8.1 Introduction
   8.2 Methods
      8.2.1 Data Acquisition
      8.2.2 Head Model
      8.2.3 Calibration
      8.2.4 Metrics
      8.2.5 Other Methods
   8.3 Results
   8.4 Discussion
   8.5 Conclusions
   References

9 Conclusions
   9.1 Discussion
      9.1.1 Ocular Artifact Characteristics
      9.1.2 High-speed Eye Tracking For Ocular Artifact Removal
      9.1.3 Electromagnetic Sensors and EEG
      9.1.4 Performance Evaluation
      9.1.5 Factors Affecting Ocular Artifact Removal
      9.1.6 Eye Tracking-Based Ocular Artifact Removal
      9.1.7 Biophysical Model-Based Ocular Artifact Removal
   9.2 Strengths and Weaknesses
   9.3 Comparison Against “Ground Truth”
   9.4 Future Work
   References

Appendices

A Methods
   A.1 RLS Method
   A.2 H∞ Method
   A.3 Distortion Metric
   A.4 EOG Estimation
   References

B Head Radius Calculation

C Research Ethics Approval

List of Tables

2.1 Frequencies required to capture 25%, 50% and 95% of the power at any given time for blinks (voluntary and involuntary) and saccades (medium and large) as measured by the EOG.
2.2 Comparison of frequencies required, on average, to capture 25%, 50% and 95% of the total power at any given time instant for each of the four tasks.
3.1 POG sampling sequences for HS P-CR and 3D POG estimation methods with 1:1 and 3:1 bright to dark pupil ratios.
3.2 Image sequence parameters for the HS P-CR POG method.
3.3 Filter order for each sampling rate and filter length for the HS P-CR and 3D POG estimation methods.
3.4 Fixation precision for each system configuration.
5.1 Comparison of metrics, using actual measured EOG.
5.2 Results using EOG approximated from 55 EEG channels.
5.3 Results of using measured EOG and EOG approximated from different configurations of EEG using the RLS algorithm.
5.4 Results of using measured EOG and EOG approximated from different configurations of EEG using the H∞ algorithm.
6.1 Comparison of metrics, using mean and standard deviation over all subjects. “EOG ref” indicates when EOG electrodes were used as a reference signal, and “EEG ref” when 3 EEG electrodes were used as a reference for the algorithm.
6.2 ANOVA of Q metric. “T” is the effect of task (EM, MRP or idle), and “R” is the effect of the choice of reference signal (EOG-based or EEG-based). “TxR” is the interaction between the two factors. Values in parentheses are for the R part of the metric.
7.1 Data collection tasks. Numbers in brackets show how many times each task was performed.
7.2 Contribution by specific sources to various measured signals.
7.3 Mean R values for each electrode, algorithm, OA type, and reference input combination where the R value for ET+fEEG was significantly different than EOG and/or fEEG. Columns marked “t-test Result” show the results of the pairwise comparison of the ET+fEEG reference with EOG and fEEG in each case. “S: Higher R”: difference in means is significant (p < .05) and ET+fEEG removes more OA. “S: Lower R”: difference in means is significant and ET+fEEG removes less OA. “N”: difference between the mean R of ET+fEEG and the other reference (either EOG or fEEG) is not significant.
7.4 Mean R-values for RLS algorithm.
7.5 Mean R-values for H∞ algorithm.
8.1 Data collection tasks (HE = hand extension): timing of HE determined using a physical switch activated when the subject’s hand extends. Saccades/blinks allowed and expected for the last two tasks. Numbers in brackets show how many times each task was performed.
8.2 R values for SS head model with dipole locations recalculated every session using three different error measures: RV, VAR, ABS.
8.3 R values for SS head model with dipole locations calculated in the 1st session using three different error measures: RV, VAR, ABS.
8.4 R values for BE head model with dipole locations recalculated every session using three different error measures: RV, VAR, ABS.
8.5 R values for BE head model with dipole locations calculated in the 1st session using three different error measures: RV, VAR, ABS.
8.6 R values for five algorithms: BMAR (B), RLS (R), H∞ (H), PCA (P) and CCA (C).
8.7 values (%) for five algorithms: BMAR (B), RLS (R), H∞ (H), PCA (P) and CCA (C).
8.8 ANOVA results showing the significance (p values) of the effects on R of (i) mental state (Idle/OA/ERP), (ii) task (e.g., counting, relaxed, etc.) during non-OA periods, and (iii) task during OA periods.
9.1 BMAR vs. other methods using simulation data.

List of Figures

1.1 Dependency diagram of thesis contributions.
1.2 Generation and propagation of scalp and skin electric potential.
2.1 Sample measured VEOG, HEOG and corresponding spectrograms for each task for a single subject. Gray levels in the spectrogram represent power spectral density (dB) at the corresponding time epoch and frequency, with brighter values representing a higher power spectral density.
3.1 An example of a portion of a recorded bright pupil image, shown to illustrate the P-CR vector. In the P-CR method the vector (g_x, g_y) is determined from the center of the corneal reflection to the center of the pupil. A mapping is then defined to relate the P-CR vector to the POG screen coordinates (p_x, p_y).
3.2 The 3D model-based method for computing the POG is based on determining the location of the center of the cornea and the line-of-sight vector. Using (3.2) the POG can be found by tracing the LOS vector from C to the surface of the screen P. The model of the eye is based on the schematic description by Gullstrand, which in this case includes three parameters: the radius of the model of the corneal sphere r, the distance from the center of the corneal sphere to the center of the pupil rd, and the index of refraction of the aqueous humor fluid n.
3.3 Illustration of the bright pupil and image differencing techniques. The bright pupil in Fig. 3.3(a) is illuminated with on-axis lighting, while the dark pupil in Fig. 3.3(b) is illuminated with off-axis lighting. The background intensity of the two images is similar, which after differencing (3.3(a) − 3.3(b)) results in a bright pupil on an almost blank background as shown in Fig. 3.3(c).
3.4 An example of the results of the two-stage pupil detection algorithm. In Fig. 3.4(a) the detected perimeter of the identified image difference pupil contour is shown overlying the difference image. Using the difference pupil contour as a guide, the pupil perimeter is detected in the bright pupil image as shown in Fig. 3.4(b). The gap in the pupil perimeter is a result of masking off the on-axis corneal reflection, which is subsequently compensated for by fitting an ellipse to the bright pupil contour perimeter.
3.5 Regions-of-interest are used to reduce the quantity of image information to process as well as increase the camera frame rate. In Fig. 3.5(a) only the software ROI is applied to the original full-sized bright pupil image (640x480 pixels). Only the portion of the image within the rectangular box (110x110 pixels) surrounding the eye will be processed. In Fig. 3.5(b) the hardware ROI (640x120 pixels) has been applied in addition to the software ROI.
3.6 Physical system showing the camera located beneath the monitor, the on-axis lighting (ring of LEDs surrounding the lens), the two off-axis point light sources located to the right of the monitor, and the monitor upon which the POG is estimated.
3.7 An example of the fixation task in which the user observed each of 9 points on a 3x3 grid. In this example the POG samples were recorded with the HS P-CR vector method and a camera frame rate of 407 Hz. The original POG data is shown along with the results of filtering with a 500 ms moving window average. The POG screen coordinates have been converted from units of pixels to centimeters in this figure.
3.8 A labeled sequence of 10 unfiltered POG estimates for the 3D POG estimation method, shown for a single fixation marker. Sampling sequences at two camera frame rates are illustrated: 30 Hz in Fig. 3.8(a), in which the 10-point sequence corresponds to a time interval of 333 ms, and 407 Hz in Fig. 3.8(b), which corresponds to a time interval of 25 ms.
3.9 Fixation precision versus filter length, averaged across all four subjects, indicating an exponential relationship. The POG screen coordinates were recorded with the system operating at 407 fps for both the HS P-CR and 3D POG methods.
4.1 Power spectrum of the calibration signal in three different cases: transmitter OFF (solid line with circle), transmitter ON while receiver is in motion (solid line with star), and transmitter ON while receiver is not in motion (solid line with triangle).
4.2 Ensemble averages of the power spectral density values in the four different conditions in all channels for subject 1. Rows 1-4 show ensemble averages of PSD values for tasks 1-4 in the same order (dashed line: transmitter ON, solid line: transmitter OFF).
4.3 Ensemble averages of the power spectral density values in the four different conditions at electrode location “C1” for subject 2. Rows 1-4 show ensemble averages of PSD values for tasks 1-4 in the same order (dashed line: transmitter ON, solid line: transmitter OFF).
4.4 Number of increases (positive value bars) and decreases (negative value bars) in power in each narrow frequency band when the transmitter is ON as compared to when it is OFF (row 1: 0-27 Hz band, row 2: 28-55 Hz band).
4.5 Number of increases (positive value bars) and decreases (negative value bars) in power in each narrow frequency band at electrode location “C1” when the transmitter is ON as compared to when it is OFF (row 1: 0-27 Hz band, row 2: 28-55 Hz band).
5.1 OA removal & EOG estimation: A is the ocular artifact signal, B is the underlying brain signal, N is measurement noise, Eo is the signal measured at EOG electrode sites, Ee is the signal measured at EEG electrode sites, “OAR” is the OA removal algorithm, and Y and Ỹ are the estimations of the true EEG at the EEG electrode sites. Ideally, Y = Ỹ = B.
5.2 EEG electrodes.
6.1 OA removal & EOG estimation: A is the ocular artifact signal, B is the underlying brain signal, N is measurement noise, Eo is the signal measured at EOG electrode sites, Ee is the signal measured at EEG electrode sites, “OAR” is the OA removal algorithm, and Y and Ỹ are the estimations of the true EEG at the EEG electrode sites. Ideally, Y = Ỹ = B.
6.2 Comparison of EOG- and EEG-based OA removal during a blink.
7.1 General approach to OA removal: V_B is the signal component caused by brain sources, V_O by ocular sources, V_A by other “artifact” sources (e.g., EMG, EKG, etc.), and V_N by noise sources (e.g., measurement noise). E is the measured scalp potential and Y is the output of the OA removal algorithm, and represents the EEG with OA removed. T_OAR is a transformation (usually linear) that removes the OA.
7.2 Overall operation of the eye tracker for every image captured. The signals H_ET, V_ET and B_ET are extracted as a reference for OA removal.
7.3 Areas of eye tracker images used for extracting the blink signal (not drawn to scale). R is the pupil radius (average of the lengths of the ellipse axes).
7.4 Eye tracking vs EOG during (a) a simultaneous horizontal (left) and vertical (up) saccade, and (b) a blink. Sample eye images are shown above, and EOG and eye tracking signals below, with arrows pointing to the time instant each image was captured. In (a), the x shows the current pupil position, and the dot shows the pupil position at the start of the saccade. In (b), the ellipse shows the area where the intensity value is tracked during the blink.
7.5 Logarithm of mean R values for (a) RLS and (b) H∞ algorithms during small upward saccades. Electrodes where the difference between means is significant are highlighted with boxes. Amplitudes are scaled the same in both (a) and (b) to show the relative performance of the algorithms at each electrode.
7.6 OA removal using (a) RLS at Fp1 during a small right saccade and (b) H∞ at Fp2 during a small upward saccade. In each case, 3 sample trials and the average of several trials are shown. The original EEG, and the corresponding artifact-removed signal using EOG, EEG, and eye tracker combined with EEG signals, are plotted.
8.1 OA removal: (a) General approach to OA removal: V_B is the signal component caused by brain sources, V_O by ocular sources, V_A by other “artifact” sources (e.g., EMG, EKG, etc.), and V_N by noise sources (e.g., measurement noise). E is the measured scalp potential and Y is the output of the OA removal algorithm, and represents the EEG with OA removed. T_OAR is a transformation (usually linear) that removes the OA; (b) Flowchart showing how calibration data was used by the proposed method to compute the model parameters needed during normal EEG data collection.
8.2 Placement of EOG electrodes during calibration and for system evaluation. EOG reference signals (with 1-2 Hz band-pass filter) were calculated as follows: HEOG = H1 − H2, VEOG1 = V1 − V2, VEOG2 = V3 − V4, VEOG = (VEOG1 + VEOG2)/2. The x-/y-/z-axes are as shown, with the origin at the center of the head.
8.3 R values for all algorithms during periods of OA only, ERP only, and neither (Idle).
8.4 Sample plots of original EEG (thicker dotted line) and signals with OA removed by RLS, H∞, PCA, CCA and BMAR methods at four electrode positions over a 2000 ms period. The data contains a segment with an eye movement (OA), a segment with a hand extension (ERP), and samples with neither.
8.5 Sample plots of the logarithm of the square magnitude of the original EEG (thicker dotted line) and signals with OA removed by RLS, H∞, PCA, CCA and BMAR methods at four electrode positions over a 2000 ms period. The data contains a segment with an eye movement (OA), a segment with a hand extension (ERP), and samples with neither. A logarithm of the squared magnitude is used to show more details of the residual signal when a given method removes substantial portions of the original EEG.
B.1 Two spherical shells S1, S2 that differ only by their radii r1, r2.

Acknowledgments

I would like to express my deepest gratitude to Dr. Peter Lawrence and Dr. Gary Birch for their support and encouragement during my graduate studies. They always provided time, helpful advice and encouragement, and displayed great understanding during challenging times. I would also like to thank the thesis committee members for their assistance. I am also indebted to my colleagues in the Neil Squire Brain Interface Lab, especially Drs. Ali Bashashati, Jaimie Borisoff, and Steve Mason, as well as Dr. Craig Hennessey at the UBC Robotics and Control Lab, for their collaboration and input into discussions during this thesis work.

Finally, the work of this thesis would not be possible without my family, including my sister Dorsa and my parents Missagh, Rouhieh, Fereidoun and Manijeh, but especially my children Tiana, Zayn and Amara, and above all my wife Arezou. I consider Arezou to be as much a contributor to this thesis as anyone, including myself.

Dedication

Dedicated to my wife Arezou, my children Tiana Bahiyyih, Zayn Alexander and Amara May.

Statement of Co-Authorship

This thesis is based on several manuscripts, resulting from collaboration between multiple researchers.
For the following work, all major research (including research design, problem specification, development of methods, software and hardware, data collection and analysis, and manuscript preparation) was performed by the author, with assistance and supervision from Peter Lawrence and Gary Birch:

A version of Chapter 2 appeared in the Proceedings of the 3rd IEEE EMBS Conference on Neural Engineering. This paper was co-authored with Peter Lawrence and Gary Birch.

A version of Chapter 5 appeared in the Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. This paper was co-authored with Peter Lawrence and Gary Birch.

A version of Chapter 6 appeared in the Proceedings of the 4th IEEE EMBS Conference on Neural Engineering. This paper was co-authored with Peter Lawrence and Gary Birch.

A version of Chapter 7 has been submitted for publication. This paper was co-authored with Peter Lawrence and Gary Birch.

A version of Chapter 8 has been submitted for publication. This paper was co-authored with Peter Lawrence and Gary Birch.

For the following work, the author's specific contributions are listed below.

A version of Chapter 3 appeared in 2008 in IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics. This paper was co-authored with Craig Hennessey and Peter Lawrence.

• Craig Hennessey performed the main research, including data collection and analysis, and manuscript preparation, with assistance from Peter Lawrence and Borna Noureddin.

• The author designed and implemented an algorithm that was necessary for recording images to disk at the full frame rate in the high speed system. The algorithm was used in debugging the high speed eye-gaze tracking system, and in subsequent eye tracking work by both Craig Hennessey and Borna Noureddin.

• The research reported in Chapter 3 was motivated by work performed by Borna Noureddin during his Master's thesis.

A version of Chapter 4 appeared in 2006 in IEEE Transactions on Biomedical Engineering. This paper was co-authored with Ali Bashashati, Rabab Ward, Peter Lawrence and Gary Birch.

• Ali Bashashati developed the methods, collected the data, analyzed and interpreted the results, and prepared the manuscript.

• Borna Noureddin helped collect the data and contributed to the development of the evaluation methods, the interpretation of the results and the preparation of the manuscript.

• Rabab Ward, Peter Lawrence and Gary Birch supervised the development of the work, assisted in preparing the manuscript, and helped interpret the results.

Chapter 1

Introduction

The electroencephalogram (EEG) is used to measure the electrical activity of the brain, and was first described by Hans Berger in 1929 [1]. EEG is considerably less expensive and easier to use than the magnetoencephalogram (MEG), and provides better time resolution than tools such as magnetic resonance imaging (MRI), although it offers relatively lower spatial resolution. It is widely used as a clinical diagnostic and research tool [2], and is being increasingly explored for human computer interface applications [3].

Until the 1960s, the EEG was used primarily to investigate neurophysiological phenomena. In the 1960s, automated analysis of EEG data emerged, and by the 1970s, the EEG began to be used more widely for clinical diagnosis.
Gradually, digital EEG became more widespread and was combined with other neuroimaging techniques such as functional MRI [4, 5] for diagnosis and real-time monitoring (e.g., depth of anesthesia [6]). It also continues to be used to better understand and to help diagnose epilepsy [7], tumors [8], stroke [9], and sleep disorders [10]. Over the past decade, the EEG has increasingly been investigated for online human computer interface applications such as brain computer interfaces [3], neuromarketing [11], driver and pilot fatigue monitoring [12, 13], and pilot training [14].

Extracting meaningful neurological signals from the EEG requires the effective removal of non-neurological signals (artifacts) caused by such things as muscle movement [15], heart beat [16], eye movements and blinks. The latter two can be referred to as ocular artifacts (OA), and their removal from EEG, especially for online applications, remains an ongoing research problem [17–20].

A number of techniques have emerged for removing OAs from EEG. Many such techniques [20–25] use one or more electro-oculogram (EOG) signals either directly or indirectly as a reference. While for some clinical settings attaching EOG electrodes to the face may be acceptable, for many others, especially in the context of a human computer interface, it is not. Other techniques remove OAs without the need for a reference EOG signal. These approaches, however, need manual selection of a threshold [26] or of components to reject [27], are not able to handle all sources of OA (e.g., only blinks [26]), or are otherwise not suitable for real-time use (e.g., they require large amounts of data [17]).

Further, evaluating the effectiveness of OA removal methods is a difficult problem because the underlying brain sources of the EEG are unknown. Thus, after removing artifacts from recorded EEG, it is not possible to compare the results with a known artifact-free signal. Most methods are therefore evaluated using simulated data or by visual inspection of real data. There is generally no reporting of quantitative measures of the performance of OA removal methods on real EEG that take into account both how much OA is removed and how much underlying EEG might have been distorted in the process.

1.1 Thesis Objectives

In this thesis, the propagation of eye movement and blink effects to scalp EEG is studied and used to describe and objectively evaluate novel approaches to the online removal of ocular artifacts from EEG recordings without EOG electrodes. The objectives of the thesis include the following.

1. Better understanding of ocular signal characteristics. Assumptions about the frequency characteristics of ocular artifacts have impacted decisions about eye movement and blink recording (e.g., sampling and filtering frequencies for EOG, and frame rates for video-based eye tracking equipment). A study was carried out to test the assumptions about the frequency content of eye movements and blinks. The required outcome of this objective is a recommendation, based on empirical evidence, of minimal sampling and filtering frequencies for both EOG and video-based eye tracking.

2. Improved OA removal evaluation measures. The lack of objective metrics has limited the evaluation of OA removal methods on real (i.e., not simulated or synthetic) EEG. A new metric was introduced to measure one common type of distortion of underlying brain signals after OA removal.
It was then applied to the evaluation of the effect of previously untested factors, such as the mental state of an individual, on OA removal methods. The required outcome of this objective is a quantitative measure, suitable for use with real EEG, of how much OA removal methods distort underlying brain signals.

3. Online ocular artifact removal using an eye tracker instead of EOG. Based on the improved understanding of OA frequency characteristics, a technique for replacing the EOG with an eye tracker-based reference for online removal of OAs was developed. This method is required to (a) remove both eye movements and blinks, (b) be fully automated and online, (c) need minimal calibration, and (d) operate without EOG electrodes.

4. Online ocular artifact removal without EOG or eye tracker. Based on the improved understanding of OA characteristics, an online biophysical model-based OA removal method was developed that does not require either an eye tracker or EOG electrodes during normal operation. Its performance was evaluated using the improved evaluation measures. This method is required to (a) remove both eye movements and blinks, (b) be fully automated and online, (c) need minimal calibration, (d) operate without EOG electrodes, and (e) minimize the distortion of EEG components unrelated to eye movements and blinks.

1.2 Thesis Contributions

In the course of achieving the objectives of this thesis, the following contributions were made.

• OA frequency characteristics. A study was carried out to test widely held assumptions about the frequency content of the EOG signal. It was discovered that, during eye movements and blinks, the EOG could contain much higher frequency components than previously reported. A minimal sampling and filtering frequency was thus recommended for both EOG and video-based eye trackers.

• High speed eye tracking. A technique for recording full resolution, uncompressed images from a high speed eye tracking device using a PC was developed. This allowed the analysis of the eye images for the development of algorithms for tracking the time course of eye movements and blinks for use with OA removal methods without the need for EOG.

• Electromagnetic sensors and EEG. A study was carried out to investigate the effects of an electromagnetic motion tracking sensor on an EEG recording system. It was found that the sensor does not generate any consistent frequency components in the power spectrum of the EEG in the 0.1–55 Hz range. The sensors could thus be used to record hand extension tasks for evaluating the effect of a motor task on OA removal.

• Measuring distortion. A new metric was introduced to measure the likelihood that a given OA removal method removes more power than is present in the original EEG.

• Effects of mental state. A study was carried out to investigate the effects of factors such as mental state (e.g., whether the individual is performing a hand movement or mental task) on OA removal performance. It was found that some methods are significantly affected by the mental state.

• Frontal electrodes as reference. A study was carried out to test whether frontal EEG electrodes can be used to replace EOG as a reference input for OA removal methods. It was found that for some methods, three specific frontal EEG electrodes can be used to replace the EOG.

• Eye tracker-based OA removal. An approach to using an eye tracker with three frontal EEG channels instead of EOG for use by any online OA removal method was developed.
It works for both eye movements and blinks, and requires no EOG electrodes, calibration or user intervention. The approach was applied to two existing OA removal methods, and their performance relative to each other and relative to using an EOG reference input was compared. In addition, a novel algorithm (BSG) for extracting blink dynamics from the eye tracker images was developed. The use of the eye tracker and BSG algorithm provides the means for tracking point-of-gaze and blink dynamics simultaneously with EEG data collection and processing, which is often desired or required in clinical studies and a variety of human computer interface applications such as neuromarketing, brain-computer interfaces, and pilot and driver drowsiness monitoring.

• Biophysical model-based OA removal. A fully automated, low-distortion, online OA removal method that does not require EOG electrodes during normal operation was developed based on a biophysical model. Its performance was compared with four existing OA removal methods. It removed more OA during both eye movements and blinks than all other methods, and was the only method that never removed more power than was present in the original EEG.

• OA effects on EEG-based BCI. A study was carried out to test the extent to which eye movements and blinks affect the performance of an EEG-based brain computer interface. The BCI was found to be affected by eye movements and blinks.

In the remainder of this chapter an overview of the literature in ocular artifact removal will be presented, providing background material and motivation for the research undertaken. First, an overview of the nature of the interaction between OAs and brain signals measured at EEG electrodes will be presented. This will be followed by a review of current OA removal methods. Next, various techniques for evaluating the performance of OA removal methods will be presented, followed by a description of the data collection process used throughout this thesis. Finally, an overview of the remainder of this manuscript-based thesis will be presented, describing the contents of the following chapters and their relation to the overall thesis goals outlined above. Fig. 1.1 shows the relationships and dependencies of the work presented in the chapters of this thesis.

[Figure 1.1: Dependency diagram of thesis contributions.]

1.3 Ocular Artifacts in EEG

The potentials measured at each time instant at each scalp EEG electrode or skin EOG electrode are generated by a number of sources, as shown in Fig. 1.2. Cortical activity originating in the brain, relating to voluntary movement, eye movements and other artifacts, results in brain potentials (V_B), which are then propagated via some transfer function T_BC to the cortical surface of the brain to produce cortical potentials V_C. The cortical potentials are in turn propagated via T_CS to the scalp and skin to produce the potentials V_BS resulting from brain-related activity.

Eye movements begin with brain-related activity that activates the extra-ocular muscles, which in turn results in the movement of the eyeballs or eyelids. The activation of the extra-ocular muscles can contribute to the scalp and skin potentials via the transfer function T_MS. Similarly, the rotation of the eyes and eyelid movement can induce a change in the potentials (V_E) that contribute to the scalp potentials via the transfer function T_ES.
The contributors (V_BS, V_MS, and V_ES) to the scalp potentials (V) combine with external sources of electric potentials (V_X) and measurement noise (N) to produce the potentials measured by the electrodes, as shown on the right hand side of Fig. 1.2.

[Figure 1.2: Generation and propagation of scalp and skin electric potential. Event-related, background and artifact-related brain activity combine into V_B and propagate via T_BC and T_CS to produce V_BS; ocular muscle activation produces V_M, propagated via T_MS; eye movements or blinks produce V_E, propagated via T_ES; these combine with external sources V_X and measurement noise N to give the measured skin/scalp potentials V(u, t).]

The following simplifying assumptions can be made:

1. EEG and EOG potentials of interest are, or can be filtered to, relatively low-frequency signals (typically less than 30 Hz). This means that electromagnetic propagation effects are negligible, and therefore an electrostatic model is sufficient to characterize the propagation of the sources of the potentials to the scalp and skin.

2. While the head is an inhomogeneous volume conductor, each of the four layers (brain, CSF, skull and scalp) can be treated as a homogeneous medium with constant conductivity and permittivity.

3. Anisotropy effects are negligible within each of the four layers.

4. Due to the low frequency of EEG and EOG, capacitance effects are negligible within each of the four layers.

5. Proper shielding of the electrodes and the use of incandescent lights renders the effects of external sources negligible (i.e., V_X = 0).

6. Measurement noise consists primarily of channel or line noise, and can be eliminated using suitable filtering techniques.

7. The effects of other internal artifacts (EMG, EKG, etc.) can be ignored, since their effects on skin potentials are much smaller than the effects of eye movements and blinks. A similar argument is used to ignore the effects of extra-ocular muscle activation on skin and scalp potentials (i.e., V_MS = 0). Although in reality V_MS ≠ 0, this assumption is made to limit the scope of this work.

Based on assumptions 1-4, the transfer functions T_CS, T_MS and T_ES are actually linear transformation matrices [28]. In addition, based on assumptions 5 and 7, V_X and V_MS can be removed from Fig. 1.2. Using these simplifications, the generation of the measured scalp potentials can be approximated as follows:

V̂ = V_BS + T_ES V_E + N    (1.1)

After appropriate filtering of measurement noise, the OA-free brain-related scalp potentials can be expressed as:

V̂_BS = V̂ − T_ES V_E    (1.2)

where V̂ is the measured scalp potentials after filtering. The problem of removing the effects of eye movements and blinks can thus be expressed as finding a suitable T_ES. The statistical, spatial and temporal content of V̂ and/or information about physiology can be used to estimate T_ES.
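Viewed computationally, Eq. 1.2 is a single matrix subtraction per block of samples. The minimal NumPy sketch below shows that step in isolation, assuming T_ES and the ocular source time courses V_E have already been estimated by one of the methods reviewed in the next section; all array names and shapes here are illustrative, not taken from the thesis.

```python
import numpy as np

def remove_ocular_artifact(v_hat, t_es, v_e):
    """Eq. (1.2): subtract the propagated ocular sources from the
    (noise-filtered) scalp potentials.

    v_hat : (channels, samples) filtered scalp potentials V-hat
    t_es  : (channels, sources) ocular propagation matrix T_ES
    v_e   : (sources, samples) ocular source time courses V_E
    Returns the estimated OA-free brain-related potentials V-hat_BS.
    """
    return v_hat - t_es @ v_e

# Illustrative shapes only: 57 EEG channels, 4 ocular sources, 1 s at 1 kHz.
rng = np.random.default_rng(0)
v_hat = rng.standard_normal((57, 1000))
t_es = rng.standard_normal((57, 4))
v_e = rng.standard_normal((4, 1000))
print(remove_ocular_artifact(v_hat, t_es, v_e).shape)  # (57, 1000)
```

The subtraction itself is trivially cheap and poses no obstacle to online use; the difficulty that distinguishes the methods reviewed below lies entirely in estimating T_ES and V_E.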
1.4 Ocular Artifact Removal Methods

Current techniques for removing OAs from EEG can be loosely defined as belonging to one of four categories: linear regression, component analysis, nonlinear filtering and biophysical modelling. The following sections review current research in each category, as well as recent attempts to use an eye tracker for online OA removal.

Component Analysis

In component analysis techniques, V̂ in Eq. 1.2 is separated into individual components, with T_ES being a "mixing" matrix for combining the sources (V_E) into the measured potentials. Components resembling OA are removed, and the remainder re-assembled into V̂_BS. Component analysis techniques include wavelet analysis, principal component analysis (PCA), independent component analysis (ICA) and other blind source separation (BSS) techniques such as second order blind inference (SOBI). The measured time-frequency signals are separated into individual components, the component(s) that resemble OA are removed, and the remaining components re-assembled into a new set of signals.

Vigon et al. [29] provided a quantitative evaluation of four OA removal techniques based on simulated data. They found that the two ICA techniques they used were better able to remove OAs than either PCA or regression. Further, PCA requires the artifact to be uncorrelated with brain activity [30], which is not always the case; ICA only requires the OA and brain activity to be independent. On the other hand, Wallstrom et al. [20] compared regression with PCA and ICA, and found that ICA may cause spectral distortions. Nevertheless, ICA has received much attention recently as a means of removing OAs from EEG, and is thus worth reviewing briefly.

Han et al. [31] and Jung et al. [32] used ICA to identify and remove the contribution of artifacts (including OAs) to scalp EEG. The method in each case was offline, required a significant amount of data for proper separation of artifact from brain activity, and required manual inspection to identify the artifact components. James et al. [33] applied a temporal constraint to ICA to automatically identify the OA component. However, the automation required an EOG signal, and the method was still offline, requiring multiple iterations. Hoang et al. [34] introduced the idea of applying dipole source localization to ICA components of single trials offline, rather than to the raw EEG data.

A BSS method used by Joyce et al. [27] for OA removal is SOBI. Although SOBI is also an offline method, they introduced a novel method for automating the selection of the OA component(s). They also state that "small amounts of leakage between the ocular and nonocular components can occur, and the ocular sources also may be represented in several components each reflecting some part of the eye motion". Tang et al. [35] introduced validation criteria for SOBI, and showed it to be an effective means of removing artifacts from EEG offline.

The above methods have the disadvantage of being inherently offline, or of requiring too much data to be used practically in online applications. Zikov et al. [36] used a wavelet denoising technique to remove OAs from EEG. While the approach is promising for online application, it requires manual selection of wavelet thresholds.
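As a concrete illustration of the component analysis recipe, the sketch below uses FastICA from scikit-learn; the methods reviewed above do not prescribe a particular implementation, and the choice of which components to zero out (artifact_idx below, a hypothetical input) is exactly the manual selection step that limits these methods for online use.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_remove_oa(eeg, artifact_idx):
    """Offline ICA-based OA removal, in outline.

    eeg          : (samples, channels) EEG array
    artifact_idx : indices of independent components judged to be ocular;
                   choosing these (by inspection or a heuristic) is the
                   manual step criticized in the text above
    """
    ica = FastICA(random_state=0, max_iter=1000)
    sources = ica.fit_transform(eeg)        # unmix into (samples, components)
    sources[:, artifact_idx] = 0.0          # discard the ocular components
    return ica.inverse_transform(sources)   # remix into channel space

# Toy data with illustrative shapes: 20 s at 250 Hz, 8 channels.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((5000, 8))
cleaned = ica_remove_oa(eeg, artifact_idx=[0])
print(cleaned.shape)  # (5000, 8)
```

Note that the whole record is unmixed at once, which makes plain the "inherently offline" character noted above: the decomposition needs a substantial window of data before any component can be judged ocular.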
Linear Regression

Linear regression techniques [37] use only the EOG signals of V_E in Eq. 1.2, with T_ES thus consisting of fixed regression coefficients. Both Gratton [38] and Croft et al. [37] have given good reviews of the major EEG OA removal techniques. Although [37] favoured linear regression, they also suggested that with accurate head models, source dipole analysis (SDA) "offers a very strong method of ocular artifact removal". More recently, Schlogl et al. [24] presented a fully automated regression method.

EOG electrodes measure both eye movement and some brain activity [39]. Since linear regression involves subtracting a portion of the EOG signals from the measured scalp EEG, it is likely that in doing so a portion of brain activity is also removed. In addition, these methods inherently require EOG electrodes, which is not suitable for some applications, especially human computer interaction.

Nonlinear Filtering

Nonlinear filters (e.g., nonlinear recursive least squares or RLS [21]) treat the EOG as a reference signal to be extracted from V̂ in Eq. 1.2, with T_ES consisting of adaptive filter coefficients. Nonlinear filtering includes adaptive filters, statistical models (e.g., ARMAX) and artificial neural networks (ANNs). There has been some work reported on developing online OA removal methods. Selvan et al. [40] applied an ANN-based adaptive filter. Erfanian et al. [25] used an adaptive noise cancellation technique, while He et al. [21] used adaptive filtering for online OA removal. Puthusserypady et al. [23] applied H∞ techniques for online OA removal. More recently, Mehrkanoon et al. [41] used an LMS adaptive filter. All these methods, while online, require a good EOG reference using EOG electrodes.
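To make the adaptive filtering idea concrete, the following is a minimal per-sample RLS canceller in NumPy. It is a simplified sketch rather than the specific RLS variant of [21] or Appendix A.1 (for instance, it uses the reference channels directly, with no tapped delay line), and the parameter values are illustrative only.

```python
import numpy as np

def rls_cancel(d, x, lam=0.999, delta=100.0):
    """Per-sample RLS artifact canceller (simplified sketch).

    d     : (N,) contaminated EEG channel
    x     : (N, M) reference inputs per sample (e.g., EOG channels, or the
            eye-tracker-derived signals this thesis proposes instead)
    lam   : forgetting factor
    delta : initial scale of the inverse correlation matrix
    Returns e, the EEG with the reference-correlated component removed.
    """
    n_samples, m = x.shape
    w = np.zeros(m)              # adaptive weights (playing the role of T_ES)
    p = delta * np.eye(m)        # estimate of the inverse correlation matrix
    e = np.empty(n_samples)
    for n in range(n_samples):
        u = x[n]
        k = p @ u / (lam + u @ p @ u)       # gain vector
        e[n] = d[n] - w @ u                 # a priori error = cleaned sample
        w = w + k * e[n]                    # weight update
        p = (p - np.outer(k, u @ p)) / lam  # inverse correlation update
    return e

# Toy demo: a weak "brain" signal plus a mixture of two reference channels.
rng = np.random.default_rng(1)
x = rng.standard_normal((2000, 2))
d = 0.1 * rng.standard_normal(2000) + x @ np.array([0.8, -0.3])
cleaned = rls_cancel(d, x)
```

Because the a priori error e(n) serves directly as the cleaned EEG sample, the filter runs one sample at a time, which is why this family suits online use whenever a good reference is available.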
Eye Tracking

For OA removal methods (such as those that employ nonlinear filters) that require an EOG reference input, one possible alternative is to use an eye tracking device. For example, the filter coefficients of T_ES in Eq. 1.2 can be adapted using eye tracking data. Kierkels et al. [18] proposed the use of an eye tracker for OA removal. They used an eye tracker to generate pupil positions, which were then used as inputs to a Kalman filter to remove the effects of eye movements. Although the method was shown to work well for eye movements, it was not able to handle blinks, and required a 30-second starting period for stabilizing the Kalman filter (which required manual tuning), during which the OA removal algorithm could not be used reliably. Further, the eye tracker used by Kierkels et al. operated at 50 Hz, while it has been shown [42] that a high-speed eye tracker is required in order not to miss important dynamics of eye movements and blinks. The eye tracking system used in this thesis was developed as part of a series of studies [43–48]. A new generation of low-cost, high performance eye and gaze tracking systems (www.mirametrix.com) will reduce the cost of such systems for use in EEG applications such as those proposed here.

Biophysical Modelling

It is also possible to determine T_ES and V_E in Eq. 1.2 using appropriate models [28] for the geometry and conductivity of the various parts of the head, and for the movement of the eyes and eyelids, which are modelled using equivalent current dipoles [49]. The positions and orientations of the dipoles for eye movements and blinks are fixed. V_E represents the magnitudes of the dipole moments, and T_ES represents the head model, the location of the electrodes, and the location and orientation of the source dipoles. For a single time instant (sample), V_E is a column vector with one element per dipole and T_ES is a matrix, with each row representing the mapping of the sources to each electrode and each column the contribution of each source to all the electrodes. For multiple time points (e.g., an epoch), V_E is a matrix (one column per time sample), while T_ES remains the same. Thus, the OA signal to be removed can be decomposed into a time-independent matrix that can be calculated based on an appropriate head model, and a set of time-dependent vectors based on the source model.

This technique is similar to the SDA used to infer the neurophysiological origins of measured scalp EEG. In SDA (see [2] and [50]), sources are modelled as current dipoles. The forward problem of modelling the propagation of a known current source through the various parts of the head has been addressed as a boundary value problem. The shape and conductivity of the various layers within the head are estimated, and Poisson's equation [50] is used to provide the necessary mathematical relationships. A homogeneous sphere can be used to model the head, but this usually provides an inadequate approximation. In the classical head model, each layer of tissue in the head (the brain, skull, scalp and possibly cerebrospinal fluid or CSF) is modeled as a shell within a sphere. Modelling the various layers as spherical shells is not accurate (e.g., the brain and skull are not typically spherical). More appropriate "realistic" head models use numerical methods to take the irregular shapes of the various layers into account. In this sense, magnetic resonance imaging (MRI) has been used to provide a means of estimating the actual shapes. Such numerical methods include finite element (FE) modelling and boundary element (BE) modelling.

Conventionally, scalp potentials are calculated from each dipole source via an integral equation. Finke et al. [51] proposed using lead field analysis (LFA) as an alternative to the conventional approach. LFA entails calculating the electric field that results at each dipole location due to a current injected and withdrawn at surface electrode sites. The solution to the forward problem is then simply obtained from the scalar product of this electric field and the dipole moment. They reported that the two approaches were comparable in accuracy, with the added benefit of LFA being more robust to skull conductivity variations.

SDA involves an iterative optimization of the best-fit source(s) for a given head model and EEG measurement. While SDA is mostly used to localize brain activity, Berg and Scherg [49] adapted it for modelling eye activity. The positive charge of the cornea relative to the retina generates a relatively constant electric potential difference across each eye, making it possible to model the eye as a moving dipole. Since the stationary eye itself would not contribute to AC recordings of EEG, Berg and Scherg modelled eye activity (eye movements and blinks) as current dipoles instead. They performed a spatiotemporal brain electric source analysis (BESA) [52, 53], and found that vertical and horizontal eye movements as well as blinks can each be modelled well with one dipole per eye. They further described a possible offline OA removal method based on their results, and indicated that using a realistic head model would likely provide more accurate results than their 3-layer spherical model. Their approach is promising, as it does not inherently require EOG electrodes. To date, nobody has reported adapting their method for online OA removal, or comparing their results with realistic head models.
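The elementary building block of any such forward model is the potential that a single current dipole produces at each electrode site. The sketch below implements only the leading-order term for a dipole in an unbounded homogeneous conductor; the spherical-shell and boundary element models discussed next add boundary correction terms, so this illustrates how a column of T_ES depends on geometry rather than providing a usable head model. All positions, moments and the conductivity value are illustrative.

```python
import numpy as np

def dipole_potential(elec_pos, r0, p, sigma=0.33):
    """Leading-order potential of a current dipole in an unbounded
    homogeneous conductor: V = p.(r - r0) / (4*pi*sigma*|r - r0|^3).
    Real spherical-shell and BE head models add boundary terms for
    the skull and scalp that this deliberately omits.

    elec_pos : (E, 3) electrode positions (meters)
    r0       : (3,) dipole location (meters)
    p        : (3,) dipole moment (A*m)
    sigma    : tissue conductivity (S/m); ~0.33 is a common brain value
    """
    r = elec_pos - r0                          # (E, 3) vectors to electrodes
    dist = np.linalg.norm(r, axis=1)
    return (r @ p) / (4.0 * np.pi * sigma * dist**3)

# One column of T_ES per fixed ocular dipole orientation; V_E then carries
# the time-varying dipole magnitudes (positions/moments here are made up).
elec = np.array([[0.07, 0.05, 0.05], [-0.07, 0.05, 0.05], [0.0, 0.09, 0.02]])
col = dipole_potential(elec, r0=np.array([0.03, 0.07, 0.0]),
                       p=np.array([0.0, 1e-8, 0.0]))
print(col)
```

Stacking one such column per ocular dipole yields the time-independent T_ES of Eq. 1.2, which is what makes the decomposition described above attractive for online use: the expensive geometric computation is done once, offline.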
SDA involves an iterative optimization of the best-fit source(s) for a given head model and EEG measurement. While SDA is mostly used to localize brain activity, Berg and Scherg [49] adapted it for modelling eye activity. The positive charge of the cornea relative to the retina generates a relatively constant electric potential difference across each eye, making it possible to model the eye as a moving dipole. Since the stationary eye itself would not contribute to AC recordings of EEG, Berg and Scherg modelled eye activity (eye movements and blinks) as current dipoles instead. They performed a spatiotemporal brain electric source analysis (BESA) [52, 53], and found that vertical and horizontal eye movements as well as blinks can each be modelled well with one dipole per eye. They further described a possible offline OA removal method based on their results, and indicated that using a realistic head model would likely provide more accurate results than their 3-layer spherical model. Their approach is promising, as it does not inherently require EOG electrodes. To date, nobody has reported adapting their method for online OA removal, or comparing their results with realistic head models.

One question that must be addressed for biophysical model-based OA removal is whether to use spherical shell models or realistic head models. Hamalainen and Sarvas [54] compared a realistic head model with both homogeneous and multilayer spherical models. They found that for frontal and frontotemporal sources (e.g., around the eyes), the added computational complexity of realistic models is justified. Menninghaus et al. [55] compared realistic and spherical head models using both simulations and a head phantom. They found that the realistic model gave better accuracy for source dipole localization. Cuffin [56] developed a BE head model based on MRI scans, and used sources implanted in epilepsy patients during surgery to validate the model, showing that realistic models are definitely more accurate if the measured EEG has a good signal to noise ratio. Similarly, Vanrumste et al. [57] compared spherical and realistic head models for source dipole localization. They concluded that for very noisy data, realistic head models did not yield much advantage over spherical models for a single time point, but that for multiple time samples, even under noisy conditions, realistic models were clearly superior. It therefore seems likely, based on previous work, that a biophysical model-based OA removal method would benefit from using a realistic head model.

He et al. [28] used a BE model to efficiently calculate cortical potentials from scalp potentials. FE models are more computationally intensive, but can model anisotropic effects and thus potentially provide more accurate results than BE models. Le and Gevins [58] used FEs to model the scalp, skull and cortex using MRI scans, and validated their model with cortical recordings from a neurosurgery patient. Zhang et al. [59] proposed a second-order FE model and compared their results with a 3-layer spherical head model, concluding that the realistic model provides "substantially" improved accuracy for source localization. Michel et al. [60] compared FE and BE head models, and found that FE models are better for handling local conductivity changes and skull breaches. However, if local conductivities are not known a priori, the computationally more intensive FE models do not provide a significant advantage. Fuchs et al. [61] validated a BE model for inverse source localization, and also developed a novel, very efficient forward solution. These developments have opened the door to using a BE model of the head in an online model-based OA removal method. Already, Flanagan et al. [62] have applied SDA for performing offline OA removal on EEG. With efficient methods for developing realistic MRI-based head models, it is possible to use BESA to develop more accurate descriptions of the effects of OA on scalp EEG, and to use such descriptions to develop an automatic online OA removal approach that does not rely on EOG electrodes.

1.5 Evaluation Techniques

Typically (e.g., in [18]), OA removal methods are evaluated using simulated data. In Fig. 1.2 and Eqs. 1.2 and 1.1, VES (ocular artifact), N (noise) and VBS (true EEG) are generated. Metrics such as the signal to noise ratio of the output (V̂BS) can then be computed with respect to the artifact-free EEG (VBS). For real data, however, VES, N and VBS are unknown, making such an approach unsuitable. Instead, the performance on real data is reported subjectively [19, 30, 63], often based on visual inspection of the resulting waveforms. Puthusserypady and Ratnarajah [64] used the following metric to measure the amount of OA removed:

    R = [ Σ(u=1..M) Σ(t=1..N) ( V(u,t) − V̂BS(u,t) )² ] / [ Σ(u=1..M) Σ(t=1..N) V̂BS(u,t)² ]        (1.3)

where N is the number of samples and M is the number of channels. The numerator represents the power of the artifact signal removed from the EEG, and the denominator the power of the remaining EEG. However, while higher R values are better during an artifact, lower values of R are desirable during non-artifact periods. To use an extreme example, if the OA removal algorithm considered the entire signal to be artifact, then V̂BS → 0 and R → ∞: R is maximal, but the algorithm clearly has poor performance.
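For reference, a direct numpy sketch of Eq. 1.3 (function and variable names are ours):

    import numpy as np

    def removal_ratio(V, V_BS_hat):
        """Eq. 1.3: ratio of the power removed from the EEG to the
        power of the EEG that remains after artifact removal.
        V        : measured EEG, shape (M channels, N samples)
        V_BS_hat : EEG after OA removal, same shape
        """
        removed = np.sum((V - V_BS_hat) ** 2)
        remaining = np.sum(V_BS_hat ** 2)
        return removed / remaining

    # Toy usage: an algorithm that removes half of the measured signal.
    V = np.random.randn(57, 1000)
    print(removal_ratio(V, 0.5 * V))   # -> 1.0 for this toy case

Note that the same R value says nothing about whether what was removed was actually artifact, which is why distortion must be quantified alongside R (the subject of the metric proposed in Chapter 5).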
1.6 Data Collection

In Chapter 2, EOG was collected from three subjects. No EEG data was collected for those subjects. In Chapters 7 and 8, to evaluate the OA removal methods described, substantial data was recorded from an additional four subjects, this time including EOG, EEG and eye tracker signals. Data was collected from each subject during three sessions, each on a different day (i.e., 12 days of recording in total). Data consisting of 57 channels of EEG (including two linked-ear reference electrodes) and 7 channels of EOG was recorded using an existing, available 64-channel amplifier (http://l64.sagura.royalmedicalsystems.com). Two Polhemus sensors were attached to the right hand of each subject to record hand movements, and the high speed eye tracker described in Chapter 3 was used to record eye movements and blinks. The locations of the electrodes on the subject were also recorded using the Polhemus device. A data acquisition board (http://sine.ni.com/nips/cds/view/p/lang/en/nid/10967) was used to synchronize the amplifier, Polhemus sensor and eye tracker signals.

To ensure the highest quality EEG and EOG signals possible, each session required the preparation of the scalp and the application of 64 electrodes in total, taking care to ensure that there was minimal (<2kΩ) impedance between each electrode and the reference electrodes. This preparation could take up to 2 hours for each session. In addition, to carry out the range of tasks necessary for the work reported in Chapters 7 and 8, the recording during each session took up to 2 hours (including short breaks), for a total of 4 hours per session. In total, including the eye tracking images recorded, 540GB of data was recorded for each subject. Therefore, given the cost and effort required, it was deemed prudent to carry out pilot studies first using a total of seven subjects (three used in Chapter 2 with EOG only and four more used in Chapters 7 and 8 with EOG, EEG and eye tracking data). Statistical methods (e.g., ANOVA and t-tests) have been employed to indicate the significance of the findings in the pilot studies. Further experiments would need to be carried out with additional subjects to refine the level of significance of the findings.

1.7 Chapter Summary

The unifying theme of the research presented here is the goal of developing methods for the online removal of ocular artifacts from EEG without the need for EOG electrodes attached to the face. The thesis presented here is written in manuscript style, as permitted by the Faculty of Graduate Studies at the University of British Columbia. In the manuscript style thesis, each chapter represents an individual research effort, culminating in a peer reviewed submission or publication. Each chapter can be read individually, with an overview of the motivation for the research and a review of the relevant literature presented in each chapter. The references are summarized in the bibliography found at the end of each chapter, as per the requirements of the Faculty of Graduate Studies.

In Chapter 2, the results of a new time-frequency analysis of OAs found in the EOG are presented [42]. The results indicate that frequencies higher than previously expected can occur in EOG signals during eye movements and blinks.
Minimum frequencies are therefore suggested for the sampling and filtering of EOG, and for the frame rates of video-based eye trackers. Chapter 3 describes a high speed eye tracker [45] suitable for online OA removal applications, which employs a new technique for recording eye images and pupil positions at least as fast as the minimum frame rate determined in Chapter 2.

In Chapter 4, the results of a study to determine the effects of using an electromagnetic motion tracking sensor on an EEG recording system are presented [65]. The study confirmed the hypothesis that, at typical EEG frequencies, there is no evidence of any spectral interference or distortion of the EEG caused by the sensor.

Chapter 5 presents a metric for measuring a common type of distortion introduced by OA removal methods [66]. In addition, the effect of replacing the EOG with frontal EEG channels is investigated. The results indicate that, when describing the performance of OA removal methods, in addition to reporting how much OA is removed, it is important (and now possible) to quantify how much the underlying EEG might be distorted.

In Chapter 6, the results of a study are presented [67] to determine whether the choice of reference signal affects an existing online OA removal method, and whether the method performs differently during OA and non-OA periods. For the latter, data containing a hand movement task is used. The periods where the subject was performing a hand movement were marked using the motion tracking sensor described in Chapter 4, which could be used without affecting the simultaneously recorded EEG. The results show that it is possible to use frontal EEG channels instead of EOG for online OA removal methods that require an EOG reference input. Further, it is shown that the method performs significantly differently during periods with and without OA, suggesting that the performance of OA removal methods should be reported both for periods with OA and for periods without. In addition, the results indicate that a thorough analysis of the performance of OA removal methods should consider data collected during a variety of mental states and tasks (e.g., a hand extension task).

Based on the findings in Chapter 2 and using the system described in Chapter 3, a novel approach [68] is presented in Chapter 7 for replacing the EOG with an eye tracker-based reference for online OA removal methods that require an EOG reference. The high-speed eye tracker is used along with a new online algorithm for extracting the time course of a blink from eye tracker images to remove both eye movement and blink artifacts without the use of EOG electrodes. The technique used to record eye images and pupil positions from the high speed eye tracker was essential for this work. The effectiveness of the approach is demonstrated with two adaptive filters. A full analysis of the results at electrodes across the scalp for eye movements and blinks of different magnitudes is provided. Small and large saccades of approximately 10◦ and 40◦ on average, respectively, are examined. Similarly, small and large blinks of approximately 20µV and 100µV (as measured by the EOG) on average, respectively, are included in the analysis. It is shown that both filters benefit from the use of an eye tracker instead of EOG.

In Chapter 8, a new method for online OA removal without EOG electrodes during real-time operation is presented [69]. The method uses a biophysical modelling approach, and does not require manual intervention.
Using the metric proposed in Chapter 5 and the findings of the study described in Chapter 6, its performance is compared with existing methods separately for periods with and without eye movements and blinks. The results showed that the method effectively eliminates one common type of distortion caused by other OA removal methods, while consistently removing more OA during eye movements and blinks.

In Chapter 9, the results of the collected works are related to one another in the context of the overall thesis goal of online OA removal without EOG electrodes. The strengths and weaknesses of the research are then presented, along with future directions for research.

References

[1] H. Berger, "Electroencephalogram in humans," Archiv fur Psychiatrie und Nervenkrankheiten, vol. 87, pp. 527–570, 1929.
[2] E. Niedermeyer and F. L. da Silva, Electroencephalography: Basic Principles, Clinical Applications, and Related Fields, 4th ed. Baltimore, MD, USA: Lippincott Williams & Wilkins, 1998.
[3] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control," Clinical Neurophysiology, vol. 113, no. 6, pp. 767–791, 2002.
[4] N. M. Correa, T. Eichele, T. Adali, Y.-O. Li, and V. D. Calhoun, "Multi-set canonical correlation analysis for the fusion of concurrent single trial ERP and functional MRI," NeuroImage, vol. 50, no. 4, pp. 1438–1445, 2010.
[5] J. R. Sato, C. Rondinoni, M. Sturzbecher, D. B. de Araujo, and E. A. Jr., "From EEG to BOLD: brain mapping and estimating transfer functions in simultaneous EEG-fMRI acquisitions," NeuroImage, vol. 50, no. 4, pp. 1416–1426, 2010.
[6] A. M. Petrun and M. Kamenik, "Monitoring the depth of anesthesia with a BIS monitor," Zdravniski Vestnik - Slovenian Medical Journal, vol. 79, no. 1, pp. 43–47, 2010.
[7] S. Buoni, R. Zannolli, C. D. Felice, A. D. Nicola, V. Guerri, B. Guerra, S. Casali, B. Pucci, L. Corbini, F. Mari, A. Renieri, M. Zappella, and J. Hayek, "EEG features and epilepsy in MECP2-mutated patients with the Zappella variant of Rett syndrome," Clinical Neurophysiology, vol. 121, no. 5, pp. 652–657, 2010.
[8] M. M. Basha, S. Mittal, D. R. Fuerst, I. Zitron, M. Rayes, and A. K. Shah, "Quantitative interictal intracranial EEG monitoring helps define the epileptogenic focus in patients with primary brain tumors," Epilepsia, vol. 49, p. 3, 2008.
[9] R. Singh, N. Zecavati, S. Weinstein, and J. Carpenter, "Seizure incidence, EEG characteristics, and short-term outcome in the pediatric stroke population," Epilepsia, vol. 50, p. 172, 2009.
[10] D. Martinez, T. C. Breitenbach, and M. do Carmo Sfreddo Lenz, "Light sleep and sleep time misperception - relationship to alpha-delta sleep," Clinical Neurophysiology, vol. 121, no. 5, pp. 704–711, 2010.
[11] C. Oreja-Guevara, "Neuromarketing," Neurologia, pp. 4–7, 2009.
[12] Y. Dong, Z. Hu, K. Uchimura, and N. Murayama, "Driver inattention monitoring system for intelligent vehicles: a review," in 2009 IEEE Intelligent Vehicles Symposium, 2009, pp. 875–880.
[13] J. A. Caldwell, J. L. Caldwell, D. L. Brown, and J. K. Smith, "The effects of 37 hours of continuous wakefulness on the physiological arousal, cognitive performance, self-reported mood, and simulator flight performance of F-117A pilots," Military Psychology, vol. 16, no. 3, pp. 163–181, 2004.
[14] C. Dussault, J. C. Jouanin, and C. Y. Guezennec, "EEG and ECG changes during selected flight sequences," Aviation, Space, and Environmental Medicine, vol. 75, no. 10, pp. 889–897, 2004.
[15] J. Gao, C. Zheng, and P. Wang, "Online removal of muscle artifact from electroencephalogram signals based on canonical correlation analysis," Clinical EEG and Neuroscience, vol. 41, no. 1, pp. 53–59, 2010.
[16] S. Devuyst, T. Dutoit, P. Stenuit, M. Kerkhofs, and E. Stanus, "Removal of ECG artifacts from EEG using a modified independent component analysis approach," in Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2008, pp. 5204–5207.
[17] G. Gomez-Herrero, W. D. Clercq, H. Anwar, O. Kara, K. Egiazarian, S. V. Huffel, and W. V. Paesschen, "Automatic removal of ocular artifacts in the EEG without an EOG reference channel," in Proceedings of the 7th Nordic Signal Processing Symposium (NORSIG), 2006, pp. 130–133.
[18] J. J. M. Kierkels, J. Riani, J. W. M. Bergmans, and G. J. M. van Boxtel, "Using an eye tracker for accurate eye movement artifact correction," IEEE Transactions on Biomedical Engineering, vol. 54, no. 7, pp. 1256–1267, 2007.
[19] K. H. Ting, P. C. W. Fung, C. Q. Chang, and F. H. Y. Chan, "Automatic correction of artifact from single-trial event-related potentials by blind source separation using second order statistics only," Medical Engineering & Physics, vol. 28, no. 8, pp. 780–794, 2006.
[20] G. L. Wallstrom, R. E. Kass, A. Miller, J. F. Cohn, and N. A. Fox, "Automatic correction of ocular artifacts in the EEG: a comparison of regression-based and component-based methods," International Journal of Psychophysiology, vol. 53, no. 2, pp. 105–119, 2004.
[21] P. He, G. Wilson, and C. Russell, "Removal of ocular artifacts from electro-encephalogram by adaptive filtering," Medical and Biological Engineering and Computing, vol. 42, no. 3, pp. 407–412, 2004.
[22] T. Liu and D. Yao, "Removal of the ocular artifacts from EEG data using a cascaded spatiotemporal processing," Computer Methods and Programs in Biomedicine, vol. 83, no. 2, pp. 95–103, 2006.
[23] S. Puthusserypady and T. Ratnarajah, "H∞ adaptive filters for eye blink artifact minimization from electroencephalogram," IEEE Signal Processing Letters, vol. 12, no. 12, pp. 816–819, 2005.
[24] A. Schlogl, C. Keinrath, D. Zimmermann, R. Scherer, R. Leeb, and G. Pfurtscheller, "A fully automated correction method of EOG artifacts in EEG recordings," Clinical Neurophysiology, vol. 118, no. 1, pp. 98–104, 2007.
[25] D. Talsma, "Auto-adaptive averaging: detecting artifacts in event-related potential data using a fully automated procedure," Psychophysiology, vol. 45, no. 2, pp. 216–228, 2008.
[26] Y. Li, Z. Ma, W. Lu, and Y. Li, "Automatic removal of the eye blink artifact from EEG using an ICA-based template matching approach," Physiological Measurement, vol. 27, no. 4, pp. 425–436, 2006.
[27] C. A. Joyce, I. F. Gorodnitsky, and M. Kutas, "Automatic removal of eye movement and blink artifacts from EEG data using blind component separation," Psychophysiology, vol. 41, no. 2, pp. 313–325, 2004.
[28] B. He, Y. H. Wang, and D. S. Wu, "Estimating cortical potentials from scalp EEGs in a realistically shaped inhomogeneous head model by means of the boundary element method," IEEE Transactions on Biomedical Engineering, vol. 46, no. 10, pp. 1264–1268, 1999.
[29] L. Vigon, M. R. Saatchi, J. E. W. Mayhew, and R. Fernandes, "Quantitative evaluation of techniques for ocular artefact filtering of EEG waveforms," IEE Proceedings - Science, Measurement and Technology, vol. 147, no. 5, pp. 219–228, 2000.
[30] N. Ille, P. Berg, and M. Scherg, "Artifact correction of the ongoing EEG using spatial filters based on artifact and brain signal topographies," Journal of Clinical Neurophysiology, vol. 19, no. 2, pp. 113–124, 2002.
[31] J. Han and K. S. Park, "Independent component analysis for identification and minimization," in Proceedings of the First Joint BMES/EMBS Conference, vol. 1, 1999, p. 428.
[32] T. P. Jung, S. Makeig, M. Westerfield, J. Townsend, E. Courchesne, and T. J. Sejnowski, "Analysis and visualization of single-trial event-related potentials," Human Brain Mapping, vol. 14, no. 3, pp. 166–185, 2001.
[33] C. J. James and O. J. Gibson, "Temporally constrained ICA: an application to artifact rejection in electromagnetic brain signal analysis," IEEE Transactions on Biomedical Engineering, vol. 50, no. 9, pp. 1108–1116, 2003.
[34] T. N. Hoang, W. El-Deredy, D. E. Bentley, A. K. P. Jones, P. J. Lisboa, and F. Mcglone, "Dipole source localisation using independent component analysis: single trial localisation of laser evoked pain," vol. 1, 2004, pp. 403–406.
[35] A. C. Tang, M. T. Sutherland, and C. J. McKinney, "Validation of SOBI components from high-density EEG," NeuroImage, vol. 25, no. 2, pp. 539–553, 2005.
[36] T. Zikov, S. Bibian, G. A. Dumont, M. Huzmezan, and C. R. Ries, "A wavelet based de-noising technique for ocular artifact correction of the electroencephalogram," in Proceedings of the Second Joint EMBS/BMES Conference, Houston, TX, USA, 2002, pp. 98–105.
[37] R. J. Croft and R. J. Barry, "Removal of ocular artifact from the EEG: a review," Neurophysiologie Clinique/Clinical Neurophysiology, vol. 30, no. 1, pp. 5–19, 2000.
[38] G. Gratton, "Dealing with artifacts: the EOG contamination of the event-related brain potential," Behavior Research Methods, Instruments, & Computers, vol. 30, no. 1, pp. 44–53, 1998.
[39] R. J. Croft and R. J. Barry, "Issues relating to the subtraction phase in EOG artefact correction of the EEG," International Journal of Psychophysiology, vol. 44, no. 3, pp. 187–195, 2002.
[40] S. Selvan and R. Srinivasan, "Recurrent neural network based efficient adaptive filtering technique for the removal of ocular artefacts from EEG," IETE Technical Review, vol. 17, no. 1-2, pp. 73–78, 2000.
[41] S. Mehrkanoon, M. Moghavvemi, and H. Fariborzi, "Real time ocular and facial muscle artifacts removal from EEG signals using LMS adaptive algorithm," New York, NY, USA: IEEE, 2007.
[42] B. Noureddin, P. D. Lawrence, and G. E. Birch, "Time-frequency analysis of eye blinks and saccades in EOG for EEG artifact removal," in Proceedings of the 3rd International IEEE/EMBS Conference on Neural Engineering (CNE '07), 2007, pp. 564–567.
[43] B. Noureddin, P. D. Lawrence, and C. F. Man, "A non-contact device for tracking gaze in a human computer interface," Computer Vision and Image Understanding, vol. 98, no. 1, pp. 52–82, 2005.
[44] C. Hennessey, B. Noureddin, and P. Lawrence, "A single camera eye-gaze tracking system with free head motion," in Proceedings of the 2006 Symposium on Eye Tracking Research & Applications, 2006, pp. 87–94.
[45] ——, "Fixation precision in high-speed noncontact eye-gaze tracking," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 38, no. 2, pp. 289–298, 2008.
[46] C. Hennessey and P. Lawrence, "3D point-of-gaze estimation on a volumetric display," in Proceedings of the 2008 Symposium on Eye Tracking Research & Applications. New York, NY, USA: ACM, 2008, p. 59.
[47] ——, "Noncontact binocular eye-gaze tracking for point-of-gaze estimation in three dimensions," IEEE Transactions on Biomedical Engineering, vol. 56, no. 3, pp. 790–799, 2009.
[48] ——, "Improving the accuracy and reliability of remote system-calibration-free eye-gaze tracking," IEEE Transactions on Biomedical Engineering, vol. 56, no. 7, pp. 1891–1900, 2009.
[49] P. Berg and M. Scherg, "Dipole models of eye movements and blinks," Electroencephalography and Clinical Neurophysiology, vol. 79, no. 1, pp. 36–44, 1991.
[50] S. Baillet, J. C. Mosher, and R. M. Leahy, "Electromagnetic brain mapping," IEEE Signal Processing Magazine, vol. 18, no. 6, pp. 14–30, 2001.
[51] S. Finke, R. M. Gulrajani, and L. Gotman, "Conventional and reciprocal approaches to the inverse dipole localization problem of electroencephalography," IEEE Transactions on Biomedical Engineering, vol. 50, no. 6, pp. 657–666, 2003.
[52] M. Scherg and T. W. Picton, "Separation and identification of event-related potential components by brain electric source analysis," Electroencephalography and Clinical Neurophysiology, pp. 24–37, 1991.
[53] M. Scherg, "Fundamentals of dipole source potential analysis," in Advances in Audiology, Vol. 6: Auditory Evoked Magnetic Fields and Electric Potentials, F. Grandori, M. Hoke, and G. L. Romani, Eds. Basel, Switzerland: S. Karger, 1990, pp. 40–69.
[54] M. S. Hamalainen and J. Sarvas, "Realistic conductivity geometry model of the human head for interpretation of neuromagnetic data," IEEE Transactions on Biomedical Engineering, vol. 36, no. 2, pp. 165–171, 1989.
[55] E. Menninghaus, B. Lutkenhoner, and S. L. Gonzalez, "Localization of a dipolar source in a skull phantom - realistic versus spherical model," IEEE Transactions on Biomedical Engineering, vol. 41, no. 10, pp. 986–989, 1994.
[56] B. N. Cuffin, "EEG localization accuracy improvements using realistically shaped head models," IEEE Transactions on Biomedical Engineering, vol. 43, no. 3, pp. 299–303, 1996.
[57] B. Vanrumste, G. V. Hoey, R. V. de Walle, M. R. P. D'Have, I. A. Lemahieu, and P. A. J. M. Boon, "Comparison of performance of spherical and realistic head models in dipole localization from noisy EEG," Medical Engineering & Physics, vol. 24, no. 6, pp. 403–418, 2002.
[58] J. Le and A. Gevins, "Method to reduce blur distortion from EEGs using a realistic head model," IEEE Transactions on Biomedical Engineering, vol. 40, no. 6, pp. 517–528, 1993.
[59] Y. C. Zhang, S. A. Zhu, and B. He, "A second-order finite element algorithm for solving the three-dimensional EEG forward problem," Physics in Medicine and Biology, vol. 49, no. 13, pp. 2975–2987, 2004.
[60] C. M. Michel, M. M. Murray, G. Lantz, S. Gonzalez, L. Spinelli, and R. G. de Peralta, "EEG source imaging," Clinical Neurophysiology, vol. 115, no. 10, pp. 2195–2222, 2004.
[61] M. Fuchs, M. Wagner, and J. Kastner, "Boundary element method volume conductor models for EEG source reconstruction," Clinical Neurophysiology, vol. 112, no. 8, pp. 1400–1407, 2001.
[62] D. Flanagan, R. Agarwal, Y. H. Wang, and J. Gotman, "Improvement in the performance of automated spike detection using dipole source features for artefact rejection," Clinical Neurophysiology, vol. 114, no. 1, pp. 38–49, 2003.
[63] Kierkels, "A model-based objective evaluation of eye movement correction in EEG recordings," p. 246, 2006.
[64] S. Puthusserypady and T. Ratnarajah, "Robust adaptive techniques for minimization of EOG artefacts from EEG signals," Signal Processing, vol. 86, no. 9, pp. 2351–2363, 2006.
[65] A. Bashashati, B. Noureddin, R. K. Ward, P. D. Lawrence, and G. E. Birch, "An experimental study to investigate the effects of a motion tracking electromagnetic sensor during EEG data acquisition," IEEE Transactions on Biomedical Engineering, vol. 53, no. 3, pp. 559–563, 2006.
[66] B. Noureddin, P. D. Lawrence, and G. E. Birch, "Quantitative evaluation of ocular artifact removal methods based on real and estimated EOG signals," in Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2008), 2008, pp. 5041–5044.
[67] ——, "Effects of task and EEG-based reference signal on performance of on-line ocular artifact removal from real EEG," in Proceedings of the 4th International IEEE/EMBS Conference on Neural Engineering (NER '09), 2009, pp. 614–617.
[68] ——, "Online removal of eye movement and blink EEG artifacts using a high speed eye tracker," in submission.
[69] ——, "Dipole modeling for low distortion, automated online ocular artifact removal from EEG without EOG," in submission.
[70] A. Bashashati, B. Noureddin, R. K. Ward, P. Lawrence, and G. E. Birch, "Effect of eyeblinks on a self-paced brain interface design," Clinical Neurophysiology, vol. 118, no. 7, pp. 1639–1647, 2007.

Chapter 2

Time-frequency Analysis of Eye Blinks and Saccades in EOG for EEG Artifact Removal 1

1 A version of this chapter has been published. Noureddin, B., Lawrence, P.D., Birch, G.E., 2007. Time-frequency Analysis of Eye Blinks and Saccades in EOG for EEG Artifact Removal. In Proceedings of the 3rd IEEE EMBS Conference on Neural Engineering, pp. 564–567.

2.1 Introduction

Eye movements and blinks pose a serious problem for electroencephalogram (EEG) measurements, and their removal from measured EEG is an ongoing research problem [1][2][3][4]. Almost all current approaches to ocular artifact (OA) detection and removal [5][6][7] use one or more electro-oculogram (EOG) signals either directly or indirectly as a reference. For clinical EEG and other EEG applications (e.g., brain-computer interfaces or BCIs), collecting EOG is relatively simple and inexpensive. Detection of large OAs (e.g., blinks) can be accomplished using simple threshold operators, while OA removal can be accomplished using more sophisticated methods such as blind source separation [8][9]. However, the EOG signal contains not only eye movements and blinks, but also neural sources [10] (including those seen at EEG electrodes) and other sources of artifacts such as facial twitches. Also, during a blink or large eye movement, EOG electrodes move along with the facial muscles around the eyes, introducing additional noise into the measurement. For applications where parts of the EEG can simply be rejected, and there is no restriction on placing extra electrodes on the subject's face, the EOG may be used to detect most eye movements and mark portions of the EEG for rejection, assuming the EOG is filtered and sampled appropriately. For many applications, however, it is unacceptable to discard large portions of the EEG. In those cases, since the EOG may contain part of the EEG of interest, using the EOG as a reference in the OA removal scheme runs the risk of distorting the EEG after artifact removal.
For other, real-world applications (e.g., BCIs), it is highly desirable (and sometimes even a requirement) not to place extra electrodes on the subject's face. Thus, alternative ways of measuring eye movements and blinks should be explored as possible references for the removal of OAs from EEG. One such alternative is a non-contact video-based eye tracking device (ETD). Unlike EOG, these devices do not require anything to be attached to the subject's face or head, and can be used to extract only eye movements and blinks. Since their measurement is not contaminated with any other biological sources, they can be used as a source for OA removal without significantly distorting the underlying EEG of interest. While such a device introduces an extra piece of equipment, it can simultaneously provide the point-of-gaze of the subject, which can be very useful in a clinical EEG experiment or BCI. However, in order to select an appropriate ETD, the time-frequency characteristics of the EOG must first be analyzed, to ensure that the ETD is capable of providing output at a rate sufficient to capture the highest frequencies present in the EOG. Where the addition of an ETD is not possible or cannot be justified for the given application (e.g., simple OA rejection), such an analysis is essential in order to select appropriate filtering and sampling parameters for the EOG.

While some work has been done to analyze the frequency components of various OAs in the EEG, there has been little work to show the time-frequency characteristics of specific types of OAs in the original EOG. Even the few studies [11][12][13][14] that have examined the frequency content at EEG sites do not analyze higher frequency components. Also, since neither neural nor ocular signals are stationary, it is inappropriate to use simple spectral analysis. Instead, a time-frequency analysis (e.g., spectrogram or short-time Fourier transform) is more appropriate [15]. This chapter provides a time-frequency analysis of blinks and eye movements (specifically large, rapid eye movements or "saccades") in an EOG signal measured at a very high sampling rate. Based on the analysis, we suggest minimum filtering and sampling frequencies needed for EOG to avoid missing eye movements and blinks that may cause artifacts in the EEG. In addition, a minimum sampling rate for an eye tracker-based alternative for OA removal is suggested.

2.2 Methods

Data was collected from 3 subjects (one male, two female, all between 20 and 35 years of age), sitting approximately 50cm in front of a computer monitor. EOG electrodes were attached at the right outer canthus and nasion for horizontal EOG (HEOG) measurement, and above and below the right eye for vertical EOG (VEOG). Both signals were amplified with an amplifier having a nominal gain of 1100, equipped with a second-order Butterworth low-pass filter with a cutoff frequency of 12kHz. The data was sampled at 24.5kHz using a PC and a 12-bit analog to digital converter board. Each subject was instructed to perform four tasks. During the first task, a 4x4 grid of numbers was displayed on the computer monitor. Each number was sequentially highlighted for two seconds, and the subject was instructed to follow the highlighted number.
The data from this task was used to analyze involuntary blinks. During the second task, the same grid was shown, but this time the subject was instructed to blink at least once during a two second interval. The data from this task was used to analyze voluntary blinks. Seven trials of 32 seconds each were collected from each subject for the first and second tasks. During the third task, a small red dot was shown sequentially for one second on each corner of the computer monitor, and the subject was instructed to follow the dot. Twenty-five trials for a total of 100 seconds of data were collected from each subject, and the data from this task was used to analyze medium-sized saccades. During the final task, the subject was instructed to look at the ceiling, at the floor and back up at the ceiling five times in a row, and then to look to the far left, the far right and again the far left five times in a row. Each eye movement was guided to last about two seconds. The entire trial was repeated five times for each subject, and the data from this task was used to analyze large saccades.

Once the two EOG signals (VEOG and HEOG) were digitized, two digital filters were applied to each signal: a 60Hz notch filter to remove line noise, and a 1000Hz 2nd-order Butterworth low-pass filter. The spectrogram of each signal was then calculated using a Hanning window. After visual inspection of several trials, it was determined that no significant power existed beyond 200Hz for any of the collected data. Thus, for each time epoch in the spectrogram, the power in the 0.1-200Hz range was examined (see Figure 2.1 for an example). For each time epoch, a frequency f was calculated such that the power in the 0-f Hz range contains 25% of the total power. Similarly, the frequencies containing 50% and 95% of the total power were calculated for each time epoch.

Table 2.1: Frequencies (Hz) required to capture 25%, 50% and 95% of the power at any given time for blinks (voluntary and involuntary) and saccades (medium and large), as measured by the EOG.

                                  Blinks                    Saccades
                           25%    50%     95%        25%    50%     95%
  Subject 1  VEOG  min     0.0    0.0     1.5        0.0    0.0     1.5
                   max     1.5    6.0   140.6        1.5    9.0   134.6
                   mean    0.4    1.1    22.6        0.2    0.8    16.2
             HEOG  min     0.0    0.0     1.5        0.0    0.0     1.5
                   max     4.5   35.9   179.5        1.5    3.0   130.1
                   mean    0.8    3.1    57.5        0.3    0.7    16.4
  Subject 2  VEOG  min     0.0    0.0     1.5        0.0    0.0     1.5
                   max     1.5   19.4   181.0        1.5    1.5   179.5
                   mean    0.2    1.1    20.3        0.1    0.4    12.1
             HEOG  min     0.0    0.0     1.5        0.0    0.0     1.5
                   max     3.0   44.9   178.0        3.0    3.0   116.7
                   mean    0.5    2.5    50.8        0.2    0.4     7.9
  Subject 3  VEOG  min     0.0    0.0     1.5        0.0    0.0     1.5
                   max    23.9   46.4   121.2        3.0    3.0    86.8
                   mean    3.0    8.6    53.6        0.2    0.5     9.4
             HEOG  min     0.0    0.0     1.5        0.0    0.0     1.5
                   max    10.5   44.9   166.1        3.0    4.5    98.7
                   mean    0.7    3.0    52.4        0.2    0.4     7.4
  Total      VEOG  min     0.0    0.0     1.5        0.0    0.0     1.5
                   max    23.9   46.4   181.0        3.0    9.0   179.5
                   mean    1.4    4.2    34.9        0.2    0.6    12.6
             HEOG  min     0.0    0.0     1.5        0.0    0.0     1.5
                   max    10.5   44.9   179.5        3.0    4.5   130.1
                   mean    0.7    2.9    54.2        0.2    0.5    11.0
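As a concrete sketch of the per-epoch computation described above (window type and segment length are illustrative, not the exact analysis settings used in this chapter), the frequencies below which a given fraction of each epoch's power lies can be computed as follows:

    import numpy as np
    from scipy.signal import spectrogram

    def power_fraction_freqs(x, fs, fracs=(0.25, 0.50, 0.95)):
        """For each spectrogram epoch, the lowest frequency f such that
        the band 0..f Hz contains the requested fraction of that
        epoch's total power."""
        f, t, Sxx = spectrogram(x, fs=fs, window='hann',
                                nperseg=int(fs * 0.25))
        cum = np.cumsum(Sxx, axis=0)
        cum /= cum[-1, :]                        # normalize per epoch
        out = {}
        for frac in fracs:
            idx = np.argmax(cum >= frac, axis=0) # first bin reaching frac
            out[frac] = f[idx]
        return t, out

    # Toy usage on a synthetic trace at the 24.5kHz sampling rate.
    fs = 24500
    x = np.random.randn(fs * 2)
    t, freqs = power_fraction_freqs(x, fs)
    print(freqs[0.95].min(), freqs[0.95].max(), freqs[0.95].mean())

The min, max and mean of these per-epoch frequencies over all trials are the quantities tabulated in Table 2.1.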
2.3 Results

The minimum and maximum frequencies containing 25%, 50% and 95% of the power at any given time epoch in the spectrogram, and the average frequency over all epochs containing 25%, 50% and 95% of the power, were calculated over all trials for each subject and for each task. The results are shown in Table 2.1 for blinks (voluntary and involuntary) and for saccades (medium and large).

From Table 2.1, it appears that there is a notable difference between the frequency content of blinks and saccades. A t-test showed that the mean frequency required to account for 95% of the power over all subjects is statistically significantly different (p < .05) between blinks and saccades. The data also show that frequencies as high as 181Hz need to be considered to account for 95% of the power for a particular subject and task. Similarly, on average over all subjects, frequencies up to 54Hz for blinks and up to 13Hz for saccades must be considered to account for 95% of the power at any given time.

Table 2.2 compares mean frequencies for voluntary and involuntary blinks and for medium and large saccades. It appears that there is no substantial difference in frequency content between medium and large saccades, but that there may be some difference in the frequency content between voluntary and involuntary blinks. A t-test revealed that the mean frequency required to account for 95% of the power over all subjects is statistically significantly different (p < .05) between voluntary and involuntary blinks in both VEOG and HEOG, and between medium and large saccades in HEOG but not in VEOG.

Table 2.2: Comparison of frequencies (Hz) required, on average, to capture 25%, 50% and 95% of the total power at any given time instant for each of the four tasks.

                              25%   50%    95%
  Large saccades      VEOG    0.2   0.6   11.6
                      HEOG    0.2   0.4    7.5
  Medium saccades     VEOG    0.2   0.6   13.4
                      HEOG    0.3   0.6   13.6
  Involuntary blinks  VEOG    2.1   6.0   43.1
                      HEOG    0.7   3.4   62.7
  Voluntary blinks    VEOG    0.8   2.4   26.7
                      HEOG    0.7   2.5   45.7

Figure 2.1 shows a sample of the measured VEOG, HEOG and spectrogram for each task for a single subject. In this figure it can be seen that the amplitudes of the VEOG and HEOG signals for saccades vary widely over time. At times they are similar to the amplitudes for blinks. Although the propagation of blinks and saccades to EEG electrode sites may be different, this suggests that saccades should not be ignored for effective OA removal.

Figure 2.1: Sample measured VEOG, HEOG and corresponding spectrograms for each task for a single subject: (a) VEOG (voluntary blinks); (b) HEOG (voluntary blinks); (c) VEOG (involuntary blinks); (d) HEOG (involuntary blinks); (e) VEOG (medium saccades); (f) HEOG (medium saccades); (g) VEOG (large saccades); (h) HEOG (large saccades). Gray levels in each spectrogram represent power spectral density (dB) at the corresponding time epoch and frequency, with brighter values representing a higher power spectral density.

2.4 Conclusions

We found that saccades can have amplitudes approaching those of blinks in the EOG. Further research is required to see whether the same holds at EEG electrode sites. In addition, on average over all subjects (Table 2.1), frequencies up to 54Hz need to be considered to account for 95% of the power. Our results further indicate that frequencies as high as 181Hz can appear in a subject's EOG during certain tasks. This suggests that if EOG is used in an EEG ocular artifact removal scheme, it should be filtered below 181Hz and sampled at least at 362Hz to avoid aliasing. In addition, if, as we suggested above, a video-based eye tracking device is used instead of the EOG to remove ocular artifacts, it would need to provide eye tracking output at a frequency of at least 362Hz.

References
[1] A. Kandaswamy, V. Krishnaveni, S. Jayaraman, N. Malmurugan, and K. Ramadoss, "Removal of ocular artifacts from EEG - a survey," IETE Journal of Research, vol. 51, no. 2, pp. 121–130, 2005.
[2] G. L. Wallstrom, R. E. Kass, A. Miller, J. F. Cohn, and N. A. Fox, "Automatic correction of ocular artifacts in the EEG: a comparison of regression-based and component-based methods," International Journal of Psychophysiology, vol. 53, no. 2, pp. 105–119, 2004.
[3] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control," Clinical Neurophysiology, vol. 113, no. 6, pp. 767–791, 2002.
[4] L. Vigon, M. R. Saatchi, J. E. W. Mayhew, and R. Fernandes, "Quantitative evaluation of techniques for ocular artefact filtering of EEG waveforms," IEE Proceedings - Science, Measurement and Technology, vol. 147, no. 5, pp. 219–228, 2000.
[5] R. Croft, J. Chandler, R. Barry, N. Cooper, and A. Clarke, "EOG correction: a comparison of four methods," Psychophysiology, vol. 42, no. 1, pp. 16–24, 2005.
[6] T. N. Hoang, W. El-Deredy, D. E. Bentley, A. K. P. Jones, P. J. Lisboa, and F. Mcglone, "Dipole source localisation using independent component analysis: single trial localisation of laser evoked pain," vol. 1, 2004, pp. 403–406.
[7] N. Ille, P. Berg, and M. Scherg, "Artifact correction of the ongoing EEG using spatial filters based on artifact and brain signal topographies," Journal of Clinical Neurophysiology, vol. 19, no. 2, pp. 113–124, 2002.
[8] K. H. Ting, P. C. W. Fung, C. Q. Chang, and F. H. Y. Chan, "Automatic correction of artifact from single-trial event-related potentials by blind source separation using second order statistics only," Medical Engineering & Physics, vol. 28, no. 8, pp. 780–794, 2006.
[9] C. A. Joyce, I. F. Gorodnitsky, and M. Kutas, "Automatic removal of eye movement and blink artifacts from EEG data using blind component separation," Psychophysiology, vol. 41, no. 2, pp. 313–325, 2004.
[10] R. J. Croft and R. J. Barry, "Issues relating to the subtraction phase in EOG artefact correction of the EEG," International Journal of Psychophysiology, vol. 44, no. 3, pp. 187–195, 2002.
[11] D. Hagemann and E. Naumann, "The effects of ocular artifacts on (lateralized) broadband power in the EEG," Clinical Neurophysiology, vol. 112, no. 2, pp. 215–231, 2001.
[12] C. Bolger, S. Bojanic, N. Sheahan, D. Coakley, and J. Malone, "Dominant frequency content of ocular microtremor from normal subjects," Vision Research, vol. 39, no. 11, pp. 1911–1915, 1999.
[13] M. Eizenman, P. Hallett, and R. Frecker, "Power spectra for ocular drift and tremor," Vision Research, vol. 25, no. 11, pp. 1635–1640, 1985.
[14] T. Gasser, L. Sroka, and J. Mocks, "The transfer of EOG activity into the EEG for eyes open and closed," Electroencephalography and Clinical Neurophysiology, vol. 61, no. 2, pp. 181–193, 1985.
[15] A. Cohen, "Biomedical signals: origin and dynamic characteristics; frequency-domain analysis," in The Biomedical Engineering Handbook, 3rd ed., vol. 2. Boca Raton, FL: CRC Press, 2006.

Chapter 3

Fixation Precision in High-Speed Noncontact Eye-Gaze Tracking 2

2 A version of this chapter has been published. Hennessey, C., Noureddin, B., Lawrence, P.D., 2008. Fixation Precision in High-Speed Noncontact Eye-Gaze Tracking. IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, vol. 38, no. 2, pp. 289–298, April 2008.

3.1 Introduction

Eye-gaze tracking systems offer great promise as an interface between humans and machines.
Eye-gaze can provide insight into the intention of a user, as a user typically looks at objects of interest before acting upon them [1]. Real-time eye-gaze tracking systems allow dynamic interaction between the user and system, using the human visual system for both feedback and control [2]. Tracking the fixations of a user provides a means for using eye-gaze information as a pointing device [3]. The use of eye-gaze as an input modality has not had widespread appeal with the general population, however, due in part to the shortcomings of current eye-gaze tracking technology. Some of the key issues which must be improved upon are accuracy, precision, latency, ease of use, comfort and cost [4] [5]. Recent advances in the development of non-contact video-based eye-gaze tracking systems have removed the need for contact with the user and have greatly improved the user's comfort [6]. Non-contact systems, coupled with advanced Point-Of-Gaze (POG) estimation algorithms which compute the location of the eye in 3D space, can now operate without significantly restricting the user's head motion. The increased freedom of motion greatly improves the ease of use of the system.

Eye-gaze tracking systems in general, and non-contact video-based systems in particular, suffer from low precision, or fluctuating fixation estimates. The low precision is caused not just by sensor and system noise, but is also due in part to the natural motions of the unconstrained head and eye. Considerable research has focused on developing real-time applications that compensate for the low precision, including the use of large pointing targets [7] [8], fisheye lenses [9], and enhanced pointing algorithms such as MAGIC pointing [3] and the Grab and Hold Algorithm [10].

In this chapter, a definition for fixation precision in the context of eye-gaze tracking is provided. Techniques for improving the precision of non-contact, video-based eye-gaze tracking systems at very high sampling rates are described. The high speed sampling techniques developed are evaluated on the High Speed Pupil-Corneal Reflection vector method (HS P-CR) and a 3D model-based POG method allowing free head motion, at each of three different POG sampling rates. Given the achieved performance of each POG method, it is shown how digital filtering can be used to improve fixation precision at each POG sampling rate for both methods.

3.2 Background

3.2.1 Eye Movements

Although the surrounding world appears stable, the head and eyes are continuously in motion and the images formed on the retinas are constantly changing. The stable view of the external world is only an artificially stabilized perception. Natural human vision is typically made up of short, relatively stable fixations connected by rapid reorientations of the eye (saccades). It is during fixations that the sensory system of the eye collects information for cognitive processing; during saccades, sensitivity of visual input is reduced [11]. Fixations typically remain within 1◦ of visual angle and last from 200 to 600 ms [1]. While fixating, the eye slowly drifts, with a typical amplitude of less than 0.1◦ of visual angle and a frequency of oscillation of 2 to 5 Hz.
This drift is corrected by small fast shifts in eye orientation called microsaccades, which have a similar amplitude to the drift. Superimposed on this motion is a tremor with a typical amplitude of less than 0.008◦ of visual angle, with frequency components from 30-100 Hz and at times up to 150 Hz [12]. These small eye motions during a fixation are thought to be required to continuously refresh the sensors in the eye, as an artificially stabilized image will fade from view [13]. Saccades most frequently travel from 1 to 40◦ of visual angle and last 30 to 120 ms. Between saccades there is typically a 100 to 200 ms delay [1]. A number of other task specific eye motions exist, such as smooth pursuit, nystagmus and vergence, which are not often found in the normal interaction between a user and a desktop monitor. The focus of this chapter is on the POG during fixations, which are located between saccadic reorientations of the eye.

3.2.2 Fixation Detection and Filtering

A clear identification of the beginning and end of a fixation within the raw eye data stream is important, as filtering should only be performed on POG data located within a single fixation. Poor identification of the beginning or end of a fixation may degrade fixation precision by incorporating POG data from saccades or neighboring fixations. A number of methods have been developed for identifying the start and end of fixations in raw eye data streams using position, velocity and acceleration thresholding, based on a priori knowledge of the behavior of eye-gaze movements [14] [15]. The fixation identification method used in this chapter is based on the position variance of the eye data, as described by Jacob [1].

Due to the natural motions of the eye, fixation precision in eye-gaze tracking systems may be low, limiting the range of potential applications. However, as noted by Jacob [1], this low precision can be improved by low pass filtering the estimated POG data to reduce noise, at the expense of increased latency. The desired degree of filtering within a fixation will depend on the particular application under consideration. For high precision, a higher order filter may be used at the expense of a longer latency, or lag, between the start of a fixation and the desired filter response. Alternatively, a lower order filter may be used to allow the POG fixation to drift slightly over time to follow the natural drift of the eye. Using digital finite-impulse-response (FIR) filtering techniques allows the filter order to be easily modified. As well, clearing the filter history (memory) provides a simple means for resetting the filter when a fixation termination is detected.
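The sketch below combines these two ideas, a dispersion (position variance) test for fixation termination and a moving-average FIR output, in a minimal form; the window length and threshold are illustrative, not the values used in this chapter.

    import numpy as np

    class FixationFilter:
        """Average POG samples within a fixation; clear the filter
        history when the dispersion threshold (e.g., 1 degree of visual
        angle expressed in screen units) is exceeded."""

        def __init__(self, max_dispersion, window=16):
            self.max_dispersion = max_dispersion
            self.window = window
            self.history = []

        def update(self, pog):
            self.history.append(np.asarray(pog, dtype=float))
            self.history = self.history[-self.window:]
            pts = np.array(self.history)
            # Dispersion = (max_x - min_x) + (max_y - min_y); a large
            # value indicates a saccade, so the filter memory is reset.
            if pts.max(axis=0).sum() - pts.min(axis=0).sum() > self.max_dispersion:
                self.history = [self.history[-1]]
            # FIR (moving-average) output over the current fixation.
            return np.mean(self.history, axis=0)

A longer window gives higher precision at the cost of latency, while a shorter window lets the filtered POG follow the natural drift of the eye, matching the trade-off described above.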
3.2.3 Eye-gaze Tracking Systems

The development of non-contact eye-gaze tracking systems is an important step in improving the acceptability of eye-gaze as a general form of human machine interface. One of the recent trends in eye-gaze tracking systems has been away from systems requiring contact with the subject's face and head, and towards non-intrusive and non-restrictive systems. Contact based methods such as electro-oculography (EOG), the scleral search coil and head mounted video-oculography (VOG) are seen as less desirable due to the requirement for contact with the user's head, face or eyes. The EOG and search coil methods do benefit, however, from the ability to record the subject's eye gaze electronically, rather than optically as in the case of video-based tracking. Electronic recording can be performed easily at high data rates (1000s of Hz) using modern analog to digital integrated circuits. The sampling rate of video-based systems is limited to at most the frame rate of the imaging cameras (typically 30 Hz), and is often even lower due to the image processing techniques used and the high computational power required to process large quantities of image data in real-time.

In the late 1980s, Hutchinson et al. [16] developed a non-contact video-based system which used the P-CR vector method for computing the POG. The P-CR method greatly enhanced the usability of remote eye-gaze tracking systems by providing tolerance to minor head displacements. The system they developed was targeted to work with the severely disabled who had no other easily available means of communication. Images were recorded with a resolution of 512x480 pixels with a POG sampling rate of 30 Hz. After calibration, average accuracies for this method are typically 0.5 to 1◦ of visual angle. Over the past two decades, the P-CR vector method has been the favored means for non-contact, video-based POG estimation. However, the P-CR method still requires a relatively stable head position: the accuracy of the method degrades considerably as the head is displaced from the calibration position [6].

To allow for free head motion, Shih and Liu [17] developed a novel 3D model-based method for estimating eye-gaze. Using models of the system, camera and eye, their algorithm was designed to accurately estimate the POG regardless of head location. Their system used two RS-170 based cameras and frame grabbers to record images with a resolution of 640x280 pixels at 30 Hz. Average accuracy was shown to be better than 1◦ of visual angle. Unfortunately, their system design required the cameras to be quite close to the subjects' eyes to acquire high spatial resolution images, restricting the freedom of head motion due to the limited camera field of view.

To overcome the limitation of a narrow field of view, Ohno and Mukawa [18] developed a 3D model-based system with a camera mounted on a pan/tilt mechanism with a Narrow Angle (NA) lens, and two fixed cameras with Wide Angle (WA) lenses. The fixed cameras used stereo imaging to determine the location of the head within the scene, and directed the pan/tilt mechanism to orient the NA camera towards the eye. The WA cameras recorded images with a resolution of 320x120 pixels, while the NA camera recorded images with a resolution of 640x480 pixels, all at frame rates of 30 Hz. System accuracy was reported as better than 1.0◦ of visual angle. The pan/tilt mechanism allowed the NA camera to track the motion of the eye with a larger effective field of view; however, the speed at which the mechanism could move was not sufficient to keep up with the faster motions of the head and eye, resulting in loss of tracking and slow re-acquisition.

Beymer and Flickner [19] used high speed galvanometers for their 3D model-based system in an attempt to overcome the limitations of the slow pan/tilt systems. A pair of fixed WA cameras used stereo imaging to direct the orientation of two NA cameras by controlling the pan and tilt of rotating lightweight mirrors mounted on galvanometers. The focus of each camera was controlled with a lens mounted on a bellows and driven by another motor. The NA cameras recorded NTSC images (with a typical resolution of 640x480 pixels) at a frame rate of 30 Hz.
Due to the significant processing involved in the system, a POG sampling rate of only 10 Hz was achieved. The accuracy reported for this system was 0.6◦ of visual angle. While their system was capable of tracking the eye in the presence of natural high speed head motion, considerable calibration was required, and the overall complexity resulted in a low POG sampling rate.

The 3D model-based system by Hennessey et al. [20] was developed to minimize the physical system complexity while still allowing for fast head motion. The system was based on a single fixed camera with a high resolution sensor and no moving parts. The higher resolution sensor allowed a larger range of head motion with the eye remaining in the field of view of the camera, while still providing images with sufficient spatial resolution for the eye-tracking system to operate correctly. The system algorithms were designed to track the motion of the eye within the image and only operate on the portion of the image containing the eye. Processing only the portion of the image containing the eye allowed the POG to be computed rapidly, regardless of the overall image resolution. At the time of system development, a camera with a resolution of 1024x768 pixels was available with a maximum frame rate of 15 Hz. Using this system, accuracies of better than 1◦ of visual angle were achieved.

Fixation precision has not often been reported in evaluations of 3D model-based eye-gaze tracking systems, as the focus has tended to be on the basic system functionality and the accuracy of the novel POG algorithms. However, Yoo and Chung [21] did provide some insight into the fixation precision of their free head motion eye-gaze tracking system. Using a similar system design to Ohno and Mukawa [18], they reported an accuracy of 0.98◦ in horizontal error and 0.82◦ in vertical error when operating at 15 Hz. Precision in standard deviations was reported in millimeters, which converts to 0.84◦ of visual angle. We believe that fixation precision is an important parameter in the evaluation of the performance of eye-gaze tracking systems, and the goal of this chapter is to present methods for enhancing fixation precision.

Figure 3.1: An example of a portion of a recorded bright pupil image, illustrating the P-CR vector. In the P-CR method the vector (gx, gy) is determined from the center of the corneal reflection to the center of the pupil. A mapping is then defined to relate the P-CR vector to the POG screen coordinates (px, py).

3.3 Methods

3.3.1 Point-Of-Gaze Estimation

There are currently two main types of methods for computing the POG from remote video images: the P-CR vector method and the 3D model-based method.

P-CR Method

The simplicity of the P-CR vector method and its ability to handle minor head motions led to its widespread adoption. As the eye rotates to observe different points, the image of the reflection off the spherical corneal surface remains relatively fixed. The corneal reflection, generated by external lighting, provides a reference point for determining the relative motion of the pupil. A simple mapping is used to relate the 2D POG screen vector to the 2D image vector formed from the center of the corneal reflection to the center of the pupil, as shown in Fig. 3.1. Independent polynomial equations are determined to relate the 2D P-CR vector (gx, gy) to each of the 2D POG screen co-ordinates (px, py).
The polynomial order varies between different system designs, but is most often first order, as shown in (3.1). It has been shown that small increases in accuracy may be achieved by increasing the order of the polynomial, at the expense of a decrease in robustness to head motion and the need for an increasing number of calibration points [22].

    px = a0 + a1 gx + a2 gy + a3 gx gy
    py = b0 + b1 gx + b2 gy + b3 gx gy        (3.1)

The parameters ai and bi are determined from a calibration procedure in which the user fixates sequentially on a number of known screen locations while the P-CR vector is recorded. In the case of a first order polynomial fit, a minimum of 4 calibration points are required to solve for the 4 unknowns in each of the two equations in (3.1), typically using a least squares method.
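A minimal least-squares sketch of this calibration follows; the function names are ours and the calibration data is made up.

    import numpy as np

    def fit_pcr_mapping(g, p):
        """Least-squares fit of the first-order mapping in (3.1).
        g : (K, 2) P-CR vectors (gx, gy) at K >= 4 calibration points
        p : (K, 2) corresponding known screen coordinates (px, py)
        Returns (a, b): the 4 parameters of each polynomial."""
        gx, gy = g[:, 0], g[:, 1]
        A = np.column_stack([np.ones_like(gx), gx, gy, gx * gy])
        a, *_ = np.linalg.lstsq(A, p[:, 0], rcond=None)
        b, *_ = np.linalg.lstsq(A, p[:, 1], rcond=None)
        return a, b

    def apply_pcr_mapping(a, b, gx, gy):
        basis = np.array([1.0, gx, gy, gx * gy])
        return basis @ a, basis @ b

    # Toy usage with four hypothetical calibration samples.
    g = np.array([[0.1, 0.2], [0.8, 0.2], [0.1, 0.9], [0.8, 0.9]])
    p = np.array([[50, 40], [980, 45], [55, 730], [975, 735]], dtype=float)
    a, b = fit_pcr_mapping(g, p)
    print(apply_pcr_mapping(a, b, 0.45, 0.55))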
Figure 3.3: Illustration of the bright pupil and image differencing techniques. The bright pupil in Fig. 3.3(a) is illuminated with on-axis lighting, while the dark pupil in Fig. 3.3(b) is illuminated with off-axis lighting. The background intensity of the two images is similar, which after differencing (3.3(a) - 3.3(b)) results in a bright pupil on an almost blank background, as shown in Fig. 3.3(c).

3.3.2  Image Processing

Both the P-CR vector and 3D model-based methods for estimating the POG require features extracted from the recorded images. The P-CR method requires the location of the pupil and the location of a single corneal reflection, while the 3D method requires the pupil and at least two corneal reflections for triangulation. The locations of the pupil and corneal reflections are found by identifying the perimeter of their respective image contours. The pupil contour perimeter can be difficult to segment due to the low contrast between the pupil and the surrounding iris. The corneal reflections can be difficult to segment due to their small size, often less than 3x3 pixels. Varying levels of ambient light can compound the feature extraction difficulty.

To improve the performance of the feature extraction task, the bright pupil and image differencing techniques are used to create a high contrast image of the pupil [26], [27]. Computing a difference image from alternating bright pupil and dark pupil images removes most of the background features, ideally leaving only the high contrast pupil on a black background. An example of the bright pupil and image differencing techniques is shown in Fig. 3.3. Using a single on-axis light source generates a single corneal reflection, which is used in the P-CR POG estimation method. Using two off-axis light sources for the dark pupil image generates the two corneal reflections required for the 3D method.

While the image differencing technique aids in the identification of the pupil contour within the image, it is also susceptible to significant artifacts which may corrupt the identified contour. When the difference image is computed, the corneal reflections formed by the off-axis lighting in the dark pupil image can remove a portion of the pupil, as seen in the lower left side of Fig. 3.3(c). Also seen is the addition to the pupil contour of the corneal reflection from the on-axis lighting. The difference image is also susceptible to significant artifacts due to inter-frame motion. Inter-frame motion may distort the extracted pupil contour by misaligning the bright and dark pupil images, which will significantly impact the accuracy of the POG estimation algorithms.

Figure 3.4: An example of the results of the two stage pupil detection algorithm. In Fig. 3.4(a) the detected perimeter of the identified image difference pupil contour is shown overlying the difference image. Using the difference pupil contour as a guide, the pupil perimeter is detected in the bright pupil image as shown in Fig. 3.4(b). The gap in the pupil perimeter is a result of masking off the on-axis corneal reflection, which is subsequently compensated for by fitting an ellipse to the bright pupil contour perimeter.

To avoid the inaccuracies resulting from inter-frame motion and the image differencing, a two stage approach to pupil detection was used. The first stage of pupil extraction determines the image difference pupil as described above. The corneal reflections are then identified in both the bright and dark pupil images based on their proximity to the roughly identified difference pupil (see Fig. 3.4(a)).
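The first stage of this pipeline (differencing and rough pupil localization) could be sketched as follows. This is a simplified outline assuming OpenCV, with an illustrative threshold value; it is not the system's actual implementation (see [28] for the detailed methods).

```python
import cv2

def difference_pupil(bright, dark, threshold=40):
    """Stage 1: roughly locate the pupil in the bright-dark difference image.

    bright, dark: grayscale frames of equal size (uint8).
    The threshold value here is illustrative only.
    """
    diff = cv2.subtract(bright, dark)  # bright pupil on a near-blank background
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None  # pupil lost, e.g., during a blink
    # Take the largest remaining contour as the rough difference pupil.
    return max(contours, key=cv2.contourArea)
```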
In the second stage of pupil identification, the pupil contour is segmented in only the bright pupil image, using the previously detected difference pupil as a guide. Using only the bright pupil image avoids errors due to inter-frame motion and the accidental removal of pupil area by the subtraction of the dark pupil corneal reflections. The final step of the second stage is to mask off the portion of the pupil contour which may be due to the addition of the on-axis corneal reflection (see Fig. 3.4(b)). The resulting pupil perimeter retains its elliptical shape when compared with the initial roughly identified pupil perimeter. For a more detailed description of the methods used for pupil and corneal reflection segmentation, see [28].

Before the identified pupil and glint locations are transferred to the POG estimation algorithms, the identified contour perimeters are further refined using an ellipse fitting algorithm which is both fast (computationally efficient) and robust to noise [29]. Subpixel accuracy in the identification of the contour centers may be achieved by using the center of an ellipse fitted to the contour perimeter [30]. As well, fitting an ellipse to the available pupil perimeter points compensates for the loss of data when a gap appears as a result of the masking operation to remove the corneal reflection from the on-axis lighting.

3.3.3  POG Sampling Rate

The POG sampling rate in video-based eye-gaze tracking systems is at most equal to the frame rate of the camera, although it is often less due to image processing requirements and techniques such as image differencing. In order to achieve high speed eye-gaze tracking, the POG sampling rate must be maximized.

Software Region-Of-Interest

Image processing algorithms can be computationally expensive due to the large quantity of information to process. To greatly reduce the processing load for our system, a software based Region-Of-Interest (ROI) was employed to constrain the processing to only the image area of interest. In the design of our system, rather than using mechanical tracking, the camera field of view encompasses a large area which allows the eye to move around within the scene. Accordingly, only a small portion of the overall scene contains information of interest, as shown in Fig. 3.5(a). The location of the ROI is continuously updated to track the location of the eye, which allows for head motion within the field of view of the camera. Initially, the first captured images are processed in their entirety to identify the location of the pupil within the overall scene. The ROI is then centered on the eye as each frame is processed and the center of the pupil identified. In this fashion only a small portion of the image will normally be processed. In the event that the pupil is lost, due to blinking or to rapid head or eye motion which relocates the eye outside of the ROI between image frames, the entire image is reprocessed until the pupil location is re-identified or, in the case of a blink, the eye reopens.

Hardware Region-Of-Interest

The data reduction principle behind the software ROI may also be applied to reducing the data transmitted from the camera to the computer. Reducing the transmission of information per frame allows for an increase in the overall frame rate and consequently the maximum achievable POG sampling rate.
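The software ROI logic described above amounts to a simple tracking loop: search the full frame until the pupil is found, then process only a window around the last known pupil center, and fall back to the full frame whenever the pupil is lost. A minimal sketch follows, assuming a caller-supplied find_pupil routine and the illustrative 110x110 pixel window of Fig. 3.5(a); the structure, not the exact code, is the point.

```python
def track_with_roi(frames, find_pupil, roi_size=110):
    """Yield the pupil center for each frame, processing only a software ROI.

    frames: iterable of grayscale frames (NumPy arrays)
    find_pupil: callable returning the pupil center (x, y) in a region, or None
    """
    center = None
    for frame in frames:
        if center is None:
            region, offset = frame, (0, 0)  # full-frame search
        else:
            h, w = frame.shape[:2]
            half = roi_size // 2
            x0 = min(max(center[0] - half, 0), w - roi_size)
            y0 = min(max(center[1] - half, 0), h - roi_size)
            region = frame[y0:y0 + roi_size, x0:x0 + roi_size]
            offset = (x0, y0)
        hit = find_pupil(region)
        if hit is None:
            center = None  # blink or fast motion: reprocess the full image
        else:
            center = (hit[0] + offset[0], hit[1] + offset[1])
        yield center
```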
The Firewire2 (IEEE-1394b) Digital Camera (DCAM) specification for data transmission defines the operation of hardware based ROIs (using Format 7), although some variation in behavior may be found between different camera manufacturers. Using commands in the Firewire2 protocol, the camera can be configured to apply a hardware ROI to an image before the imaging sensor is exposed and read. The frame rate for the camera used by our system (described in Section 3.3.4) only increased when image rows were skipped; no frame rate improvement was achieved by skipping image columns. Using the software ROI in conjunction with the hardware ROI allowed the flexibility to maximize the frame rate while minimizing the required processing. Similar to the software ROI, the location of the hardware ROI was re-centered on the pupil in each image frame to track the motion of the eye. Unfortunately, changing the location of the hardware ROI in real-time aborted the exposure of the current image, resulting in an underexposed image for one frame. To minimize the number of hardware ROI location changes, the size of the hardware ROI was chosen to be the full width of the original image and slightly larger than the height of the cornea, while the size of the software ROI was set to the width of the cornea and slightly smaller than the height of the hardware ROI, as shown in Fig. 3.5(b). The software ROI then tracks all horizontal motion and most small vertical motions without requiring a change in the hardware ROI location. The hardware ROI is then only repositioned for larger vertical displacements in the position of the eye.

Figure 3.5: Regions-Of-Interest are used to reduce the quantity of image information to process as well as increase the camera frame rate. In Fig. 3.5(a) only the software ROI is applied to the original full sized bright pupil image (640x480 pixels). Only the portion of the image within the rectangular box (110x110 pixels) surrounding the eye will be processed. In Fig. 3.5(b) the hardware ROI (640x120 pixels) has been applied in addition to the software ROI.

Image Sequencing

Recording alternating bright and dark pupil images for the image differencing technique aids in the detection of the pupil within the overall scene; however, it also reduces the effective POG sampling rate. When a 1:1 ratio of alternating bright and dark pupil images is recorded, the P-CR method can only generate a unique POG (Pi) at half the camera frame rate, as shown in Table 3.1, since all the information required to compute the POG is contained within the bright pupil image. Recall that for the P-CR POG estimation method the image features required are the pupil and a single corneal reflection, which are both found in the bright pupil image. The 3D method uses image information from both the bright and dark pupil images and as such can compute a unique POG (Pi) at the camera frame rate by using features from each current image (fi+1) along with the image previously recorded (fi). In the HS P-CR method reported here, the system operation was enhanced by increasing the sampling rate of unique POG estimates through increasing the ratio of bright pupil images with respect to dark pupil images. As the HS P-CR method only requires the dark pupil image to roughly identify the location of the pupil in the scene, the ratio of bright to dark pupil images may be increased until inter-frame motion results in loss of tracking due to misaligned image differencing.
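The relationship between the bright-to-dark ratio and the unique POG rate is simple to state in code. The following sketch, with hypothetical function names, reproduces the schedules of Table 3.1 and the rates later reported in Table 3.2.

```python
def image_sequence(n_frames, bright_per_dark):
    """Generate the D/B frame schedule for a given bright:dark ratio.

    For example, bright_per_dark=3 gives D B B B D B B B ... (Table 3.1).
    """
    period = bright_per_dark + 1
    return ['D' if i % period == 0 else 'B' for i in range(n_frames)]

def hs_pcr_pog_rate(frame_rate, bright_per_dark):
    """Unique POG samples per second for HS P-CR: one per bright frame."""
    return frame_rate * bright_per_dark / (bright_per_dark + 1.0)

print(image_sequence(8, 3))       # ['D', 'B', 'B', 'B', 'D', 'B', 'B', 'B']
print(hs_pcr_pog_rate(200, 9))    # 180.0 Hz at 200 fps with a 9:1 ratio
print(hs_pcr_pog_rate(407, 19))   # ~386.6 Hz at 407 fps with a 19:1 ratio
```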
To illustrate the improvement in POG sampling rate, an example of a 3:1 bright to dark pupil ratio is also shown in Table 3.1, in which the sampling rate has increased from 50% of the camera frame rate to 75%. Increasing the rate of unique POG estimates for the HS P-CR method by increasing the ratio of bright to dark pupil images is preferable to maintaining a 1:1 ratio and using a corneal reflection from the dark pupil image, as is done in the 3D method. In the HS P-CR method, using image information for POG estimation from only a single bright pupil image (see Table 3.1) avoids the errors in POG estimation that may result from misaligned bright and dark pupil image features due to inter-frame motion. Unfortunately, a similar technique cannot be used for the 3D method to avoid inter-frame motion while increasing the POG update rate. The 3D method would require two additional corneal reflections in the bright pupil image to compute the POG with information contained solely in a single image. The extra reflections would have to be masked off of the pupil contour as described in Section 3.3.2, potentially removing large portions of the pupil contour and consequently decreasing the accuracy of the pupil feature identification. The corneal reflection from the on-axis lighting in the bright pupil image cannot be used with the 3D method, as the on-axis light source is located coaxially with the focal point of the camera, which results in a singularity in the 3D model algorithm; see Equation (4) in [20].

Table 3.1: POG sampling sequences for HS P-CR and 3D POG estimation methods with 1:1 and 3:1 bright to dark pupil ratios.

  1:1 ratio
    Frame Sequence:  f1  f2  f3  f4  f5  f6  f7  f8  ...
    Image Type:      D   B   D   B   D   B   D   B   ...
    P-CR POG:        -   P1  -   P2  -   P3  -   P4  ...
    3D POG:          -   P1  P2  P3  P4  P5  P6  P7  ...

  3:1 ratio
    Frame Sequence:  f1  f2  f3  f4  f5  f6  f7  f8  ...
    Image Type:      D   B   B   B   D   B   B   B   ...
    HS P-CR POG:     -   P1  P2  P3  -   P4  P5  P6  ...

  Dark pupil image (D), Bright pupil image (B), No unique POG sample (-)

Figure 3.6: Physical system showing the camera located beneath the monitor, the on-axis lighting (ring of LEDs surrounding the lens), the two off-axis point light sources located to the right of the monitor and the monitor upon which the POG is estimated.

3.3.4  Hardware

The Dragonfly Express from Point Grey Research was the digital camera used for the system described in this chapter. The camera is capable of recording full sized images of 640x480 pixels at frame rates up to 200 Hz. To increase the frame rate further, a hardware ROI was used to reduce the size of the recorded images. The camera uses the Firewire2 (IEEE-1394b) standard to transmit images from the camera to the computer. An electronic strobe signal generated by the camera at the start of each image frame was monitored by a custom microcontroller to synchronize the on-axis and off-axis lighting with the image exposure. The microcontroller also controlled the ratio of bright to dark pupil images as directed by the computer through the serial port. The system evaluation was performed on a Pentium IV 3 GHz processor with 2 GB of RAM. A flat screen LCD monitor with a width of 35.8 cm and a height of 29.0 cm was set to a resolution of 1280x1024 pixels and located at a distance of approximately 75 cm from the user's eye. The physical system is shown in Fig. 3.6.

3.4  Experimental Design and Results

The techniques to perform high-speed, non-contact eye-gaze tracking described above were evaluated with the HS P-CR and 3D model-based methods for estimating the POG.
Both POG methods were tested at three different camera frame rates to determine the effect of sampling rate on fixation precision. Varying levels of digital filtering were applied to the recorded data for each POG method at each frame rate to show the resulting improvements in precision.

Figure 3.7: An example of the fixation task in which the user observed each of 9 points on a 3x3 grid. In this example the POG samples were recorded with the HS P-CR vector method and a camera frame rate of 407 Hz. The original POG data is shown along with the results of filtering with a 500 ms moving window average. The POG screen coordinates have been converted from units of pixels to centimeters in this figure.

The sequences of POG estimates were collected on a total of four different subjects while performing a simple task, with a data set recorded for each combination of the two POG methods and three camera frame rates, resulting in a total of 6 data sets per subject and 24 data sets overall. The camera frame rates tested were 30 fps, 200 fps and 407 fps, which allows for comparison between the equivalent of a 30 fps NTSC system, the 200 fps achievable when recording full sized images without a hardware ROI (640x480 pixels), and the 407 fps achievable with the hardware ROI enabled (640x120 pixels).

The experimental procedure was comprised of a calibration phase, a familiarization phase and then the performance of a simple task during which the POG screen coordinates were recorded. The calibration consisted of having the subject observe the four corners of the screen for approximately one second each while the per-user parameters were estimated. After calibration, a short familiarization period was allowed in which the calibration was evaluated, with the subject verifying that the computed POG across the screen was in fact the same as (or at least very close to) their real POG. The subject was then asked to fixate on nine sequential points on a 3x3 grid which were displayed across the screen. Throughout the fixation task the screen coordinates of the POG were continuously recorded, along with a flag indicating the fixation status at each grid point. The fixation status flag was set to indicate the beginning of a fixation when the relative stability of a fixation was detected, and the flag was cleared when the larger motion of a saccade was detected, as per the position variance algorithm described by Jacob [1]. At least two seconds of fixation data was acquired before moving to the next point. An example of the fixation data collected on the 3x3 grid for a single subject is shown in Fig. 3.7, while a subset of 10 POG estimates from a single fixation point is shown in Fig. 3.8. As discussed previously, the POG sampling rate for the HS P-CR POG estimation method was enhanced by increasing the ratio of bright to dark pupil images for the 200 fps and 407 fps camera frame rates.
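A position variance fixation detector of the kind attributed to Jacob [1] can be sketched as follows. The window length and variance threshold here are illustrative placeholders, not the values used in these experiments, and the function is a simplified variant of the cited algorithm.

```python
import numpy as np

def fixation_flags(pog, window=6, var_threshold=0.5):
    """Flag samples as fixation when the recent POG position variance is low.

    pog: (N, 2) array of POG screen coordinates.
    Returns a boolean array; True marks samples within a detected fixation.
    """
    flags = np.zeros(len(pog), dtype=bool)
    for i in range(window, len(pog)):
        recent = pog[i - window:i]
        variance = recent.var(axis=0).sum()   # combined x/y position variance
        flags[i] = variance < var_threshold   # stable -> within a fixation
    return flags
```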
Figure 3.8: A labeled sequence of 10 unfiltered POG estimates for the 3D POG estimation method is shown from a single fixation marker. Sampling sequences at two camera frame rates are illustrated: 30 Hz in Fig. 3.8(a), in which the 10 point sequence corresponds to a time interval of 333 ms, and 407 Hz in Fig. 3.8(b), which corresponds to a time interval of 25 ms.

At 30 fps the ratio had to remain at 1:1 bright to dark pupil images, as higher ratios resulted in frequent loss of tracking due to inter-frame motion and misaligned image difference pupil contours. At the higher camera frame rates, higher ratios were possible while still maintaining tracking, as the magnitude of the motion between each image frame was less. Since loss of tracking rarely occurred at the 1:1 ratio and 30 fps rate, a similar period between dark pupil images was used for the higher camera frame rates. The achieved HS P-CR update rates for each camera frame rate, along with the corresponding bright to dark pupil image ratios, are listed in Table 3.2.

Table 3.2: Image sequence parameters for the HS P-CR POG method.

  Camera Frame Rate (fps) | Bright to Dark Pupil Ratio | Dark Pupil Period (ms) | POG Sampling Rate (Hz)
  30                      | 1:1                        | 66                     | 15
  200                     | 9:1                        | 50                     | 180
  407                     | 19:1                       | 49                     | 386

Low pass filtering of the recorded sequence of POG screen coordinates was performed offline for each subject and each system configuration. Filtering the POG data offline allowed for comparison of various levels of filtering on a consistent set of data. The recorded X and Y POG coordinates were filtered with a rectangular window FIR filter (moving average) with filter lengths corresponding to latencies (window lengths) of 30 ms, 100 ms and 500 ms. The filter order for each system configuration was determined from the POG sampling rate and the desired latency, as listed in Table 3.3. The three filter lengths were chosen to contrast the difference in fixation precision with latencies up to the duration of a typical fixation.

Table 3.3: Filter order for each sampling rate and filter length for the HS P-CR and 3D POG estimation methods.

                       Filter Length
  Sampling Rate     30 ms   100 ms   500 ms
  HS P-CR Method
  15 Hz             1       1        7
  180 Hz            5       18       90
  386 Hz            11      39       193
  3D Method
  30 Hz             1       3        15
  200 Hz            6       20       100
  407 Hz            12      41       203

After filtering the recorded X and Y POG coordinates with each of the FIR filters, the fixation precision was determined at each of the 9 fixation points. The standard deviation was computed on the last 500 ms of the two seconds of data recorded at each fixation point, to avoid combining data points from adjacent fixations when high filter orders are used. The reported fixation precision for each system configuration is the average of the 9 standard deviations for each of the 4 subjects and is reported in degrees of visual angle, as shown in Table 3.4. To convert from units of screen pixels to degrees of visual angle, the estimated POG and fixation marker reference point are first converted from pixels to centimeters with the scaling factors of 35.8 cm / 1280 pixels for the X coordinate and 29 cm / 1024 pixels for the Y coordinate. The POG error is then computed as the difference between the estimated POG (px, py) and the fixation marker reference point (rx, ry). It is assumed that in the worst case, the eye is located along a vector normal to the screen that extends from the midpoint of the POG error vector. The equation to convert from pixels to degrees of visual angle (θ) is then shown in (3.3), with the assumption that the average distance from eye to screen was 75 cm.
θ = 2 · tan⁻¹( √((px − rx)² + (py − ry)²) / (2 · 75) )    (3.3)

Table 3.4: Fixation precision for each system configuration.

                          Filter Length
  Sampling Rate    None    30 ms   100 ms   500 ms
  HS P-CR Method
  15 Hz            0.205   0.205   0.205    0.065
  180 Hz           0.258   0.173   0.112    0.051
  386 Hz           0.199   0.115   0.071    0.035
  3D Method
  30 Hz            0.550   0.550   0.306    0.108
  200 Hz           0.390   0.288   0.200    0.074
  407 Hz           0.347   0.230   0.155    0.050

  Note: All units in degrees of visual angle.

3.5  Discussion

Using the techniques described above, operation of the remote eye-gaze tracking system at high sampling rates was achieved. The higher sampling rates more accurately record the faster dynamics of the eye and reduce signal aliasing. Using the Nyquist criterion, the sampling rate should be at least twice the highest frequency of the micro-saccades and tremors (up to 150 Hz [12]) observed during fixations. To illustrate the effect of aliasing, a labeled sequence of POG estimates is shown with a low sampling rate (30 Hz) in Fig. 3.8(a) and at a much higher sampling rate (407 Hz) in Fig. 3.8(b). For the lower sampling rate the details of the trajectory of the POG are missing, as illustrated by the erratic and large displacements between subsequent POG estimates. At the higher sampling rate the trajectory of POG estimates can be seen more clearly, as the displacement between estimates is smaller.

Processing the incoming images at 200 fps was achieved with the use of only the software ROI. With the addition of the hardware ROI, the camera frame rate increased to 407 fps. Using the 3D model-based POG estimation algorithm, the sampling rate was equal to the camera frame rate: 30 Hz at 30 fps, 200 Hz at 200 fps and 407 Hz at 407 fps. When using the HS P-CR method for estimating the POG, an update rate of only 15 Hz was achieved when operating at 30 fps due to the requirements of the image differencing technique. With the reduced inter-frame motion at higher frame rates it was possible to enhance the P-CR method by increasing the ratio of bright to dark pupil images without losing lock on the eye. Increasing the bright to dark pupil ratio to 9:1 for the 200 fps frame rate increased the POG sampling rate to 180 Hz, and increasing the ratio to 19:1 at 407 fps increased the sampling rate to 386 Hz. The POG update rates achieved for the HS P-CR and 3D methods are a significant increase over the rates achieved by the similar eye-gaze tracking systems discussed in the background review of this chapter.

The fixation precision reported for the 3D model-based POG method at the lowest sampling rate (30 Hz) was 0.55°. This result is of a similar magnitude to the precision reported by Yoo and Chung [21] at 0.84° for their non-contact, free head, eye-gaze tracking system, which operated at a rate of 15 Hz. The benefit of our system is the ability to increase the POG sampling rate, which then allows digital filtering to further improve fixation precision while still maintaining an acceptable latency. Using digital low pass filtering resulted in an improvement in fixation precision in all system configurations, as shown in Table 3.4. In the experiments performed, the best fixation precision was achieved with the longest filter (500 ms), which resulted in a standard deviation of 0.035° or 1.6 screen pixels for the HS P-CR method, and 0.050° or 2.3 screen pixels for the 3D model-based method. The relationship between filter length and fixation precision appears to be exponential, as shown in Fig. 3.9.
As the filter length increases, there is a diminishing return in the trade-off between achieved precision and POG latency. The fixation precision of the HS P-CR method was compared with the 3D model-based method at each of the camera frame rates using three one-way ANOVAs. It was found that the HS P-CR method was statistically more precise than the 3D method at 30 fps (F(1, 70) = 87.168, p < 0.001), 200 fps (F(1, 70) = 17.939, p < 0.001) and 407 fps (F(1, 70) = 38.273, p < 0.001). This result is possibly due to motion of the eye between the image frames used to compute the POG in the 3D method.

Figure 3.9: Fixation precision versus filter length is shown averaged across all four subjects, indicating an exponential relationship. The POG screen coordinates were recorded with the system operating at 407 fps for both the HS P-CR and 3D POG methods.

It is possible that the natural eye motions between image frames result in misaligned bright and dark pupil image features, increasing the variability of the estimated POG and consequently decreasing the fixation precision. Supporting this theory is the improvement in fixation precision for the 3D method when the camera frame rate increases, decreasing the time between image frames and consequently reducing the degree of potential inter-frame motion.

A comparison of accuracy between the two methods was not performed, as the focus in this chapter is on fixation precision. A more detailed investigation of system accuracy is presented in [20]. While not the focus of this chapter, system accuracy was confirmed to be comparable to many contemporary remote eye-gaze tracking systems [6]. Averaged over all subjects and all operating conditions, the HS P-CR method resulted in an accuracy of 0.72° while the 3D method accuracy was 1.0° of visual angle. The accuracy of the HS P-CR method appears slightly better in these experiments; however, the measurements were only recorded with the head located near the calibration position and did not explicitly exercise the free head capabilities of the 3D model-based method.
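The offline filtering and precision computation described in Section 3.4 can be summarized in a few lines. This is a minimal sketch assuming NumPy; combining the X and Y standard deviations into a single radial value is an illustrative simplification of the per-axis treatment used in this chapter.

```python
import numpy as np

def moving_average(x, order):
    """Rectangular-window FIR (moving average) filter of the given order."""
    kernel = np.ones(order) / order
    return np.convolve(x, kernel, mode='valid')

def fixation_precision_deg(px_cm, py_cm, distance_cm=75.0):
    """Precision of filtered POG samples within one fixation.

    px_cm, py_cm: arrays of POG coordinates in centimeters.
    Returns the standard deviation converted to degrees of visual angle,
    using the same half-angle geometry as (3.3).
    """
    sd_cm = np.hypot(np.std(px_cm), np.std(py_cm))
    return np.degrees(2.0 * np.arctan(sd_cm / (2.0 * distance_cm)))
```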
3.6  Conclusions

The precision of eye-gaze tracking systems within fixations is a key factor in determining the usability of eye-gaze tracking for human computer interaction. In this chapter the start and end of fixations were detected using position variance thresholding. The precision of a fixation was then computed as the standard deviation of the POG estimates temporally located between the beginning and end of the fixation. Techniques were presented which enable video-based, non-contact, eye-gaze tracking systems to operate at high POG sampling rates, more adequately recording the dynamics of high speed eye movements. A high speed method for P-CR POG estimation was also presented, in which the sampling rate was increased by modifying the ratio of bright pupil to dark pupil images. Increasing the frequency of bright pupil images increased the frequency of the images containing the features required to compute the POG. The high speed techniques were evaluated on both the HS P-CR and 3D model-based POG methods. Within the fixations defined by the position variance thresholding, fixation precision was shown to improve through the application of low pass digital filters. Higher POG sampling rates allowed for a trade-off between fixation precision and real-time POG latency, depending on the intended user application. An exponential relationship was observed between filter order and fixation precision, indicating a diminishing incremental improvement with increasing filter orders.

A comparison between the HS P-CR POG estimation method and the 3D model-based method showed that the fixation precision for the HS P-CR method was significantly better than that of the 3D method at each of the three camera frame rates tested. One possible explanation for this result is that the HS P-CR POG estimation method avoided the misalignment of image feature data resulting from inter-frame motion by using information from only a single image to compute the POG. Although the 3D method is shown to be less precise, it does allow a wider range of head motion [20] than the HS P-CR method [6]. In this study, however, subjects were asked to maintain a comfortable, relatively stationary head position. Future work will focus on the evaluation of the techniques presented in this chapter on a larger sample of subjects. Integration of these methods with an eye-gaze tracking system for real-world use is also desirable, to increase the realism of the eye-gaze tracking experiments.

References

[1] R. Jacob, Virtual Environments and Advanced Interface Design. New York, NY, USA: Oxford University Press, 1995, ch. Eye tracking in advanced interface design, pp. 258-288.
[2] R. Jacob and K. Karn, The Mind's Eye: Cognitive and Applied Aspects of Eye Movement Research. Amsterdam: Elsevier Science, 2003, ch. Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the Promises (Section Commentary), pp. 573-605.
[3] S. Zhai, C. Morimoto, and S. Ihde, "Manual and gaze input cascaded (MAGIC) pointing," in CHI '99: Proceedings of the SIGCHI conference on Human factors in computing systems. New York, NY, USA: ACM Press, 1999, pp. 246-253.
[4] K. S. Karn, S. Ellis, and C. Juliano, "The hunt for usability: tracking eye movements," in CHI '99 extended abstracts on Human factors in computing systems. New York, NY, USA: ACM Press, 1999, pp. 173-173.
[5] H. Collewijn, Vision Research: A Practical Guide to Laboratory Methods. Oxford University Press, 1999, ch. Eye movement recording, pp. 245-285.
[6] C. H. Morimoto and M. R. M. Mimica, "Eye gaze tracking techniques for interactive applications," Comput. Vis. Image Underst., vol. 98, no. 1, pp. 4-24, 2005.
[7] J. P. Hansen, D. W. Hansen, and A. S. Johansen, Universal Access in HCI. Lawrence Erlbaum Associates, 2001, ch. Bringing gaze-based interaction back to basics, pp. 325-328.
[8] D. J. Ward and D. J. C. MacKay, "Fast hands-free writing by gaze direction," Nature, vol. 418, no. 6900, p. 838, 2002.
[9] M. Ashmore, A. T. Duchowski, and G. Shoemaker, "Efficient eye pointing with a fisheye lens," in Proceedings of the 2005 conference on Graphics interface. School of Computer Science, University of Waterloo, Waterloo, Ontario, Canada: Canadian Human-Computer Communications Society, 2005, pp. 203-210.
[10] D. Miniotas and O. Spakov, "An algorithm to counteract eye jitter in gaze-controlled interfaces," Information Technology and Control, vol. 30, no. 1, Tampere, Finland, 2004, pp. 65-68.
[11] K. Rayner, "Eye movements in reading and information processing: 20 years of research," Psychol Bull, vol. 124, no. 3, pp. 372-422, Nov 1998.
[12] A. Spauschus, J. Marsden, D. Halliday, J. Rosenberg, and P.
Brown, "The origin of ocular microtremor in man," Experimental Brain Research, vol. 126, no. 4, pp. 556-562, June 1999.
[13] U. Tulunay-Keesey, "Fading of stabilized retinal images," J Opt Soc Am, vol. 72, no. 4, pp. 440-447, Apr 1982.
[14] M. A. Just and P. A. Carpenter, "A theory of reading: from eye fixations to comprehension," Psychol Rev, vol. 87, no. 4, pp. 329-354, Jul 1980.
[15] A. T. Duchowski, Eye Tracking Methodology: Theory and Practice. Springer-Verlag, 2003.
[16] T. Hutchinson, J. White, W. Martin, K. Reichert, and L. Frey, "Human-computer interaction using eye-gaze input," IEEE Transactions on Systems, Man and Cybernetics, vol. 19, no. 6, pp. 1527-1534, 1989.
[17] S.-W. Shih and J. Liu, "A novel approach to 3-D gaze tracking using stereo cameras," IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 34, no. 1, pp. 234-245, Feb. 2004.
[18] T. Ohno and N. Mukawa, "A free-head, simple calibration, gaze tracking system that enables gaze-based interaction," in Proceedings of the 2004 symposium on Eye tracking research & applications. New York, NY, USA: ACM Press, 2004, pp. 115-122.
[19] D. Beymer and M. Flickner, "Eye gaze tracking using an active stereo head," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, 18-20 June 2003, pp. II-451-II-458.
[20] C. Hennessey, B. Noureddin, and P. Lawrence, "A single camera eye-gaze tracking system with free head motion," in Proceedings of the 2006 symposium on Eye tracking research & applications. New York, NY, USA: ACM Press, 2006, pp. 87-94.
[21] D. H. Yoo and M. J. Chung, "A novel non-intrusive eye gaze estimation using cross-ratio under large head motion," Comput. Vis. Image Underst., vol. 98, no. 1, pp. 25-51, 2005.
[22] Z. Cherif, A. Nait-Ali, J. Motsch, and M. Krebs, "An adaptive calibration of an infrared light device used for gaze tracking," in Proceedings of the 19th IEEE Instrumentation and Measurement Technology Conference, vol. 2, 21-23 May 2002, pp. 1029-1033.
[23] R. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE Journal of Robotics and Automation, vol. 3, no. 4, pp. 323-344, Aug 1987.
[24] Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, Nov. 2000.
[25] D. A. Goss and R. W. West, Introduction to the Optics of the Eye. Butterworth Heinemann, 2001.
[26] Y. Ebisawa and S. Satoh, "Effectiveness of pupil area detection technique using two light sources and image difference method," in Proceedings of the 15th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Oct 28-31, 1993, pp. 1268-1269.
[27] C. H. Morimoto, D. Koons, A. Amir, and M. Flickner, "Pupil detection and tracking using multiple light sources," Image and Vision Computing, vol. 18, no. 4, pp. 331-335, 2000.
[28] C. Hennessey, "Eye-gaze tracking with free head motion," Master's thesis, University of British Columbia, August 2005.
[29] A. Fitzgibbon, M. Pilu, and R. Fisher, "Direct least square fitting of ellipses," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 5, pp. 476-480, May 1999.
[30] J. Zhu and J. Yang, "Subpixel eye gaze tracking," in Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition. Washington, DC, USA: IEEE Computer Society, 2002, p. 131.
Chapter 4

An Experimental Study to Investigate the Effects of an Electromagnetic Sensor for Motion Tracking on EEG 3

3 A version of this chapter has been published: Bashashati, A., Noureddin, B., Ward, R.K., Lawrence, P.D. and Birch, G.E., "An experimental study to investigate the effects of an electromagnetic sensor for motion tracking on EEG," IEEE Transactions on Biomedical Engineering, vol. 53, no. 3, pp. 559-563, 2006.

4.1  Introduction

There is evidence that when a person moves a part of his/her body, specific patterns are introduced in his/her ongoing electroencephalogram (EEG) signal [1]. Understanding the resulting changes in the EEG signal could lead to a better understanding of the underlying brain activity. Investigating such changes requires the simultaneous capture of these movements and the recording of the EEG signal. Some researchers use custom-made devices such as a micro-switch (e.g., see [2]) or record the electromyogram (EMG) signal (e.g., see [3]) to detect the onset of the movement. The disadvantages of devices such as micro-switches are that they must be customized for each type of movement and that they do not provide any knowledge of the dynamics of the movement, such as the speed and movement pattern of the moving part. Recording the EMG signal also has the latter disadvantage. By using an appropriate motion tracking method, however, a complete pattern of the movement can be obtained. Subsequently, this information can be used to gain a better understanding of the relationship between the movement and the observed EEG patterns. This understanding can also be used to guide subjects to improve the quality of their executed movements during the experiment.

The need for such a system arises in our present research in analyzing EEG signals, which aims at building successful and efficient brain computer interfaces [4]. To analyze the EEG signals related to a specific movement, we need information about the movement, mainly its onset, duration, trajectory and speed. To be able to do this, we have to use a motion tracking system. As this device will be placed in our experimental environment, we must ensure that it does not introduce any artifacts in the EEG signal. Three other criteria for selecting the appropriate tracking device are accuracy, cost and versatility (versatility in the sense that the same device can be used to track the movement of different parts of the body).

Currently, there are several commonly used technologies for tracking motion. These technologies are based on mechanical, electromagnetic, acoustic, optical, and inertial/magnetic principles [5]. The main disadvantage of mechanical tracking systems is that the user's range and type of movement are constrained [5]. Optical and acoustic systems operate within specific ranges that are limited by their line-of-sight observations. Electromagnetic tracking systems, on the other hand, have the versatility advantage and are not limited by line-of-sight observation. Such a system uses a "source" element (a transmitter) that radiates a magnetic field. A small sensor (receiver) that is placed on the body's moving part reports its position with respect to the source [5]. The workspace of electromagnetic tracking systems is limited to a 6 m distance between the radiating source and the receiver.
This limitation does not affect our experiments, since the space in which movement is captured in our EEG recording environment is limited; in fact, the distance between the source and the sensors is less than 1 m. Inertial/magnetic sensors do not need an electromagnetic radiating source and they do not have space limitations. These sensors, however, are large and expensive. Thus, a less expensive and smaller electromagnetic tracking technology is more suitable for our purposes.

Previous work (e.g., see [6]) has demonstrated the application of electromagnetic sensors for locating electrodes before EEG recording. We were unable, however, to find previously reported simultaneous electromagnetic motion tracking and EEG recording. Since nonlinear effects in tissue or wiring may cause interference in the recorded EEG, we felt it prudent to examine the possibility of artifacts introduced by the use of electromagnetic sensors. To this end, we evaluate the Polhemus FASTRAK system, as this motion tracking device is readily available to us, relatively inexpensive and small in size. The goal of this study is to investigate the effects of using this motion tracking system on EEG signals. Specifically, we wish to test two hypotheses: 1) the use of this system does not introduce any artifact with consistent frequency components to the power spectrum of the collected EEG signal in the frequency range of interest, and 2) if 1) does not hold, then any such artifacts can be removed by some practical means such as linear filtering.

4.2  Experimental System

We wish to study whether the use of the electromagnetic motion tracking system affects amplifier calibration and the frequency content of the recorded EEG signal.

4.2.1  Electromagnetic Motion Tracking Sensor

We use the Polhemus FASTRAK electromagnetic system for motion tracking. The Polhemus FASTRAK tracking system uses electromagnetic fields to determine the position and orientation of a small (2.83 cm width, 2.29 cm length, 1.52 cm height and 17 g weight) receiver (sensor) as it moves through space, and provides dynamic, real-time measurements of its position (X, Y, Z Cartesian coordinates) and its orientation (azimuth, elevation, and roll). The technology is based on generating near-field, low-frequency magnetic fields from a single assembly of three concentric, stationary antenna coils (transmitters), oriented perpendicular to one another, and detecting the fields with a single assembly of three concentric, mutually perpendicular remote sensing antennas (coils) called a receiver. The receiver is a completely passive device. Each loop of the transmitter antenna is in turn excited with a driving signal identical in frequency and phase. The field creates currents (in the order of milliamps) in the passive coils (receiver), proportional to the strength of the current in the transmitter, the distance between the transmitter and receiver (sensor), and the orientation of the receiver. Sequentially exciting the three coils of the transmitter results in an output at the receiver of a set of three linearly independent vector fields. The three output (receiver) vectors contain sufficient information to determine the position and orientation of the receiver relative to the transmitter. Essentially, nine measurements are available to solve for the six unknowns of x, y, z for position and azimuth (yaw), elevation (pitch), and roll for orientation [6].
The manufacturer's specified accuracy of the system is 0.08 cm RMS for the X, Y, or Z position and 0.15° RMS for the receiver orientation [6]. It operates at one of 8, 10, 12 and 14 kHz carrier frequencies, with 12 kHz being the standard frequency; the other frequencies are used when multiple units are run in the same area. The sensors have virtually no latency, and the digital signal processing technology has a 4 ms latency updated at 120 Hz [7]. The standard operational range is 1.2-1.8 m, which is suitable for our application.

4.2.2  Experimental Paradigm

The data used in this evaluation is collected from subjects positioned 150 cm in front of a computer monitor. The EEG signals are recorded from 15 mono-polar electrodes referenced to ear mastoids according to the International 10-20 System at C1, C2, C3, C4, Cz, FC1, FC2, FC3, FC4, FCz, F1, F2, F3, F4 and Fz. As ocular activity may introduce artifacts in the EEG signals, the electro-oculogram (EOG) is measured as the potential difference between two electrodes, placed at the corner of and below the right eye. Ocular artifacts are considered present when the difference between the EOG electrodes exceeds an operator-defined threshold for each subject. Only EEG signals that are not contaminated by ocular artifacts are used for further processing. All EEG and EOG signals were amplified with Grass Model 8-18C EEG amplifiers and sampled, at 128 Hz, by a PC equipped with a 12-bit analog-to-digital converter embedded on a Data Translation 2801A data acquisition board. The electromagnetic receiver (sensor) is mounted on the subject's middle finger, and the source (transmitter) is placed beside the chair in which the subjects sit.

In this study, two types of experiments are carried out. The first type investigates the effects of using the motion tracking system while calibrating the amplifiers. To calibrate the amplifiers, a 16 Hz sinusoidal signal is applied to the system. In our experiments, the amplified and digitized calibration signal is recorded in 1-min sessions under three different conditions: 1) transmitter OFF; 2) transmitter ON but the receiver has no motion; 3) transmitter ON while the receiver is in motion.

In the second type of experiments, the electrodes are placed on the scalp and the effects of using the sensors on the recorded EEG signals from subjects are investigated. Two male right-handed subjects (27 and 33 years old) participate in this study. While the receiver (sensor) is mounted on the subject's middle finger, EEG data is collected from the subjects under the following situations:

Task 1) Performing no specific mental task with eyes closed.
Task 2) Looking at a sea-sky picture on the monitor.
Task 3) Counting the number of faces hidden in a picture on the monitor.
Task 4) Performing a guided movement task. In this task, at random intervals (mean 6 s), a small box image moving with constant speed is displayed on a monitor. The subject is asked to activate a mechanical switch by extending his hand when this box reaches a marked target position.

Each subject participates in one 90-min session. EEG data during the three mental tasks (tasks 1-3) for two 1-min periods and during the guided movement task (Task 4) for 80 trials are recorded. Each such experiment is repeated twice, once with the electromagnetic system ON and once when it is OFF. Note that the time between the experiments with the tracking system ON and OFF was kept short to reduce the probability of a significant change in brain activity.
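The EOG-threshold screening described above amounts to simple epoch rejection. A minimal sketch follows, assuming NumPy; the threshold is the per-subject, operator-defined value, and the function names are illustrative only.

```python
import numpy as np

def ocular_artifact_mask(eog, threshold):
    """Flag samples where the EOG difference signal exceeds the
    operator-defined threshold, marking ocular artifacts."""
    return np.abs(eog) > threshold

def clean_epochs(eeg_epochs, eog_epochs, threshold):
    """Keep only epochs whose EOG never crosses the threshold."""
    keep = [not ocular_artifact_mask(eog, threshold).any()
            for eog in eog_epochs]
    return [ep for ep, k in zip(eeg_epochs, keep) if k]
```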
For the second type of experiments, due to the nonstationary nature of the EEG signal, we need to verify that changes in the recorded EEG signal (which may be affected by the tracking system when it is ON) are due only to underlying brain activity and not to the tracking device. To address this problem, we recorded the EEG in different mental states, for many trials and for different subjects, to ensure that such nonstationarity does not affect our conclusions. An alternative option would be to use a brain phantom to synthesize the EEG signal. In that case, the recordings with the device in the ON and OFF conditions would be completely comparable, and any changes observed would be attributable to the device. While brain phantoms have been used for algorithm validation, the conductivity and anisotropic properties of a human head and unknown factors in real-world conditions are not considered in such phantoms [8], [9]. Consequently, the use of a brain phantom would not necessarily guarantee that the tracking device does not generate any artifacts in the real EEG signal.

4.3  Methods

To investigate the effects of the tracking system on calibration and on the EEG, the collected data of both types of experiments are analyzed. The data are prefiltered within the range of 0.1-55 Hz, using a 20-point finite impulse response filter. To ensure that the tracking system does not affect the frequency content of the recorded signal, power spectral density (PSD) analysis is applied to both the calibration signal data and the EEG signals recorded from the two subjects. As we use 15 channels for each experiment, the power spectrum of 1 s epochs with 80% overlap is calculated for each of the 15 calibration signals and 15 EEG channel signals. For Task 4, PSD values are calculated for the interval starting at 1.5 s before movement execution and ending 1.5 s after movement execution. Signal windows that are contaminated by ocular artifacts are excluded from the analysis. To calculate the power spectrum of a signal, the periodogram method is used with a 512-point fast Fourier transform.

Figure 4.1: Power spectrum of the calibration signal in three different cases: transmitter OFF (solid line with circle), transmitter ON while receiver is in motion (solid line with star), and transmitter ON while receiver is not in motion (solid line with triangle).

In order to maximize the signal to noise ratio, for each task the PSDs of all channels are averaged together. Averaging highlights frequency components that are consistent across all trials, while attenuating inconsistent frequency components. Because of the nonlinearity of the signals and the differences in distance and orientation of the various electrodes from the device, any artifact the device may cause would not necessarily appear with exactly the same characteristics on all electrodes. On the other hand, averaging the 15 EEG channels could wipe out most differences between the two cases (when the tracking system is ON and when it is OFF). Thus, we also repeat a procedure similar to that explained above for each channel separately. As implied above, the same tracking system is used in all experiments; thus, if the system introduces artifacts in one or more of the EEG signals, we expect those components to introduce changes in the power spectra of each of the signals collected from both subjects and for all four mental activities. Moreover, since the transmitter generates a power of 31 mW, the electromagnetic fields can generate currents in the receiver coils in the order of milliamps [7], and our recorded EEG is in the order of microvolts [10], we expect the power of the interfering frequencies to be very large compared to the rest of the spectrum. For example, if we assume that the system introduces an artifact in the EEG at 20 Hz, then we expect to see a significant power increase at this frequency in each of the spectra of each subject in different mental tasks.
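The per-channel PSD computation described in this section (1 s epochs, 80% overlap, periodogram with a 512-point FFT at a 128 Hz sampling rate) could be sketched as follows. This is an illustration assuming SciPy, not the original analysis code.

```python
import numpy as np
from scipy.signal import periodogram

def epoch_psds(signal, fs=128, epoch_s=1.0, overlap=0.8, nfft=512):
    """Periodogram PSD of 1 s epochs with 80% overlap (512-point FFT).

    Returns the frequency axis and the ensemble-averaged PSD across epochs.
    """
    size = int(fs * epoch_s)
    step = int(size * (1.0 - overlap))
    psds = []
    for start in range(0, len(signal) - size + 1, step):
        f, pxx = periodogram(signal[start:start + size], fs=fs, nfft=nfft)
        psds.append(pxx)
    assert psds, "signal shorter than one epoch"
    return f, np.mean(psds, axis=0)
```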
4.3 shows the ensemble averages of the PSDs of the EEG for one of the 15 channels (electrode C1) for subject 2 in the four mental tasks mentioned before (Section 4.2.2). With the same reasoning mentioned above, no consistent frequency component(s) was seen in the case when the system is ON in comparison to the case when the system is OFF. In order to better quantify the effects of the electromagnetic tracking system, we now use all available information from the two subjects and for the four tasks. The frequency range of 0.1 to 55 Hz of the filtered EEG is divided into narrow 1 Hz frequency bands. Band X ranges from X Hz up to X + 1 Hz where X is an integer between 0 and 54. Each band includes X Hz but does not include X + 1 Hz. For each frequency band in the EEG we calculate the PSD changes between the cases for which the transmitter is ON and the transmitter is OFF. This is repeated for all mental tasks and for both subjects. Two bins are created for each frequency band with initial values of zero. For each band, one bin represents the number of occurrences of the case for which when the transmitter is ON, the average PSD is found to be higher in value than that with the transmitter OFF. The other bin represents the total number of cases where the average PSDs are less. For any given band, the absolute values of both bins add up to eight since we have four mental tasks for each of two subjects. 55  IONS ON BIOMEDICAL ENGINEERING, VOL. 53, NO. 3, MARCH 2006  561  ONS ON BIOMEDICAL ENGINEERING, VOL. 53, NO. 3, MARCH 2006  561  Fig. 2. Ensemble averages of the power spectral density values in the four conditions in allpower channelsspectral for subject density 1. Rows 1–4 show ensemble Figure 4.2: Ensembledifferent averages of the values in the four different condiaverages of PSD values for tasks 1–4 at the same order (dashed line: transmitter tions in all channels ON for, solid subject 1. Rows line: transmitter OFF).1–4 show ensemble averages of PSD values for tasks spectrum of the calibration signal in three different cases: 1-4ONatwhile thereceiver sameisorder (dashed line: transmitter ON, solid line: transmitter OFF). (solid line with circle), transmitter in e with star), and transmitter ON while receiver is not in motion riangle).  e rest of the spectrum. For example, if we assume that oduces an artifact in the EEG at 20 Hz, then we expect to t power increase at this frequency in each of the spectra in different mental tasks.  IV.calibration RESULTSsignal in three different cases: spectrum of the (solid line with circle), transmitter ON while receiver is in Experiments e with star), and transmitter ON while receiver is not in motion eriangle). power spectrum of the calibration data recorded under  Fig. 2. Ensemble averages of the power spectral density values in the four different conditions in all channels for subject 1. Rows 1–4 show ensemble averages of PSD values for tasks 1–4 at the same order (dashed line: transmitter ON, solid line: transmitter OFF).  ent conditions stated above. These are; 1) transmitter tter ON while the receiver has no motion; 3) transmitter e rest of the spectrum. For example, if we assume that ceiver is in motion. Fig. 1 shows the ensemble average oduces an artifact in the EEG at 20 Hz, then we expect to PSDs of all channels for the four different cases (see t power increase at this frequency in each of the spectra f the signals from the electromagnetic tracking system in different mental tasks. 
Figure 4.4: Number of increases (positive value bars) and decreases (negative value bars) in power in each narrow frequency band when the transmitter is ON as compared to when it is OFF (row 1: number of increases and decreases in the 0-27 Hz band; row 2: number of increases and decreases in the 28-55 Hz band).

The results of this analysis on PSD averages over all 15 channels, and also for one of the 15 channels (channel C1), are shown in Figs. 4.4 and 4.5, respectively. In these figures, there are two bars for each frequency band that show the number of increases and decreases in PSD when the transmitter is ON as compared to when it is OFF. Note that the same analysis is repeated
for each of the 15 channels of the EEG and, consequently, 15 graphs like the ones in Figs. 4.4 and 4.5 were generated for each EEG channel. If signals from the electromagnetic tracking system result in an introduced artifact in a particular frequency band in the measured EEG, we should observe a power increase in that band in all tasks and for both subjects. The results of our analysis on each channel, and also the ensemble averages over all channels of the EEG, show that this phenomenon does not occur in any of the 1 Hz frequency bands of the EEG (from 0 to 55 Hz). As seen in Figs. 4.4 and 4.5, after turning the transmitter ON, the power decreases in some cases and in other cases it increases. Therefore, the electromagnetic tracking system does not introduce any significant artifact in the 0-55 Hz range, and the cause of the power increases and decreases observed in different conditions and subjects can instead be attributed to brain state changes.
Figure 4.5: Number of increases (positive value bars) and decreases (negative value bars) in power in each narrow frequency band in electrode location "C1" when the transmitter is ON as compared to when it is OFF (row 1: number of increases and decreases in the 0-27 Hz band; row 2: number of increases and decreases in the 28-55 Hz band).

4.5  Conclusion

Experimental results show that the electromagnetic tracking system (FASTRAK) does not introduce any consistent frequency component in the calibration signal or the EEG signal of interest.
Since these sensors operate in the 8-14 kHz range, we would not expect them to produce a significant change in the power spectral density of the EEG; given these results, we conclude that it is safe to use this tracking system in our experimental environment, and that the observed power changes in the EEG are due to changes in the brain state or other sources and not the tracking system. Even if these sensors cause artifacts, such artifacts would occur at higher frequencies (above 55 Hz), which will be canceled by the preprocessing filter, and thus do not affect the results of our experiments. This research provides evidence for the research community that using this type of tracking system during EEG recording does not introduce any artifacts in the EEG signals recorded.

References

[1] G. Pfurtscheller and F. H. Lopes da Silva, "Event-related EEG/MEG synchronization and desynchronization: basic principles," Clin. Neurophysiol., vol. 110, no. 11, pp. 1842-1857, 1999.

[2] J. Müller-Gerking, G. Pfurtscheller, and H. Flyvbjerg, "Classification of movement-related EEG in a memorized delay task experiment," Clin. Neurophysiol., vol. 111, no. 8, pp. 1353-1365, 2000.

[3] J. Johnston, M. Rearick, and S. Slobounov, "Movement-related cortical potentials associated with progressive muscle fatigue in a grasping task," Clin. Neurophysiol., vol. 112, no. 1, pp. 68-77, 2001.

[4] G. E. Birch, S. G. Mason, and J. F. Borisoff, "Current trends in brain-computer interface research at the Neil Squire Foundation," IEEE Trans. Neural Syst. Rehab. Eng., vol. 11, no. 2, pp. 123-126, Jun. 2003.

[5] K. M. Stanney, Handbook of Virtual Environments: Design, Implementation, and Applications. Mahwah, NJ: Lawrence Erlbaum Associates, 2002.

[6] J. B. Green, P. A. St Arnold, L. Rozhkov, D. M. Strother, and N. Garrott, "Bereitschaft (readiness potential) and supplemental motor area interaction in movement generation: spinal cord injury and normal subjects," J. Rehabil. Res. Dev., vol. 40, no. 3, pp. 225-234, May-June 2003.

[7] 3SPACE FASTRAK User's Manual, Polhemus Incorporated, Colchester, VT, 2002.

[8] R. M. Leahy, J. C. Mosher, M. E. Spencer, M. X. Huang, and J. D. Lewine, "A study of dipole localization accuracy for MEG and EEG using a human skull phantom," Electroencephalogr. Clin. Neurophysiol., vol. 107, no. 2, pp. 159-173, Aug. 1998.

[9] D. L. Collins, A. P. Zijdenbos, V. Kollokian, J. G. Sled, N. J. Kabani, C. J. Holmes, and A. C. Evans, "Design and construction of a realistic digital brain phantom," IEEE Trans. Med. Imag., vol. 17, no. 3, pp. 463-468, Jun. 1998.

[10] J. Malmivuo and R. Plonsey, Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields. New York: Oxford Univ. Press, 1995.

Chapter 5

Quantitative Evaluation of Ocular Artifact Removal Methods Based on Real and Estimated EOG Signals 4

5.1  Introduction

Eye movements and blinks pose a serious problem for electroencephalogram (EEG) measurements, and their removal from measured EEG is an ongoing research problem [1-4]. Almost all current approaches to ocular artifact (OA) detection and removal [4-9] use one or more electro-oculogram (EOG) signals either directly or indirectly as a reference. A few remove OAs without the need for a reference EOG signal.
These approaches, however, are often not fully automated (e.g., they require manual selection of either a threshold [10] or of components to reject [11]), are not able to handle all sources of OA (e.g., only blinks [10]), or are not suitable for real-time use (e.g., they require large amounts of data [1]). In all cases, the performance of the OA removal (OAR) method is normally evaluated using simulated data. Artifact-free EEG data and an artifact signal are combined artificially and processed using the OAR algorithm. Some feature of the output of the algorithm (e.g., signal-to-noise ratio, or SNR) is compared to the original artifact-free EEG. For real EEG, the artifact-free ("true") EEG is not known, so the performance of the algorithm on real data is usually reported subjectively [2, 3, 12], often based on visual inspection of the resulting waveforms. Puthusserypady et al. [13] measured the ratio of the power of the artifact signal removed to the EEG signal remaining as a metric for real data, proposing that the higher the ratio, the better the performance of the algorithm. They assumed, however, that the algorithm is only being applied to data with significant OA. For data that does not contain OA, a higher ratio is not necessarily indicative of better performance. Therefore, a metric that evaluates the performance of an OAR algorithm consistently on data that has periods both with and without OA is needed.

In this chapter, we propose a new metric that indicates how much an OAR algorithm may distort the underlying EEG. When combined with the power ratio of Puthusserypady et al. [13], the metric can be used to effectively evaluate OAR algorithms on real EEG data. We applied the metric to evaluate two OAR algorithms ([5] and [7]) that are online, fully automated, and shown to perform well during OAs. We found a trade-off between how well each algorithm removes OAs and how likely it is to distort the EEG.

In addition, like most OAR methods, the algorithms used require a reference EOG signal. However, for real-world, real-time, online applications (e.g., a brain computer interface or BCI), it is highly desirable (and sometimes even a requirement) not to place electrodes on the subject's face. Although methods exist to remove OAs, in principle, without the EOG signal [3, 11], they are only suitable for offline analysis. We therefore used a linear combination of available EEG channels to estimate the EOG signal required by many OAR algorithms, thus providing an alternative to directly measuring the EOG. We explored various combinations of EEG channels, and report in this chapter the effect of using estimated (as opposed to measured) EOG on the performance of the above two OAR algorithms ([5] and [7]).

4 A version of this chapter has been published. Noureddin, B., Lawrence, P.D., Birch, G.E., 2008. Quantitative Evaluation of Ocular Artifact Removal Methods Based on Real and Estimated EOG Signals. In Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 5041-5044, August 2008.

5.2  Methods

5.2.1  Performance Metric

Typically, OAR methods are evaluated using simulated data. The top half of Figure 5.1 shows the approach generally taken for OAR. A (ocular artifact), N (noise) and B (true EEG) are generated, and the resulting Y (EEG with artifact removed) is compared to B. For example, in [2], Ee (EEG electrode signal), Y, and B are used to measure how well an algorithm removes OAs from measured EEG in simulations.
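To make the simulation-based evaluation concrete, the sketch below scores a placeholder OAR stage against a known artifact-free signal. Everything here is an illustrative assumption (synthetic signals, an identity "filter"); it is not one of the algorithms evaluated in this chapter.

import numpy as np

rng = np.random.default_rng(0)
n = 2000
B = rng.standard_normal(n)                  # simulated artifact-free "true" EEG
A = np.zeros(n)
A[500:700] = 40.0 * np.hanning(200)         # simulated blink-like artifact
N = 0.5 * rng.standard_normal(n)            # measurement noise
Ee = B + A + N                              # contaminated electrode signal

def oar_placeholder(x):
    """Placeholder OAR stage (identity); a real study would apply an OAR filter here."""
    return x

Y = oar_placeholder(Ee)
snr_db = 10 * np.log10(np.sum(B ** 2) / np.sum((Y - B) ** 2))
print(f"SNR of recovered EEG: {snr_db:.1f} dB")
# B is known only in simulation; for real recordings it is unavailable,
# which is what motivates the R and epsilon metrics defined next.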
For real data, however, A, N and B are unknown, making such a metric unsuitable.

Figure 5.1: OA removal & EOG estimation: A is the ocular artifact signal, B is the underlying brain signal, N is measurement noise, Eo is the signal measured at EOG electrode sites, Ee is the signal measured at EEG electrode sites, "OAR" is the OA removal algorithm, and Y and Ỹ are the estimations of the true EEG at the EEG electrode sites. Ideally, Y = Ỹ = B.

Puthusserypady et al. [13] proposed the following metric for real EEG:

R = \frac{\sum_{1}^{C} \sum_{1}^{N} (E_e - Y)^2}{\sum_{1}^{C} \sum_{1}^{N} Y^2}    (5.1)

where N is the number of samples and C the number of channels recorded. The numerator represents the power of the artifact signal removed from all EEG electrodes, and the denominator the power of the remaining EEG. During an artifact, higher R values are better; otherwise, lower values of R are desirable. To use an extreme example, if the OAR algorithm considered the entire signal to be artifact, Y → 0 and R → ∞: R is maximal, but clearly the algorithm has poor performance. The evaluation needs to measure how much the algorithm distorts the EEG. If the denominator instead measures the power of the measured EEG, as follows:

R' = \frac{\sum_{1}^{C} (E_e - Y)^2}{\sum_{1}^{C} E_e^2}    (5.2)

then whenever R' > 1, the power in the removed artifact signal is higher than that of the original EEG, indicating that the algorithm has likely removed too much signal or introduced new artifacts. The more often this occurs, the worse the performance of the algorithm. Therefore, we measured both R (how well an algorithm removes artifacts) and the percentage of samples ε in which R' > 1 (how often it removes too much signal).

The metrics were calculated for two OAR algorithms. The RLS algorithm described in [5] has fast convergence and is suitable for online use. The time-varying H∞ algorithm of [7] converges more slowly but was previously shown to perform slightly better than the RLS algorithm. The implementation found in [14] was used for both algorithms.
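Equations (5.1) and (5.2) translate directly into code. The sketch below is minimal and assumes (channels x samples) arrays and a stand-in OAR output; ε is computed from the per-sample R' of (5.2):

import numpy as np

def oar_metrics(Ee, Y):
    """R per eq. (5.1), per-sample R' per eq. (5.2), and epsilon in percent."""
    removed = (Ee - Y) ** 2
    R = removed.sum() / (Y ** 2).sum()                 # eq. (5.1)
    rp = removed.sum(axis=0) / (Ee ** 2).sum(axis=0)   # eq. (5.2), one value per sample
    eps = 100.0 * np.mean(rp > 1.0)                    # % of samples with R' > 1
    return R, rp, eps

rng = np.random.default_rng(0)
Ee = rng.standard_normal((57, 9000))   # e.g. 57 channels, 45 s at 200 Hz (assumed)
Y = 0.9 * Ee                           # stand-in OAR output
R, rp, eps = oar_metrics(Ee, Y)
print(f"R = {R:.3f}, mean R' = {rp.mean():.3f}, eps = {eps:.1f}%")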
During the third task, a dot was displayed in the centre of the computer monitor, and the subject was asked to stare at the dot for 45 seconds. This was repeated six times for a total of 270 seconds per session. During the fourth task, a 4x4 grid of numbers was displayed on the monitor. Each number was sequentially highlighted for 2 seconds, and the subject was instructed to follow the highlighted number. During the fifth task, the same grid was shown, but the subject was instructed to blink at least once during each 2-sec interval. The sixth and seventh tasks were the same as the fourth and fifth, respectively, but the subject was instructed to also perform a hand extension during each 2-sec interval. A total of 16 trials per session of 32 seconds each were collected from the subject for this set of tasks. During the eighth task, a small red dot was shown sequentially for 2 seconds on each corner of the computer monitor, and the subject was instructed to follow the dot. The ninth task was the same as the eighth task, except the subject was instructed to blink at least once during each 2-sec interval. A total of 16 trials per session of 24 seconds each were collected from the subject for this set of tasks. The resulting data consisted of EEG and EOG measured during a range of mental states, including periods with and without blinks and eye movements, taken over two sessions collected on different days. The first trial of each task (training data) was used to calculate w, which was then used on the remaining trials (test data) to calculate ˜ o. E  Fp  Fpz  Fp  AF3  AFz  AF4  AF7  F8  F7  FT7  T7  F5  FC5  C5  CP5  TP7  P7  AF8  F3  F1  Fz  F2  F6  F4  FC3  FC1  FCz  FC2  FC4  FC6  C3  C1  Cz  C2  C4  C6  CP3  CP1  CPz  CP2  CP4  CP6  P1  P5  Pz  P2  FC8  T8  TP8  P6 P8  PO3  POz  PO4  PO7  PO8 O1  Oz  O2  Figure 5.2: EEG electrodes  64  5.3 5.3.1  Results Performance Metric  Table 5.1 shows average values for R and  for each of the two algorithms for each of the two  sessions separately, as well as the overall average over all sessions. In each case, the results are shown for all the data collected, for only those segments known to contain ocular artifacts, and for only those segments not containing artifacts. Each EOG channel was band-pass filtered between 1-2Hz. Segments where the magnitude of the vertical EOG (difference between the electrodes above and below each eye) was greater than 10µV or the magnitude of the horizontal EOG (difference between each outer canthus and nasion electrodes) was greater than 15µV were marked as containing artifacts. All other segments were considered artifact-free. The results consistently show that for higher R values (better performance according to [13]), the value is higher, demonstrating that there is a tradeoff between how much artifact an algorithm removes and how much it risks introducing new artifacts. Table 5.1: Comparison of metrics, using actual measured EOG.  RLS session 1 RLS session 2 RLS all sessions H∞ session 1 H∞ session 2 H∞ all sessions  All data R ε (%) 2.4 16 1.6 12 2.0 14 12.0 23 5.1 20 8.7 21  OA segments only R ε (%) 3.3 23 2.6 20 2.9 22 19.0 29 9.9 27 14.0 28  Non-OA segments only R ε (%) 0.7 10 0.3 6 0.5 8 2.1 17 1.3 14 1.7 15  Table 5.2: Results using EOG approximated from 55 EEG channels.  
5.2.3  Data Collection

In order to test the new metric ε and the EOG estimation, 57 channels of EEG (including 2 linked mastoids; see Figure 5.2) and 7 channels of EOG (one above and below each eye, one on each outer canthus and one on the nasion) were collected from one subject, sitting approximately 50 cm in front of a computer monitor. All signals were sampled by the same amplifier at 200 Hz, with a low-pass filter of 70 Hz. The signals were further filtered digitally at 30 Hz. Two sessions of data were collected, and the subject was instructed to perform 9 tasks in each session. For the first task, the subject was shown a picture of clouds and asked to relax. During the second task, the subject was shown a picture with hidden faces, and asked to count the number of faces (mental task). Three sets of 45 seconds of each of the first 2 tasks were collected for each session. During the third task, a dot was displayed in the centre of the computer monitor, and the subject was asked to stare at the dot for 45 seconds. This was repeated six times for a total of 270 seconds per session. During the fourth task, a 4x4 grid of numbers was displayed on the monitor. Each number was sequentially highlighted for 2 seconds, and the subject was instructed to follow the highlighted number. During the fifth task, the same grid was shown, but the subject was instructed to blink at least once during each 2-sec interval. The sixth and seventh tasks were the same as the fourth and fifth, respectively, but the subject was instructed to also perform a hand extension during each 2-sec interval. A total of 16 trials per session of 32 seconds each were collected from the subject for this set of tasks. During the eighth task, a small red dot was shown sequentially for 2 seconds on each corner of the computer monitor, and the subject was instructed to follow the dot. The ninth task was the same as the eighth task, except the subject was instructed to blink at least once during each 2-sec interval. A total of 16 trials per session of 24 seconds each were collected from the subject for this set of tasks. The resulting data consisted of EEG and EOG measured during a range of mental states, including periods with and without blinks and eye movements, taken over two sessions collected on different days. The first trial of each task (training data) was used to calculate w, which was then used on the remaining trials (test data) to calculate Ẽo.

Figure 5.2: EEG electrodes.

5.3  Results

5.3.1  Performance Metric

Table 5.1 shows average values for R and ε for each of the two algorithms for each of the two sessions separately, as well as the overall average over all sessions. In each case, the results are shown for all the data collected, for only those segments known to contain ocular artifacts, and for only those segments not containing artifacts. Each EOG channel was band-pass filtered between 1-2 Hz. Segments where the magnitude of the vertical EOG (difference between the electrodes above and below each eye) was greater than 10 µV or the magnitude of the horizontal EOG (difference between each outer canthus and nasion electrodes) was greater than 15 µV were marked as containing artifacts. All other segments were considered artifact-free. The results consistently show that for higher R values (better performance according to [13]), the ε value is higher, demonstrating that there is a tradeoff between how much artifact an algorithm removes and how much it risks introducing new artifacts.

Table 5.1: Comparison of metrics, using actual measured EOG.

                      All data        OA segments only   Non-OA segments only
                      R     ε (%)     R     ε (%)        R     ε (%)
  RLS session 1       2.4   16        3.3   23           0.7   10
  RLS session 2       1.6   12        2.6   20           0.3   6
  RLS all sessions    2.0   14        2.9   22           0.5   8
  H∞ session 1        12.0  23        19.0  29           2.1   17
  H∞ session 2        5.1   20        9.9   27           1.3   14
  H∞ all sessions     8.7   21        14.0  28           1.7   15

5.3.2  EOG Estimation

The first trial of each of the 9 tasks was used to calculate w in (5.3) using all 55 channels of EEG. The resulting w was then used to calculate Ẽo and ultimately Ỹ. The values for R and ε were then calculated using Ỹ (see Table 5.2). Here, "all sessions" means that w was calculated using data from the first trials of session 1 only, but applied to both session 1 and session 2 data for EOG estimation and OA removal. This shows the result of using a subject-specific (as opposed to session-specific) mapping of EEG to EOG. Interestingly, the value for R is consistently higher when using estimated EOG, which might lead one to believe that using the estimated EOG is actually better than using measured EOG. However, the higher value for ε in each case shows that such a conclusion would be erroneous, since the estimation of the EOG causes both algorithms to introduce more artifacts.

Table 5.2: Results using EOG approximated from 55 EEG channels.

                      All data        OA segments only   Non-OA segments only
                      R     ε (%)     R     ε (%)        R     ε (%)
  RLS session 1       3.3   22        4.3   26           1.0   18
  RLS session 2       2.9   21        4.3   26           1.1   17
  RLS all sessions    2.8   20        3.8   25           0.9   16
  H∞ session 1        18.0  26        27.0  30           4.0   22
  H∞ session 2        8.4   24        15.0  28           2.9   20
  H∞ all sessions     14.0  25        22.0  29           3.7   21

The choice of EEG electrodes used to estimate the EOG affects the quality of the estimation. Therefore, R and ε were determined for w and Ỹ calculated using a variety of combinations of EEG electrodes: all 55 electrodes (full head); Fp, AF, F, FC and C electrodes (anterior); Fp, AF and F electrodes (frontal); Fp and AF electrodes; and Fpz, Cz, AF7 and AF8 electrodes. The results are shown in Table 5.3 and Table 5.4 for the RLS and H∞ algorithms, respectively, for "all sessions" as described above.

Table 5.3: Results of using measured EOG and EOG approximated from different configurations of EEG using the RLS algorithm.

                      All data        OA segments only   Non-OA segments only
                      R     ε (%)     R     ε (%)        R     ε (%)
  EOG                 2.0   14        2.9   22           0.5   8
  Full head           2.8   20        3.8   25           0.9   16
  Anterior only       2.9   21        4.0   25           0.9   17
  Frontal only        3.3   21        4.5   24           1.1   17
  Fp/AF only          3.2   18        4.3   22           1.1   14
  Fpz+Cz+AF7+AF8      5.3   19        6.9   23           2.3   16

Table 5.4: Results of using measured EOG and EOG approximated from different configurations of EEG using the H∞ algorithm.

                      All data        OA segments only   Non-OA segments only
                      R     ε (%)     R     ε (%)        R     ε (%)
  EOG                 8.7   21        14.0  28           1.7   15
  Full head           14.0  25        22.0  29           3.7   21
  Anterior only       14.0  26        22.0  30           3.7   22
  Frontal only        13.0  25        21.0  29           3.5   21
  Fp/AF only          12.0  23        18.0  26           3.2   19
  Fpz+Cz+AF7+AF8      15.0  23        22.0  27           4.7   20

5.4  Discussion

The results show that the H∞ algorithm removes more OA (R is consistently higher in Table 5.4) than the RLS algorithm, as reported in [7], but that it may also distort the EEG more (ε is consistently higher in Table 5.1). The choice of algorithm will depend on whether the preference is to maximize OA removal or minimize EEG distortion. In addition, for both algorithms, R is consistently lower when there is no OA and higher when there is OA, which confirms that the algorithms remove more when an OA exists. However, during OA removal, both algorithms also potentially remove more EEG signal on average during periods with OA (ε is higher).
A comparison of R and ε for the algorithms using both measured and estimated EOG signals shows that, when estimating EOG, more OA is removed (R is higher for estimated EOG than for measured EOG) and the likelihood of distorting the EEG is increased from 14% and 21% using measured EOG to anywhere between 18-21% and 23-26% (see "All data" in Table 5.3 and Table 5.4) for the RLS and H∞ algorithms, respectively. In both cases, using estimated EOG affects the performance when there are no artifacts (see "Non-OA segments only" in Table 5.3 and Table 5.4). Ultimately, whether the effect is sufficiently small depends on the application. Overall, the H∞ algorithm is less affected by using estimated EOG, regardless of the combination of EEG electrodes used. For both algorithms, using the Fpz, Cz, AF7 and AF8 electrodes seems to give the best estimation results.

5.5  Conclusions

The preliminary results reported in this chapter demonstrate that our new metric, which measures the likelihood of an OAR algorithm to distort the EEG, effectively complements a previous metric, which measures how much artifact is removed. Both metrics are suitable for use with real EEG data, and both are needed to properly evaluate the performance of OAR algorithms. Using the metrics with two existing OAR algorithms showed that the more OA each algorithm removes, the more likely it is to distort the EEG.

The two algorithms evaluated, like most other OAR algorithms, require a reference EOG signal, yet for some applications (e.g., BCI), it is preferable or necessary to avoid attaching electrodes around the eyes. The new metric was therefore also used to assess how much estimating the reference EOG signal from EEG electrodes (instead of measuring the EOG directly) would affect the performance of the selected algorithms. The results show that using four specific EEG electrodes (Fpz, Cz, AF7 and AF8) to estimate the reference EOG signal increased the likelihood of distorting the EEG from 14% to 19% for one algorithm, and from 21% to 23% for the other, which may be sufficiently low for applications where it is not desirable or possible to use EOG electrodes.

The new metric can be used to analyze data from more subjects to verify the results above. Also, since OA removal is a pre-processing step for other applications, the ultimate performance of OAR algorithms will need to be measured in the context of specific applications. For example, analyzing the error rate or false positive rate of a BCI would provide an additional, practical measure of the effectiveness of OAR algorithms on real data.

References

[1] G. Gomez-Herrero, W. D. Clercq, H. Anwar, O. Kara, K. Egiazarian, S. V. Huffel, and W. V. Paesschen, "Automatic removal of ocular artifacts in the EEG without an EOG reference channel," in Proceedings of the 7th Nordic Signal Processing Symposium (NORSIG), 2006, pp. 130-133.

[2] J. J. M. Kierkels, G. J. M. van Boxtel, and L. L. M. Vogten, "A model-based objective evaluation of eye movement correction in EEG recordings," IEEE Transactions on Biomedical Engineering, vol. 53, no. 2, pp. 246-253, 2006.

[3] K. H. Ting, P. C. W. Fung, C. Q. Chang, and F. H. Y. Chan, "Automatic correction of artifact from single-trial event-related potentials by blind source separation using second order statistics only," Medical Engineering & Physics, vol. 28, no. 8, pp. 780-794, 2006.

[4] G. L. Wallstrom, R. E. Kass, A. Miller, J. F. Cohn, and N. A.
Fox, "Automatic correction of ocular artifacts in the EEG: a comparison of regression-based and component-based methods," International Journal of Psychophysiology, vol. 53, no. 2, pp. 105-119, 2004.

[5] P. He, G. Wilson, and C. Russell, "Removal of ocular artifacts from electro-encephalogram by adaptive filtering," Medical and Biological Engineering and Computing, vol. 42, no. 3, pp. 407-412, 2004.

[6] T. Liu and D. Yao, "Removal of the ocular artifacts from EEG data using a cascaded spatio-temporal processing," Computer Methods and Programs in Biomedicine, vol. 83, no. 2, pp. 95-103, 2006.

[7] S. Puthusserypady and T. Ratnarajah, "H∞ adaptive filters for eye blink artifact minimization from electroencephalogram," IEEE Signal Processing Letters, vol. 12, no. 12, pp. 816-819, 2005.

[8] A. Schlogl, C. Keinrath, D. Zimmermann, R. Scherer, R. Leeb, and G. Pfurtscheller, "A fully automated correction method of EOG artifacts in EEG recordings," Clinical Neurophysiology, vol. 118, no. 1, pp. 98-104, 2007.

[9] D. Talsma, "Auto-adaptive averaging: Detecting artifacts in event-related potential data using a fully automated procedure," Psychophysiology, vol. 45, no. 2, pp. 216-228, 2008.

[10] Y. Li, "An extended EM algorithm for joint feature extraction and classification in brain-computer interfaces," Neural Computation, vol. 18, no. 11, p. 2730, 2006.

[11] C. A. Joyce, I. F. Gorodnitsky, and M. Kutas, "Automatic removal of eye movement and blink artifacts from EEG data using blind component separation," Psychophysiology, vol. 41, no. 2, pp. 313-325, 2004.

[12] N. Ille, P. Berg, and M. Scherg, "Artifact correction of the ongoing EEG using spatial filters based on artifact and brain signal topographies," Journal of Clinical Neurophysiology, vol. 19, no. 2, pp. 113-124, March 2002. [Online]. Available: http://ovidsp.ovid.com/ovidweb.cgi?T=JS&NEWS=N&PAGE=fulltext&AN=00004691-200203000-00002&D=ovfte

[13] S. Puthusserypady and T. Ratnarajah, "Robust adaptive techniques for minimization of EOG artefacts from EEG signals," Signal Processing, vol. 86, no. 9, pp. 2351-2363, Sep. 2006.

[14] A. Delorme and S. Makeig, "EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis," Journal of Neuroscience Methods, vol. 134, no. 1, pp. 9-21, 2004.

Chapter 6

Effects of Task and EEG-based Reference Signal on Performance of On-line Ocular Artifact Removal from Real EEG 5

6.1  Introduction

Eye movements and blinks pose a serious problem for electroencephalogram (EEG) measurements, and their removal from measured EEG is an ongoing research problem [1-4]. Almost all current approaches to ocular artifact (OA) detection and removal [4-9] use one or more electro-oculogram (EOG) signals either directly or indirectly as a reference. A few remove OAs without the need for a reference EOG signal. These approaches, however, are often not fully automated (e.g., they require manual selection of either a threshold [10] or of components to reject [11]), are not able to handle all sources of OA (e.g., only blinks [10]), or are not suitable for real-time use (e.g., they require large amounts of data [1]). In all cases, the performance of the OA removal (OAR) method is normally evaluated using simulated data. Artifact-free EEG data and an artifact signal are combined artificially and processed using the OAR algorithm. Some feature of the output of the algorithm (e.g., signal-to-noise ratio, or SNR) is compared to the original artifact-free EEG.
For real EEG, the artifact-free ("true") EEG is not known, so the performance of the algorithm on real data is usually reported subjectively [2, 3, 12], often based on visual inspection of the resulting waveforms. Puthusserypady et al. [13] measured the ratio of the power of the artifact signal removed to the EEG signal remaining as a metric for real data, proposing that the higher the ratio, the better the performance of the algorithm. They assumed, however, that the algorithm is only being applied to data with significant OA. For data that does not contain OA, a higher ratio is not necessarily indicative of better performance. The metric proposed in [14] indicates the likelihood that a given algorithm distorts the underlying EEG. Using both the metrics in [13] and [14] separately, the performance of an OAR algorithm during periods both with and without OA could be evaluated. However, a single metric that evaluates the performance of an OAR algorithm consistently on data that has periods both with and without OA would be preferable.

5 A version of this chapter has been published. Noureddin, B., Lawrence, P.D., Birch, G.E., 2009. Effects of Task and EEG-based Reference Signal on Performance of On-line Ocular Artifact Removal from Real EEG. In Proceedings of the 4th IEEE EMBS Conference on Neural Engineering, pp. 614-617, April 2009.

In this chapter, the ability of the metric proposed by Puthusserypady et al. [13] to measure how much artifact is removed is combined with a measure of how much an OA removal algorithm is likely to distort underlying EEG in a single metric. The metric can be used to effectively evaluate OAR algorithms on real EEG data. The metric was applied to evaluate the H∞ algorithm [7], which is online, fully automated, and shown to perform well during OAs for multiple subjects. Further, the metric is used to evaluate the performance of the algorithm during periods of (i) eye movements and blinks (EM), (ii) a motor related potential (MRP), and (iii) neither EM nor MRP.

In addition, like most OAR methods, the H∞ algorithm requires a reference EOG signal. However, for real-world, real-time, online applications (e.g., a brain computer interface or BCI), it is highly desirable (and sometimes even a requirement) not to place electrodes on the subject's face. Although methods exist to remove OAs, in principle, without the EOG signal [3][11], they are only suitable for offline analysis. Therefore, a linear combination of three EEG channels was used as the reference signal instead of the measured EOG signal required by many OAR algorithms, thus providing an alternative to directly measuring the EOG. The effect of using the EEG-based reference signal on the performance of the OAR algorithm was investigated and is reported.

6.2  Methods

Typically, OAR methods are evaluated using simulated data. The top half of Figure 6.1 shows the approach generally taken for OAR. A (ocular artifact), N (noise) and B (true EEG) are generated, and the resulting Y (EEG with artifact removed) is compared to B. For example, in [2], Ee (EEG electrode signal), Y, and B are used to measure how well an algorithm removes OAs from measured EEG in simulations. For real data, however, A, N and B are unknown, making such a metric unsuitable.
Figure 6.1: OA removal & EOG estimation: A is the ocular artifact signal, B is the underlying brain signal, N is measurement noise, Eo is the signal measured at EOG electrode sites, Ee is the signal measured at EEG electrode sites, "OAR" is the OA removal algorithm, and Y and Ỹ are the estimations of the true EEG at the EEG electrode sites. Ideally, Y = Ỹ = B.

Puthusserypady et al. [13] proposed the following metric for real EEG:

R = \frac{\sum_{1}^{C} \sum_{1}^{N} (E_e - Y)^2}{\sum_{1}^{C} \sum_{1}^{N} Y^2}    (6.1)

where N is the number of samples and C the number of channels recorded. The numerator represents the power of the artifact signal removed from all EEG electrodes, and the denominator the power of the remaining EEG. During an artifact, higher R values are better; otherwise, lower values of R are desirable. To use an extreme example, if the OAR algorithm considered the entire signal to be artifact, Y → 0 and R → ∞: R is maximal, but clearly the algorithm has poor performance. The evaluation needs to measure how much the algorithm distorts the EEG. If the denominator instead measures the power of the measured EEG, as follows:

R' = \frac{\sum_{1}^{C} (E_e - Y)^2}{\sum_{1}^{C} E_e^2}    (6.2)

then whenever R' > 1, the power in the removed artifact signal is higher than that of the original EEG, indicating that the algorithm has likely removed too much signal or introduced new artifacts. The more often this occurs, the worse the performance of the algorithm. Therefore, both R (how well an algorithm removes artifacts) and the percentage of samples ε in which R' > 1 (how often it removes too much signal) were measured. The results were combined into a single metric:

Q = R(1 − ε)    (6.3)

The metric Q was calculated for the time-varying H∞ algorithm of [7], which is an adaptive noise cancellation technique that makes no assumptions on the statistical nature of the signals. It finds filter weights ŵ such that Y = Ee − Eo·ŵ, where the weights are adapted at each sample n using the update equations of [7], with the positive constants εg, η and ρ set to 1.5, 10−5 and 0.005, respectively; the implementation found in [15] was used.

6.2.1  EOG Approximation

As shown in the bottom half of Figure 6.1, the measured EEG can be used to estimate the signals measured simultaneously at the EOG sites. For this study, EEG electrodes AF7, AF8 and Fp were used to estimate the EOG. Specifically, for horizontal eye movements, the difference between AF7 and AF8 was used, and for vertical eye movements Fp was used. Unlike in [14], the estimation is done without the need to ever measure EOG.
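The noise-cancellation structure Y = Ee − Eo·ŵ and the Q metric can be illustrated as follows. The adaptive update shown is a plain normalized-LMS rule, used purely as a stand-in for the H∞ recursion of [7] (whose update equations this sketch does not reproduce); the signals and constants are synthetic assumptions.

import numpy as np

def nlms_cancel(E, X, mu=0.5, eps=1e-6):
    """Adaptive noise cancellation Y = E - X.w with a normalized-LMS update.
    E: (samples,) contaminated channel; X: (samples, refs) reference inputs.
    A stand-in for the H-infinity filter of [7]; the update rule differs."""
    n, r = X.shape
    w = np.zeros(r)
    Y = np.empty(n)
    for t in range(n):
        x = X[t]
        Y[t] = E[t] - x @ w                 # cleaned sample (a priori error)
        w += mu * Y[t] * x / (eps + x @ x)  # weight adaptation
    return Y

rng = np.random.default_rng(0)
n = 4000
ref = rng.standard_normal((n, 2))           # e.g. [AF7 - AF8, Fp] reference
E = rng.standard_normal(n) + ref @ np.array([1.5, -0.8])
Y = nlms_cancel(E, ref)

R = np.sum((E - Y) ** 2) / np.sum(Y ** 2)   # eq. (6.1) for one channel
rp = (E - Y) ** 2 / E ** 2                  # eq. (6.2) per sample
eps_frac = np.mean(rp > 1.0)                # epsilon as a fraction
Q = R * (1.0 - eps_frac)                    # eq. (6.3)
print(f"R = {R:.2f}, eps = {100 * eps_frac:.1f}%, Q = {Q:.2f}")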
Then the grid was shown again, but the subject was instructed to also perform a hand extension during each 2-sec interval. This was repeated five times for each subject. Finally, the dot was again displayed, and the subject asked to stare at the dot for another 45 seconds. All the data for each subject was concatenated. The resulting data consisted of EEG and EOG measured during periods with and without blinks and eye movements, and with and without an MRP (hand extension). Table 6.1: Comparison of metrics, using mean and standard deviation over all subjects. EOG ref indicates when EOG electrodes were used as a reference signal, and EEG ref when 3 EEG electrodes were used as a reference for the algorithm. R m  s  m  s  m  s  Idle  s  Q  EOG ref  EM  m  ε (%)  R'  EOG ref  4.83  3.11  0.69  0.08  21.6  3.8  3.72  2.32  EEG ref  3.96  3.08  0.62  0.08  18.9  3.8  3.14  2.35  MRP  Reference signal  EOG ref EEG ref  1.19 1.21  0.62 0.85  0.72 0.68  0.20 0.25  25 23.8  10.1 13.6  0.85 0.85  0.36 0.5  6.3 6.3.1  0.49  0.13  0.40  0.06  9.1  2.3  0.45  0.11  EEG ref  0.77  0.52  0.46  0.09  11.3  3.9  0.67  0.42  Results Performance Metric  Each EOG channel was band-pass filtered between 1-2Hz. A vertical EOG signal was obtained using the average of the difference between the nasion electrode and the two electrodes beneath each eye. A horizontal EOG signal was obtained by taking the difference between the two electrodes at the outer canthi of each eye. Segments where the magnitude of the vertical EOG was greater than 10µV or the magnitude of the horizontal EOG was greater than 15µV were marked as EM. During the hand extension, a physical switch provided the timing of the movement, which was marked in the data as MRP. The algorithm was first applied using the EOG channels as a reference to the entire data for each subject, and the values for R, 73  Table 6.2: ANOVA of Q metric. “T” is the effect of task (EM, MRP or idle), and “R” is the effect of the choice of reference signal (EOG-based or EEG-based). “TxR” is the interaction between the two factors. Values in parentheses are for the R part of the metric.  Source T R TxR  Type III Sum of Squares 39.86 (0.328) 0.09 (0.001) 0.68 (0.017)  df 2 1 2  Mean Square 19.93 (0.164) 0.09 (0.001) 0.34 (0.008)  F 7.63 (6.86) 0.24 (0.21) 2.73 (3.52)  p 0.02 (0.03) 0.66 (0.68) 0.14 (0.10)  and Q calculated for samples containing only EM, samples containing only MRP, and samples containing neither EM nor MRP. The algorithm was then applied again to the same data, using the AF7, AF8 and Fp channels as a reference, and R,  and Q calculated again for each of the  three sets of samples. Table 6.1 shows the mean and standard deviation values of R, R ,  and Q over all subjects,  for periods of “idle” (no EM or MRP) , EM and MRP. The data in bold was obtained using an EOG reference signal, and the rest was obtained using the 3 EEG channels as the reference signal. An examination of the mean values for the EOG reference reveals that more OA is removed during EM than idle periods (R is higher), but that the algorithm is also more likely to distort underlying EEG ( is higher). The metric Q is also higher during EM periods (3.72 vs. 0.45), but the increase is not as high as for R (4.83 vs. 0.49), since there is more likelihood of distortion. Similarly, during an MRP, the algorithm removes more OA than during idle periods, but is also most likely to distort the EEG. 
The algorithm was first applied using the EOG channels as a reference to the entire data for each subject, and the values for R, ε and Q were calculated for samples containing only EM, samples containing only MRP, and samples containing neither EM nor MRP. The algorithm was then applied again to the same data, using the AF7, AF8 and Fp channels as a reference, and R, ε and Q were calculated again for each of the three sets of samples.

Table 6.1 shows the mean (m) and standard deviation (s) values of R, R', ε and Q over all subjects, for periods of "idle" (no EM or MRP), EM and MRP. Results obtained using an EOG reference signal are labelled "EOG ref", and those obtained using the 3 EEG channels as the reference signal are labelled "EEG ref". An examination of the mean values for the EOG reference reveals that more OA is removed during EM than idle periods (R is higher), but that the algorithm is also more likely to distort underlying EEG (ε is higher). The metric Q is also higher during EM periods (3.72 vs. 0.45), but the increase is not as high as for R (4.83 vs. 0.49), since there is more likelihood of distortion. Similarly, during an MRP, the algorithm removes more OA than during idle periods, but is also most likely to distort the EEG. This is reflected in the metric Q: while the value for R during an MRP is 2.4 times as large as during idle periods (1.19 vs. 0.49), the value for Q is only 1.9 times as large (0.85 vs. 0.45). That is, the increase in Q is reduced because of the increased likelihood of EEG distortion.

Table 6.1: Comparison of metrics, using mean (m) and standard deviation (s) over all subjects. EOG ref indicates when EOG electrodes were used as a reference signal, and EEG ref when 3 EEG electrodes were used as a reference for the algorithm.

                          R            R'           ε (%)         Q
  Task  Reference         m     s      m     s      m     s      m     s
  EM    EOG ref           4.83  3.11   0.69  0.08   21.6  3.8    3.72  2.32
        EEG ref           3.96  3.08   0.62  0.08   18.9  3.8    3.14  2.35
  MRP   EOG ref           1.19  0.62   0.72  0.20   25    10.1   0.85  0.36
        EEG ref           1.21  0.85   0.68  0.25   23.8  13.6   0.85  0.5
  Idle  EOG ref           0.49  0.13   0.40  0.06   9.1   2.3    0.45  0.11
        EEG ref           0.77  0.52   0.46  0.09   11.3  3.9    0.67  0.42

6.3.2  Effects of Task and Choice of Reference Signal

Table 6.1 also shows two additional findings. First, the mean values for all metrics are consistently higher during EM periods than during both idle and MRP periods, and also higher during MRP than idle. That is, whether the algorithm is applied to periods of EM or MRP seems to affect the performance of the algorithm: it both removes more from the EEG, and is more likely to distort it. Second, the mean values of all metrics do not seem to change much from when the reference signal used by the algorithm is based on EOG channels to when it is based on frontal EEG channels. Figure 6.2 shows results at Fpz during a blink for one subject using each reference signal.

Figure 6.2: Comparison of EOG- and EEG-based OA removal during a blink.

To verify these findings, an ANOVA was performed on the data from all subjects using the Q metric, with the results shown in Table 6.2. For this study, the task (i.e., whether a subject was performing an eye movement or hand extension) significantly (p < .05) affects the performance of the algorithm. Further, there is no evidence that the choice of reference signal (based on EOG electrodes or on EEG electrodes) significantly (p < .05) affects the performance of the algorithm. Finally, there is no evidence that the effect of the task depends on the choice of reference signal.

Table 6.2: ANOVA of the Q metric. "T" is the effect of task (EM, MRP or idle), and "R" is the effect of the choice of reference signal (EOG-based or EEG-based). "TxR" is the interaction between the two factors. Values in parentheses are for the R part of the metric.

  Source   Type III Sum of Squares   df   Mean Square     F             p
  T        39.86 (0.328)             2    19.93 (0.164)   7.63 (6.86)   0.02 (0.03)
  R        0.09 (0.001)              1    0.09 (0.001)    0.24 (0.21)   0.66 (0.68)
  TxR      0.68 (0.017)              2    0.34 (0.008)    2.73 (3.52)   0.14 (0.10)
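For illustration, a two-factor Type III ANOVA of this form can be run with statsmodels. The data frame below is synthetic, and the exact design of the analysis in this study (e.g., its treatment of subject) may differ; the snippet only shows the mechanics.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format table: one Q value per subject x task x reference.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "Q": rng.random(24),
    "task": np.tile(np.repeat(["EM", "MRP", "idle"], 2), 4),
    "ref": np.tile(["EOG", "EEG"], 12),
    "subject": np.repeat([1, 2, 3, 4], 6),
})
# Sum-coded contrasts so that Type III sums of squares are meaningful.
model = smf.ols("Q ~ C(task, Sum) * C(ref, Sum)", data=df).fit()
print(sm.stats.anova_lm(model, typ=3))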
6.4  Discussion

The proposed Q metric measures the ability of an algorithm to remove OA as well as the likelihood that it will distort underlying EEG. The results show that the value for Q is consistent with what is expected from the R and ε metrics previously introduced.

Using the Q metric, the results also show that the performance of the algorithm is significantly affected by the presence of EM or MRP. This means that when reporting the performance of an OA removal algorithm, it is necessary to report its performance during periods of EM and MRP separately. Further, there is no evidence that using a reference signal based on 3 frontal EEG electrodes significantly affects the performance of the algorithm. In this study, the mean value for Q during EM is 16% lower when using EEG channels as the reference, and less than 1% lower during MRP. Thus, for applications where it is not possible or desirable to use EOG electrodes, it may be possible to use the Fp, AF7 and AF8 electrodes as a reference signal to the H∞ algorithm for OA removal, with no significant decrease in performance.

6.5  Conclusions

The effectiveness of a new metric for quantitatively measuring the ability of OA removal methods both to remove artifacts and to minimize distortion of EEG was demonstrated. It was also shown that the H∞ algorithm's performance is significantly (p < .05) different during periods with eye movements and blinks (EM), during a hand extension task (MRP), and during periods without EM or MRP. Thus, it is necessary to report the performance of OA removal algorithms during periods of EM and MRP separately. Finally, since the H∞ algorithm – like most other OA removal algorithms – requires a reference signal, the effect of using 3 frontal EEG electrodes as the reference instead of the usual EOG electrodes was examined. In this study, no evidence was found that using the EEG electrodes significantly (p < .05) affected the performance of the algorithm. On average, the performance decreased by 16% during EM and by less than 1% during MRP. Therefore, it is possible to avoid the use of EOG electrodes for OA removal, which is necessary or desirable for certain applications such as BCI.

Since OA removal is a pre-processing step for other applications, the ultimate performance of OA removal algorithms will need to be measured in the context of specific applications. For example, analyzing the error rate or false positive rate of a BCI would provide an additional, practical measure of the effectiveness of OAR algorithms on real data.

References

[1] G. Gomez-Herrero, W. D. Clercq, H. Anwar, O. Kara, K. Egiazarian, S. V. Huffel, and W. V. Paesschen, "Automatic removal of ocular artifacts in the EEG without an EOG reference channel," in Proceedings of the 7th Nordic Signal Processing Symposium (NORSIG), 2006, pp. 130-133.

[2] J. J. M. Kierkels, G. J. M. van Boxtel, and L. L. M. Vogten, "A model-based objective evaluation of eye movement correction in EEG recordings," IEEE Transactions on Biomedical Engineering, vol. 53, no. 2, pp. 246-253, 2006.

[3] K. H. Ting, P. C. W. Fung, C. Q. Chang, and F. H. Y. Chan, "Automatic correction of artifact from single-trial event-related potentials by blind source separation using second order statistics only," Medical Engineering & Physics, vol. 28, no. 8, pp. 780-794, 2006.

[4] G. L. Wallstrom, R. E. Kass, A. Miller, J. F. Cohn, and N. A. Fox, "Automatic correction of ocular artifacts in the EEG: a comparison of regression-based and component-based methods," International Journal of Psychophysiology, vol. 53, no. 2, pp. 105-119, 2004.

[5] P. He, G. Wilson, and C. Russell, "Removal of ocular artifacts from electro-encephalogram by adaptive filtering," Medical and Biological Engineering and Computing, vol. 42, no. 3, pp. 407-412, 2004.

[6] T. Liu and D. Yao, "Removal of the ocular artifacts from EEG data using a cascaded spatio-temporal processing," Computer Methods and Programs in Biomedicine, vol. 83, no. 2, pp. 95-103, 2006.

[7] S. Puthusserypady and T. Ratnarajah, "H∞ adaptive filters for eye blink artifact minimization from electroencephalogram," IEEE Signal Processing Letters, vol. 12, no. 12, pp. 816-819, 2005.

[8] A. Schlogl, C. Keinrath, D. Zimmermann, R. Scherer, R. Leeb, and G. Pfurtscheller, "A fully automated correction method of EOG artifacts in EEG recordings," Clinical Neurophysiology, vol. 118, no. 1, pp. 98-104, 2007.

[9] D. Talsma, "Auto-adaptive averaging: Detecting artifacts in event-related potential data using a fully automated procedure," Psychophysiology, vol. 45, no. 2, pp. 216-228, 2008.

[10] Y. Li, "An extended EM algorithm for joint feature extraction and classification in brain-computer interfaces," Neural Computation, vol. 18, no. 11, p. 2730, 2006.

[11] C. A. Joyce, I. F. Gorodnitsky, and M. Kutas, "Automatic removal of eye movement and blink artifacts from EEG data using blind component separation," Psychophysiology, vol. 41, no. 2, pp. 313-325, 2004.

[12] N. Ille, P. Berg, and M. Scherg, "Artifact correction of the ongoing EEG using spatial filters based on artifact and brain signal topographies," Journal of Clinical Neurophysiology, vol. 19, no. 2, pp. 113-124, March 2002. [Online]. Available: http://ovidsp.ovid.com/ovidweb.cgi?T=JS&NEWS=N&PAGE=fulltext&AN=00004691-200203000-00002&D=ovfte

[13] S. Puthusserypady and T.
Ratnarajah, "Robust adaptive techniques for minimization of EOG artefacts from EEG signals," Signal Processing, vol. 86, no. 9, pp. 2351-2363, Sep. 2006.

[14] B. Noureddin, P. D. Lawrence, and G. E. Birch, "Quantitative evaluation of ocular artifact removal methods based on real and estimated EOG signals," in Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2008), 2008, pp. 5041-5044.

[15] A. Delorme and S. Makeig, "EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis," Journal of Neuroscience Methods, vol. 134, no. 1, pp. 9-21, 2004.

Chapter 7

Online Removal of Eye Movement and Blink EEG Artifacts Using a High Speed Eye Tracker 6

7.1  Introduction

An ongoing research problem [1-4] is the online removal of the effects of eye movements and blinks from the electroencephalogram (EEG). Some ocular artifact (OA) removal techniques require only EEG signals. However, they need manual selection of a threshold [5] or of components to reject [6], are not able to handle all sources of OA (e.g., only blinks [5]), or are not suitable for real-time use (e.g., they require large amounts of data [1]). Other approaches to automated, online OA removal [4, 7-11] use electro-oculogram (EOG) signals as a reference.

A novel approach is presented for using an eye tracker [12] to replace EOG for any OA removal method that requires an EOG reference, thus making any such method more suitable for practical, online applications (e.g., a brain computer interface), especially where only scalp EEG measurements are available (e.g., no EOG electrodes are attached to the face). The approach uses the eye tracker to generate reference signals related to eye movements and blinks. Using the eye tracker is also useful in applications where it can be combined with EEG for more powerful user interfaces.

In order to handle both eye movements and blinks, a new blink signal generation (BSG) algorithm was also developed and is described in this chapter. Unlike other approaches [13-17], it can operate in low-lighting conditions without color information, does not require any training, calibration or user-specific parameters, and avoids complicated corner detection and feature extraction image processing algorithms. Instead, it simply tracks changes in the intensity of the region around the pupil in the images from the eye tracker. Neither the overall approach to using an eye tracker as an EOG replacement nor the BSG algorithm requires calibration or user intervention. Using an eye tracker is expected to reduce the unintended distortion of EEG signals by OA removal methods often caused by using an EOG reference (although investigating such reductions is beyond the scope of this chapter).

6 A version of this chapter has been submitted for publication. Noureddin, B., Lawrence, P.D., Birch, G.E., 2010. Online Removal of Eye Movement and Blink EEG Artifacts Using a High Speed Eye Tracker.

Kierkels et al. [2] also proposed the use of an eye tracker for OA removal. They used an eye tracker to generate pupil positions, which were then used as inputs to a Kalman Filter to remove the effects of eye movements. Although they achieved superior results over other OA
Although the Kierkels et al. method was shown to work well for eye movements, it was not able to handle blinks, and it required a 30-second starting period to stabilize the Kalman filter (which required manual tuning), during which the OA removal algorithm could not be used reliably. The approach described here, which includes the use of the new BSG algorithm, can handle both eye movements and blinks, and does not require manual tuning, calibration or a stabilization period. Finally, the eye tracker used by Kierkels et al. operated at 50Hz, while it has been shown [18] that a high-speed eye tracker is required in order not to miss important dynamics of eye movements and blinks. The present work uses a high-speed eye tracker capable of frame rates of over 400Hz.

In the present work, the performance on real EEG of two adaptive filters [7][9] using an eye tracker is compared with using EOG and with using frontal EEG as a reference. To measure performance on real EEG, the “R” metric [19] is used to measure the amount of artifact removed during different types of eye movements and blinks at 52 different electrodes spread across the scalp. It is shown that for the OA removal algorithms investigated, during some eye movements, using an eye tracker is more effective at removing OA at frontal electrodes than using either EOG or frontal EEG.

In summary, to the best of the authors’ knowledge, this is the first work that replaces the EOG with a high-speed eye tracker as a reference for any online OA removal method. It can successfully remove the effects of both eye movements and blinks without any calibration or manual tuning. The removal of blinks is made possible by a novel, fully automated online algorithm for extracting the time course of a blink from eye tracker images, whose output is shown to be as effective as EOG for removing blink artifacts. It is also the first work to fully evaluate the performance of online OA removal using an eye tracker over electrodes spread across the entire scalp for both eye movements and blinks of different amplitudes.

7.2 Methods

The general approach to OA removal is shown in Fig. 7.1. Brain electric signals are assumed to conduct to the scalp (VB), combining with ocular signals from eye movements and blinks (VO), other artifact signals (VA), and measurement and ambient noise (VN). Generally, VA is assumed to be zero, and VN is modeled as white Gaussian noise. Further, these EEG sources are assumed to combine linearly to form the measured scalp potentials.

Figure 7.2: Overall operation of the eye tracker for every image captured. The signals HET, VET and BET are extracted as a reference for OA removal.
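As a brief illustration of this additive model (this example is not from the original study; all signals, amplitudes and frequencies below are synthetic stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 200                                  # sampling rate used in this work (Hz)
t = np.arange(5 * fs) / fs                # 5 s of samples

v_b = 10 * np.sin(2 * np.pi * 10 * t)     # stand-in for brain activity (10 Hz)
v_o = np.where((t > 2.0) & (t < 2.3), 80.0, 0.0)  # crude blink-like deflection
v_n = rng.normal(0.0, 1.0, t.size)        # white Gaussian measurement noise

e = v_b + v_o + v_n                       # measured scalp potential, with V_A = 0
```

An OA removal transform TOAR then maps e to an estimate y of v_b.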
The ocular artifact removal method TOAR is a transformation that aims to remove VO from E, resulting in Y (ideally Y = VB). Several methods [4, 7–11] shown to be effective at OA removal use the EOG as a reference to calculate components to be subtracted from E, with Y being the difference. That is:

Y = E − WX   (7.1)

where X is a “reference input”, or vector of measured reference signals (in this case EOG), and W is an array of filter coefficients. Two such methods are based on (a) the recursive least squares (RLS) algorithm [7], which has fast convergence, and (b) the time-varying H∞ algorithm [9], which converges more slowly but has been shown to perform better than RLS for OA removal. Both algorithms adapt W at each sample such that the total power of Y is minimized (see [7][9] for detailed equations). The effect of using a reference X based on an eye tracker signal and frontal EEG channels instead of EOG was evaluated using each of the RLS and H∞ algorithms (see [20] for the implementation used).

7.2.1 Data Collection

Simultaneous EEG, EOG and eye tracking data were collected from 4 subjects (1 female and 3 males) aged 19-63, sitting approximately 50cm in front of a computer monitor. Each subject was instructed to perform 4 tasks (see Table 7.1), providing data during 9 types of OA: small left saccade (SL), small right saccade (SR), small upward saccade (SU), large left saccade (LL), large right saccade (LR), large upward saccade (LU), large downward saccade (LD), small blink (SB), and large blink (LB).

Table 7.1: Data collection tasks. Numbers in brackets show how many times each task was performed.
1. Stare at dot in center of computer monitor (45 secs) [3]
2. Follow sequentially highlighted number in 4x4 grid on monitor for 2 s/num [3]
3. Follow dot on each corner (clockwise order) of monitor for 2 s [3]
4. Follow dot on each corner (counterclockwise order) of monitor for 2 s [3]

For each subject, 55 EEG channels (Fp1, Fpz, Fp2, AF7, AF3, AFz, AF4, AF8, F7, F5, F3, F1, Fz, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCz, FC2, FC4, FC6, FT8, C5, C3, C1, Cz, C2, C4, C6, CP5, CP3, CP1, CPz, CP2, CP4, CP6, P7, P5, P1, Pz, P2, P6, P8, PO7, PO3, POz, PO4, PO8, O1, Oz, O2), 2 linked mastoids and 6 channels of EOG (see below) were collected during a single 3-hour session. All signals were sampled by the same amplifier at 200Hz, with a 70Hz low-pass filter and a 0.5-30Hz digital band-pass filter (BPF). Data was processed using a PC with a 2.67GHz Intel Core2 CPU and 2GB of RAM.

Horizontal (HEOG) and vertical (VEOG) EOG were calculated as follows:

HEOG = HR − HL   (7.2)
VEOG = ((VBR − VTR) + (VBL − VTL))/2   (7.3)

where HR and HL were measured at the right and left outer canthi of the eyes, respectively, and VTR, VBR, VTL and VBL were measured above (denoted by “T”) and below (denoted by “B”) the right eye and above and below the left eye, respectively.
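The following is a minimal sketch of a reference-based adaptive canceller in the spirit of (7.1); it is a generic RLS noise canceller rather than the exact implementation used in this work (which is the one in [20]), and the forgetting factor and initialization values are illustrative assumptions:

```python
import numpy as np

def rls_remove(e, x, lam=0.9999, p0=100.0):
    """Adaptive removal of a reference-correlated component from one EEG
    channel: y[k] = e[k] - w.x[k], with w adapted by RLS at every sample.
    e: (n,) EEG samples; x: (n, m) reference signals, e.g. [HEOG, VEOG]."""
    n, m = x.shape
    w = np.zeros(m)                      # filter coefficients W in (7.1)
    p = np.eye(m) * p0                   # inverse correlation matrix estimate
    y = np.empty(n)
    for k in range(n):
        xk = x[k]
        px = p @ xk
        g = px / (lam + xk @ px)         # RLS gain vector
        w = w + g * (e[k] - w @ xk)      # adapt W to shrink the output power
        p = (p - np.outer(g, xk @ p)) / lam
        y[k] = e[k] - w @ xk             # cleaned sample (Y = E - WX)
    return y
```

A two-column reference such as x = np.column_stack([h_eog, v_eog]) corresponds to the EOG reference input defined below.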
In addition, an eye tracking device [12] was used to collect eye movement and blink data at an average rate of approximately 400 frames per second, simultaneously with the EEG and EOG data. Before the start of each task, it was ensured that the subject’s eye was in the field of view of the eye tracker’s camera. For each frame, the eye tracker provided the following:

1. A grayscale image of the eye of the subject.
2. A timestamp of when the image was captured. This timestamp was based on an internal clock of the eye tracker, which was also queried whenever recording was started by the EEG amplifier. This allowed accurate synchronization between the eye images and the EEG samples.
3. A flag indicating whether a pupil was found.
4. The parameters (center, axes lengths and eccentricity, all measured in pixels) of the ellipse that best fit the pupil, if one was found in the image.
5. A flag indicating whether the subject was performing a fixation at the time.

Figure 7.3: Areas of eye tracker images used for extracting the blink signal (not drawn to scale). R is the pupil radius (average of lengths of ellipse axes).

The above information was used to extract 3 signals (see Fig. 7.2). The x- and y-coordinates of the pupil center were filtered using a 0.5-30Hz BPF to produce 2 signals (HET and VET) that correspond to horizontal and vertical eye movements, respectively. The third signal (BET) corresponds to predicted changes in EEG during blinks, as follows (see Fig. 7.3). The first time the pupil was found and a fixation was detected (i.e., the pupil position was stable), base intensities X1, X2 and X3 were calculated:

Xk = Ik/µ0,   k = 1, 2, 3   (7.4)

where I1, I2 and I3 are image intensities corresponding to S1, S2 and S3 in the eye image as shown in Fig. 7.3, and µ0 is the average intensity of the entire eye image. For each subsequent image, intensities Y1, Y2 and Y3 were calculated:

Yk = Ik/µ,   k = 1, 2, 3   (7.5)

where I1, I2 and I3 are as above, and µ is the average intensity of the entire eye image. If the image was from a fixation, the current pupil position was used as the point P0 in Fig. 7.3. Otherwise, the pupil position from the last fixation was used for P0, since during a blink the actual pupil position is likely inaccurate or not available. For each image, a signal Z was computed from X and Y:

Z = sqrt( Σ_{i=1}^{N1} (Y1 − X1)² + Σ_{i=1}^{N2} (Y2 − X2)² + Σ_{i=1}^{N3} (Y3 − X3)² )   (7.6)

where N1, N2 and N3 are the number of pixels in S1, S2 and S3, respectively. The resulting Z was filtered using a 0.5-30Hz BPF to produce the blink signal BET. Finally, the timestamps from the eye tracker were used to synchronize HET, VET and BET with the EEG. Fig. 7.4 shows examples of EOG and eye tracker signals corresponding to eye movements and blinks.

Figure 7.4: Eye tracking vs EOG during (a) simultaneous horizontal (left) and vertical (up) saccade, and (b) blink. Sample eye images are shown above and EOG and eye tracking signals below, with arrows pointing to the time instant each image was captured. In (a), the x shows the current pupil position, and the dot shows the pupil position at the start of the saccade. In (b), the ellipse shows the area where the intensity value is tracked during the blink.
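A condensed sketch of the intensity-tracking computation in (7.4)-(7.6) follows. It assumes a hypothetical helper region_masks (returning boolean masks for S1, S2 and S3 around a given pupil center P0) and that frame 0 is the first fixation; it is meant only to show the structure of the BSG calculation, not the exact implementation:

```python
import numpy as np

def bsg_signal(images, pupil_centers, region_masks):
    """images: list of 2-D grayscale eye images; pupil_centers: P0 for each
    frame (the last fixation's center when no pupil is found);
    region_masks(p0, shape) -> (mask1, mask2, mask3) for S1, S2, S3."""
    z = np.empty(len(images))
    base = images[0].astype(float)
    # Base intensities X_k from the first fixation frame (eq. 7.4)
    x_regions = [base[m] / base.mean()
                 for m in region_masks(pupil_centers[0], base.shape)]
    for i, img in enumerate(images):
        img = img.astype(float)
        y_regions = [img[m] / img.mean()                      # eq. 7.5
                     for m in region_masks(pupil_centers[i], img.shape)]
        z[i] = np.sqrt(sum(((y - x) ** 2).sum()               # eq. 7.6
                           for y, x in zip(y_regions, x_regions)))
    return z  # band-pass filter (0.5-30 Hz) afterwards to obtain B_ET
```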
7.2.2 Reference Inputs

From the data collected, 3 different reference inputs (X in (7.1)) were constructed for use by each of the algorithms (AF7, AF8 and Fpz are signals from those frontal EEG channels, and HEOG, VEOG, HET, VET and BET are as above):

1. X = [HEOG, VEOG]T (EOG reference).
2. X = [AF7, AF8, Fpz]T (frontal EEG or “fEEG” reference, as in [21]).
3. X = [HET, VET, BET, AF7, AF8, Fpz]T (eye-tracker-based reference or “ET+fEEG”). See below for the reason for combining the eye tracker signals with EEG signals.

Table 7.2: Contribution by specific sources to various measured signals. Rows (specific sources): eye rotation; eyelid movement; OA-related EMG; OA-related EEG; change in skin impedance; amplifier noise; background EEG or ERP; change in corneo-retinal potential; image processing errors; camera noise. Columns (measured signals): EOG electrodes; frontal EEG electrodes; other EEG electrodes; eye tracker. Notes: ERP - event-related potential; a check mark denotes a substantial part of the signal; 1 - source less detected at other EEG electrodes than at frontal EEG or EOG; 2 - source less detected at frontal EEG electrodes than at EOG; 3 - source less detected at EOG electrodes than at frontal EEG.

7.2.3 OA Removal Evaluation

EEG, EOG and eye tracker measurements contain common information about the movement of the eye and eyelid, as well as other information that is not in common [22][23], as shown in Table 7.2. For real, recorded data, it is impossible to separate the components. Therefore, it would be misleading to compare, for example, EOG with an eye tracker signal: using EOG might remove more of the signal, but it may also result in removing parts of the EEG that are unrelated to the OA (“background EEG or ERP”). To address this problem, the comparison was carried out on the average of several trials of each of the 9 types of OA (see above). In this way, any components unrelated to the OA were minimized. Since the average signal is mostly OA-related, removing more signal could consistently be considered a good measure of the performance of the OA removal algorithm, and by extension of the choice of reference input.

The eye tracker has the advantage that it does not contain information about EEG unrelated to the OA, thus minimizing distortion after OA removal. However, since there are OA-related components (e.g., OA-related EMG or EEG) that are picked up by EOG and EEG electrodes but not by the eye tracker, the eye tracker signals were combined with frontal EEG signals. Including the EEG channels helps remove EEG sources that are time-locked with OA, while including the eye tracking signals minimizes the removal of non-OA related components. Initial attempts to use just the eye tracker signals HET, VET and BET resulted in poor OA removal by both algorithms, thus confirming the above.

OA removal methods are normally evaluated using simulated data. Artifact-free EEG and an artifact signal are combined artificially and processed using the OA removal algorithm (see Fig. 7.1). Metrics such as the signal to noise ratio of the output (Y) can be compared to the artifact-free EEG (VB). For real EEG, however, VB, VN and VA are unknown, so performance on real data is typically reported subjectively [3, 24, 25], often based on visual inspection of the resulting waveforms. As shown previously [26], the “R” metric [19] can be used to measure the amount of OA removed:

R = Σ_{k=1}^{N} (E(k) − Y(k))² / Σ_{k=1}^{N} Y(k)²   (7.7)

where N is the number of samples. For each subject, each algorithm was applied to every trial of each type of OA at each EEG channel using each of the 3 reference inputs (see Section 7.2.2). The ensemble averages of the original and OA-removed trials were used for E and Y, respectively, in (7.7) to calculate R in each case.
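For reference, (7.7) amounts to the following computation on the ensemble-averaged trials (a direct transcription, with e_avg and y_avg as the averaged original and artifact-removed trials):

```python
import numpy as np

def r_metric(e_avg, y_avg):
    """R in (7.7): power removed by the algorithm relative to the power
    remaining in the cleaned signal; larger R means more OA was removed."""
    return np.sum((e_avg - y_avg) ** 2) / np.sum(y_avg ** 2)
```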
7.3 Results

Statistical tests (t-tests) were performed for each combination of (i) 52 electrodes, (ii) 2 algorithms (RLS and H∞) and (iii) 9 OA types to compare the R value using the ET+fEEG reference versus the EOG reference in each case. That is, for each set of variables (electrode location, OA removal algorithm and OA type), the significance of the difference in the value of R between the two reference signals used by the given OA removal algorithm was determined using a pairwise t-test on the data from the 4 subjects. The difference between using ET+fEEG versus fEEG was similarly tested. Out of the resulting set of pairwise t-tests, the combinations that showed a significant difference between ET+fEEG and EOG and/or fEEG are shown in Table 7.3 (in all other cases, no significant difference (p < .05) was found between using the ET+fEEG reference and either the EOG or fEEG references). In each row, the last two columns show the significance of the comparison between the R values of ET+fEEG vs. (a) EOG and (b) fEEG.

Fig. 7.5 shows an example of the above graphically for each algorithm. Bar graphs showing mean R values at each electrode are shown, with a box indicating that the difference is significant. Overall, more OA is removed at frontal electrodes, and the H∞ algorithm removes more OA than RLS, both as expected. In addition, Fig. 7.6 shows sample plots of the EEG before and after OA removal using both algorithms with each of the three reference inputs. In each case, both sample individual trials and the average of several trials are shown.

For the RLS algorithm at Fp1 during small right saccades (SR), using the ET+fEEG reference produces significantly (p < .05) better results (R = 27.9 in the first line of the top portion of Table 7.3) than the EOG reference (R = 1.6). This is confirmed in Fig. 7.6(a), which shows that, both in single trials and in the ensemble average, the ET+fEEG reference removes more signal during the OA.

Figure 7.5: Logarithm of mean R values for (a) RLS and (b) H∞ algorithms during small upward saccades. Electrodes where the difference between means is significant are highlighted with boxes. Amplitudes are scaled the same in both (a) and (b) to show the relative performance of the algorithms at each electrode.

Similarly, for the H∞ algorithm at Fp2 during small upward saccades (SU), using the ET+fEEG reference is significantly (p < .05) better (R = 648 in the first line of the bottom portion of Table 7.3) than the EOG reference (R = 46.8), which is confirmed in Fig. 7.6(b). Finally, Tables 7.4 and 7.5 show the R value (how much OA is removed) for each of the two algorithms. In each row, R is shown separately for horizontal and vertical saccades and blinks, using each of the reference inputs. In this case, the average R values for small and large amplitudes were used.

Table 7.3: Mean R values for each electrode, algorithm, OA type, and reference input combination where the R value for ET+fEEG was significantly different than EOG and/or fEEG. The columns marked “vs. EOG” and “vs. fEEG” show the results of the pairwise t-test comparison of the ET+fEEG reference with EOG and fEEG, respectively. “S: Higher R”: difference in means is significant (p < .05) and ET+fEEG removes more OA. “S: Lower R”: difference in means is significant and ET+fEEG removes less OA. “N”: difference between mean R of ET+fEEG and the other reference is not significant.

Channel  Algorithm  OA type  EOG    fEEG    ET+fEEG  vs. EOG      vs. fEEG
Fp1      RLS        SR       1.6    15.7    27.9     S: Higher R  N
Fp1      RLS        LR       1.8    151.5   210.3    S: Higher R  N
Fp1      RLS        LU       58.6   572.3   1002.5   S: Higher R  N
Fp2      RLS        SU       16.5   401.9   385.7    S: Higher R  N
Fp2      RLS        LU       83.6   1101.3  1742.8   S: Higher R  N
Fp2      RLS        LD       94.7   2158.3  2157.5   S: Higher R  N
AF3      RLS        SL       0.5    3.8     4.8      S: Higher R  N
AFz      RLS        SR       0.4    7.1     9        S: Higher R  N
F1       RLS        SR       0.2    2.8     3.7      S: Higher R  N
Fz       RLS        SR       0.3    2.5     3.4      S: Higher R  N
F3       RLS        SL       0.6    1.4     1.8      S: Higher R  N
FC1      RLS        SR       0.2    1       1.4      S: Higher R  N
FCz      RLS        SR       0.4    1       1.6      S: Higher R  N
C1       RLS        SR       0.1    0.4     0.6      S: Higher R  N
Cz       RLS        SR       0.2    0.5     0.6      S: Higher R  N
Fp2      H∞         SU       46.8   470.2   648      S: Higher R  N
CP3      H∞         LL       139.3  6.3     13.1     N            S: Higher R
CP3      H∞         LR       205.1  10.5    27.2     N            S: Higher R
P7       H∞         LL       129.5  7.3     13.9     N            S: Higher R
P7       H∞         LR       114.5  9.5     21.5     N            S: Higher R
O1       H∞         SU       1.3    0.6     1.2      N            S: Higher R
P2       H∞         SR       5      0.7     1.8      S: Lower R   N
PO3      H∞         SR       4.3    1.1     2.1      S: Lower R   N
PO4      H∞         SR       4.1    0.6     1.5      S: Lower R   N
PO8      H∞         SR       3.1    0.5     1.1      S: Lower R   N
Pz       H∞         SR       4.7    0.7     1.8      S: Lower R   N
POz      H∞         SR       4.3    0.7     1.7      S: Lower R   S: Higher R

7.4 Discussion

For the H∞ method, at some electrodes (CP3, P7, O1), for certain OA types (LL, LR, SU), using the eye tracker was shown to remove significantly (p < .05) more OA than using the AF7, AF8 and Fpz electrodes as a reference, as indicated in the right-hand column of the bottom portion of Table 7.3 by “S: Higher R”. For all other cases, using the eye tracker resulted, on average, in as much OA being removed as using the AF7, AF8 and Fpz electrodes as a reference.

Figure 7.6: OA removal using (a) RLS at Fp1 during small right saccade and (b) H∞ at Fp2 during small upward saccade. In each case, 3 sample trials and the average of several trials are shown. The original EEG, and the corresponding artifact-removed signal using EOG, EEG, and eye tracker combined with EEG signals are plotted.
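The per-combination significance testing described above can be reproduced with a standard paired t-test; the R values below are invented placeholders for the four subjects, shown only to illustrate the procedure:

```python
import numpy as np
from scipy.stats import ttest_rel

r_et_feeg = np.array([27.5, 29.0, 26.1, 29.0])  # hypothetical per-subject R
r_eog = np.array([1.5, 1.8, 1.4, 1.7])          # hypothetical per-subject R

t_stat, p_value = ttest_rel(r_et_feeg, r_eog)   # pairwise (paired) t-test
if p_value < 0.05:
    print("significant difference between ET+fEEG and EOG references")
```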
Using the ET+fEEG reference (which relies on the BSG output to contain information about blinks) resulted, on average, in as much OA being removed as using EOG during blinks of different amplitudes for both the RLS and H∞ algorithms. Therefore, it can be concluded that the BSG algorithm generated a signal that was as effective a reference as EOG for both OA removal algorithms during blinks.  7.5  Conclusions  A novel approach was presented for using an eye tracker to replace EOG for any OA removal method that requires an EOG reference. The performance on real EEG of the RLS and H∞ algorithms using (i) EOG, (ii) frontal EEG and (ii) an eye tracker with frontal EEG as reference inputs was compared for different eye movements and blinks of different amplitudes at 52 electrodes spread across the entire scalp. In addition, this was the first work using an eye tracker for online OA removal that was able to successfully remove the effects of both eye movements and blinks. This was possible because of a novel online algorithm (BSG) for extracting the time course of a blink from eye tracker images. The results confirmed that the BSG algorithm can be used as part of an effective eye tracker-based OA removal method. For the RLS method, using the eye tracker was found overall to be better than using EOG and as good as using frontal EEG channels at removing OA over the range of OA types and electrodes investigated. For the time-varying H∞ algorithm, using the eye tracker was found overall to be better than using frontal EEG channels at removing OA over the range of OA types and electrodes investigated. Compared to EOG, the eye tracker reference removed significantly more OA at Fp2 (eye tracker removed almost 14 times as much) during small upward saccades and less OA at P2, PO3, PO4, PO8, Pz and POz during small right saccades (EOG removed 90  less than 3 times as much at each electrode). If one looks at the RLS algorithm alone, it may be argued that since using the eye tracker does not provide significant performance improvement over using frontal EEG channels, there is no benefit to adding the eye tracker. However, the H∞ algorithm has been previously shown [26] to perform better than the RLS algorithm for OA removal. This finding was consistent with the results in the present work, which also showed that the H∞ algorithm significantly (p<.05) benefits from using the eye tracker instead of frontal EEG as a reference input. Further, the only electrodes at which the EOG reference removed more OA than using the eye tracker reference (and then only during small right saccades) are at posterior locations (P2, PO3, PO4, PO8, Pz and POz in Table 7.3) with very low power during the OA. Hence, the superiority of using EOG at those electrodes for one type of OA may not be of practical importance for many applications. In contrast, the superiority of using the eye tracker reference at Fp2 may be of greater practical benefit. Further, eliminating the need for EOG electrodes attached to the face is critical for practical daily applications. Finally, the eye tracker can be used in applications where EEG and eye tracking are combined for other purposes. 
The use of the eye tracker and the novel BSG algorithm provides the means for tracking point-of-gaze and blink dynamics simultaneously with EEG data collection and processing, which is often desired or required in clinical studies and in a variety of human computer interface applications such as neuromarketing, brain-computer interfaces, and pilot and driver drowsiness monitoring. The last two applications could particularly benefit from the BSG algorithm, as it can be used to measure blink frequency and speed as an indicator of driver or pilot alertness.
Table 7.4: Mean R-values for RLS algorithm.

          Horizontal saccade        Vertical saccade          Blink
Channel   EOG     fEEG    ET+fEEG   EOG    fEEG    ET+fEEG    EOG     fEEG     ET+fEEG
Fp1       1.7     73.2    97.1      45.5   486.4   696.2      233.5   1235.7   1373.4
Fp2       6.9     177.4   260.5     64.9   1220.5  1428.6     465     3205.6   2613.2
AF3       2.8     22.3    24.8      47.6   216     197.5      205.2   312.2    310.6
AFz       2.3     12.4    14.9      70.5   160.6   175        203     367.3    302
AF4       13.1    73.5    89.2      66.7   155     165.5      203.5   345.6    296
F7        73.8    146.5   258.4     30.4   64.2    83.7       157     218.4    284.8
F5        32.8    110.5   145.4     44.7   87.2    85.2       129.7   183.4    186.5
F3        6.9     19.7    19        31     53.6    49.5       73.3    82.5     74.9
F1        0.9     2.3     2.9       44.5   38.9    53.8       58      56.6     46.9
Fz        3.8     3.8     4.9       40.9   30.8    39         50.8    48.6     40.8
F2        11.7    9.5     11.3      35.1   26      31.5       49.8    47.3     42.4
F4        24.2    26.5    29.2      23.2   30.2    47.3       69.8    75.8     66.2
F6        64.8    95.6    104.1     40.7   41.5    56.9       96.2    132.1    131.8
F8        116.3   107.1   166.6     24.2   29.2    43.7       109.7   175.5    182.4
FT7       73.9    104.4   136.1     34.7   45.8    34.7       35.2    43.6     56.5
FC5       29.7    41.6    42.7      67.5   45.7    49.3       49.2    53.1     60.3
FC3       4.6     5.5     6.4       38.1   29.3    37.7       37.1    36       33.5
FC1       0.9     1.3     1.6       19.7   14.6    20.5       26.1    20       18.9
FCz       2.6     2.2     2.8       15.8   12.2    14         19.9    15       13.4
FC2       9       5.3     6.7       15.2   12.6    15.5       23.3    17.3     16.5
FC4       30.8    18      20.8      17.2   15.3    19.7       32.9    26       26.3
FC6       65      43.9    49.7      21.7   18.3    21.5       26.9    31.8     35.2
FT8       115.4   77.8    104.2     20.5   19.5    22.7       36.3    35.4     42.8
C5        9.2     8.7     11.1      24     17.5    27.7       29.9    33.5     31.9
C3        1.4     1.7     2.1       15.4   13      18.2       23.1    19.4     17.8
C1        0.9     1       1.3       11.2   10.4    13.4       15      11.5     10.1
Cz        2.4     1.8     2.2       8.7    8.8     10.3       12.2    9        8.7
C2        6.3     3.8     4.7       9.1    9.6     11.9       17.2    11.8     11
C4        15.6    8.8     10.5      10.3   11      12.7       17.7    14.3     14.6
C6        39.6    20.4    24.4      12.4   13.7    13.1       19.6    17.8     20
CP5       2.5     2.8     3.1       14.1   7.4     16.3       18.5    19.3     18.6
CP3       1.1     1.3     1.5       11.4   7.8     12         10.9    16.6     11.9
CP1       1.1     1.1     1.3       7.2    7.2     10.3       12.3    10.6     9.4
CPz       2       1.4     1.8       6      6.8     8.8        8.9     7.8      6.8
CP2       4.2     2.5     3.2       6.5    7.1     9.5        9.9     8.2      7.5
CP4       8.7     5       6.2       6.7    7.4     9.9        9.2     9        8.7
CP6       19.3    10.2    12.6      8      8.6     10         10.3    10.2     10.5
P7        1.8     1.8     2.2       5.8    1.8     6.1        6.6     6.1      6.3
P5        1.5     1.3     1.6       7.6    2.6     7.6        8.1     7.8      7.8
P1        1.5     1.2     1.4       5.3    3.9     6.8        6.8     6.7      5.6
Pz        1.8     1.3     1.7       4.3    3.5     5.4        4.8     5        4.5
P2        3.1     1.9     2.5       4.4    3.9     5.9        4.6     4.8      4.5
P6        11.9    6.9     8.4       8.4    6.8     10.8       9.3     10.7     11.5
P8        17.5    9.2     12        5.5    2.9     5.9        4.2     4.4      5.2
PO7       1.7     1.2     1.8       4.2    1.2     4.6        2.6     2.6      2.5
PO3       2.4     1.4     2         3.5    1.8     3.6        3.5     3.7      3.4
POz       2.8     1.8     2.5       3.9    2.3     4.3        2       2.2      2.3
PO4       3.7     2.2     3.2       3.6    2.2     4.1        1       1.1      1
PO8       9.3     5       7.1       4.2    2.2     4.7        1.4     1.5      1.5
O1        3.2     1.7     3         4      1.4     4.5        1.5     1.6      1.6
Oz        5.3     3       4.7       3.2    1.3     3.7        1       1.1      1
O2        7.3     4.5     6.4       3.4    1.5     3.7        1       1.2      1.1

Table 7.5: Mean R-values for H∞ algorithm.

          Horizontal saccade          Vertical saccade           Blink
Channel   EOG      fEEG    ET+fEEG    EOG     fEEG    ET+fEEG    EOG     fEEG     ET+fEEG
Fp1       2653.9   390.3   654.6      3227.9  2749.1  3510.6     1404.3  9472.8   10603.6
Fp2       3669.2   1106.5  2338       3626.1  4378.6  4847.7     2258.4  15951.6  11683.5
AF3       1102.3   165.8   273.5      2365.1  1076.1  1513.4     1362    3069.3   1940
AFz       2088.5   86.4    187.2      4250.1  1502.8  1814.3     1223.3  3444.6   3065.9
AF4       5387.1   498.4   993.5      2403.8  979.8   1409.3     1078.9  2818.7   2990.5
F7        8918.4   1052.5  2406.4     1326    533.6   655.4      1198.8  1560.3   1332.2
F5        6554.9   537.5   1321       1858.3  739.2   944.7      1130.1  1879.5   1196.6
F3        1653.1   87.3    163.6      1043.4  193.2   231.6      642.3   663.7    632.6
F1        315.8    10.3    21.9       1664.7  311.4   340.6      612.1   618.2    490.4
Fz        2046.2   28.9    64.6       1528    272.3   344        481.8   620.3    485.5
F2        4562.2   86.5    193.8      1425.8  216.8   274.2      363     526.5    513.1
F4        5354.6   215.6   454.7      904     161.7   218.1      480.8   648.8    636.8
F6        11368.3  612.3   1515       844.4   275     388.9      530.4   816      838.5
F8        19501.7  969.7   3187.4     663.7   203.1   429.9      689.6   926.7    953.7
FT7       8391.6   641.8   1305       752.4   232.5   348.3      189.3   207.2    195.7
FC5       4498.4   316.6   620        1840.6  274.4   326.3      636.1   458.9    349
FC3       1223.8   40.5    87.5       1403.2  157.2   181.2      402.8   364.7    315.1
FC1       231.3    8       17.3       868.2   95.4    116.7      177.2   267.3    207.4
FCz       911.9    19.6    44.7       719.8   80.1    106.1      114.3   185.2    149.5
FC2       2820.4   56.6    128.1      691.3   85.8    114        132.1   176.6    170.3
FC4       6752.7   182.1   409.7      701.4   90      124.8      164.8   305.7    284
FC6       10499.9  425.2   1001.4     654.8   110.3   150.5      184.7   357.8    357.2
FT8       11226.1  749.4   1805.4     447.5   97.1    138.9      289.4   195.8    183
C5        1272     66.9    151.7      573.9   84.1    116.3      168.9   129.1    127.3
C3        197.8    9.3     21         520.2   61.9    83.7       161.1   145.7    131.8
C1        285.4    8.9     20.1       579.5   52.4    74.8       85.7    75       67.5
Cz        777.2    20      45.3       544     46      61.8       54.8    65.3     66.4
C2        2043.7   46.4    101.1      598.6   54.4    71.6       67.6    82.3     92.6
C4        3932.6   106.6   229        623.6   60.2    72.1       94.4    92.2     104
C6        5926.9   220.4   472.3      503.7   66.1    78         85.2    119.6    109.1
CP5       162.8    11.7    24.5       306.4   51.3    72.6       105.3   62       55.8
CP3       117.6    6.6     14.9       315     45.7    65.7       38.2    111      124.9
CP1       288.3    9.9     22.6       370.2   40.1    52.7       42.1    35.6     38.9
CPz       613.8    18.1    39.9       443.4   40      51         31      24.2     26.8
CP2       1232.6   35.7    75.6       579.2   45.3    55.1       34.4    26.2     28.3
CP4       2449.6   69.3    142.3      597.4   50.9    56.9       35.7    28.2     29.7
CP6       3840.6   129.2   266.8      520.8   58.2    51         35.9    35.6     35.5
P7        83.8     7.5     14.4       227.2   34      50.6       51.6    17.3     15.4
P5        117      7.3     15         222.1   35.6    50.8       36.9    23.7     20.4
P1        349      13      28.3       276.3   35.7    48.3       18.8    15.9     15.9
Pz        575.8    19.3    39.7       267.4   35.8    43.9       12.9    10.7     11.2
P2        992.7    30.4    62.1       326.8   39.3    46.7       14.8    11.3     11.2
P6        2273.2   98.9    184        721.9   80.1    93.2       40.4    31.8     26.5
P8        2877.4   128.4   258.1      295.8   46.4    37.2       15.2    13.1     11.7
PO7       291.2    13.3    30         218.7   28.9    41.9       17.4    8.4      6.9
PO3       488.4    18.3    42.8       244.5   29.5    44.7       16.5    10.9     8.8
POz       809.4    29.8    61.5       204.6   31.4    43.9       9.5     6.8      5.9
PO4       1411.7   44.3    90.9       212.7   32.1    39         4.2     3.4      3.1
PO8       1399.8   66.5    141.5      155.4   26.5    30.7       6.2     4.6      4.1
O1        732.7    27.9    66         147.7   21.1    31.7       8.4     5.1      4.5
Oz        1577.2   54.2    121.9      153.3   24.1    42.1       5.2     3.4      3.1
O2        1331.3   66.5    129.7      131.1   24.5    34.4       4.6     3.2      3
References

[1] G. Gomez-Herrero, W. D. Clercq, H. Anwar, O. Kara, K. Egiazarian, S. V. Huffel, and W. V. Paesschen, “Automatic removal of ocular artifacts in the EEG without an EOG reference channel,” in Proceedings of the 7th Nordic Signal Processing Symposium (NORSIG 2006), 2006, pp. 130–133.

[2] J. J. M. Kierkels, J. Riani, J. W. M. Bergmans, and G. J. M. van Boxtel, “Using an eye tracker for accurate eye movement artifact correction,” IEEE Transactions on Biomedical Engineering, vol. 54, no. 7, pp. 1256–1267, 2007.

[3] K. H. Ting, P. C. W. Fung, C. Q. Chang, and F. H. Y. Chan, “Automatic correction of artifact from single-trial event-related potentials by blind source separation using second order statistics only,” Medical Engineering & Physics, vol. 28, no. 8, pp. 780–794, 2006.

[4] G. L. Wallstrom, R. E. Kass, A. Miller, J. F. Cohn, and N. A. Fox, “Automatic correction of ocular artifacts in the EEG: a comparison of regression-based and component-based methods,” International Journal of Psychophysiology, vol. 53, no. 2, pp. 105–119, 2004.

[5] Y. Li, Z. Ma, W. Lu, and Y. Li, “Automatic removal of the eye blink artifact from EEG using an ICA-based template matching approach,” Physiological Measurement, vol. 27, no. 4, pp. 425–436, 2006.

[6] C. A. Joyce, I. F. Gorodnitsky, and M. Kutas, “Automatic removal of eye movement and blink artifacts from EEG data using blind component separation,” Psychophysiology, vol. 41, no. 2, pp. 313–325, 2004.

[7] P. He, G. Wilson, and C. Russell, “Removal of ocular artifacts from electro-encephalogram by adaptive filtering,” Medical and Biological Engineering and Computing, vol. 42, no. 3, pp. 407–412, 2004.

[8] T. Liu and D. Yao, “Removal of the ocular artifacts from EEG data using a cascaded spatio-temporal processing,” Computer Methods and Programs in Biomedicine, vol. 83, no. 2, pp. 95–103, 2006.

[9] S. Puthusserypady and T. Ratnarajah, “H∞ adaptive filters for eye blink artifact minimization from electroencephalogram,” IEEE Signal Processing Letters, vol. 12, no. 12, pp. 816–819, 2005.

[10] A. Schlogl, C. Keinrath, D. Zimmermann, R. Scherer, R. Leeb, and G. Pfurtscheller, “A fully automated correction method of EOG artifacts in EEG recordings,” Clinical Neurophysiology, vol. 118, no. 1, pp. 98–104, 2007.

[11] D. Talsma, “Auto-adaptive averaging: Detecting artifacts in event-related potential data using a fully automated procedure,” Psychophysiology, vol. 45, no. 2, pp. 216–228, 2008.

[12] C. Hennessey, B. Noureddin, and P. Lawrence, “Fixation precision in high-speed noncontact eye-gaze tracking,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 38, no. 2, pp. 289–298, 2008.

[13] C. T. Lovelace, R. Derakhshani, S. P. K. Tankasala, and D. L. Filion, “Classification of startle eyeblink metrics using neural networks,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN 2009), 2009, pp. 1908–1914.

[14] S. Sirohey, A. Rosenfeld, and Z. Duric, “A method of detecting and tracking irises and eyelids in video,” Pattern Recognition, vol. 35, no. 6, pp. 1389–1401, 2002.

[15] Y.-L. Tian, T. Kanade, and J. F. Cohn, “Dual-state parametric eye tracking,” in Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition, 2000, pp. 110–115.

[16] M. Pardas, “Extraction and tracking of the eyelids,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2000), vol. 6, 2000, pp. 2357–2360.

[17] H. Tan and Y.-J. Zhang, “Detecting eye blink states by tracking iris and eyelids,” Pattern Recognition Letters, vol. 27, no. 6, pp. 667–675, 2006.

[18] B. Noureddin, P. D. Lawrence, and G. E. Birch, “Time-frequency analysis of eye blinks and saccades in EOG for EEG artifact removal,” in Proceedings of the 3rd International IEEE/EMBS Conference on Neural Engineering (CNE ’07), 2007, pp. 564–567.

[19] S. Puthusserypady and T. Ratnarajah, “Robust adaptive techniques for minimization of EOG artefacts from EEG signals,” Signal Processing, vol. 86, no. 9, pp. 2351–2363, 2006.

[20] A. Delorme and S. Makeig, “EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis,” Journal of Neuroscience Methods, vol. 134, no. 1, pp. 9–21, 2004.

[21] B. Noureddin, P. D. Lawrence, and G. E. Birch, “Effects of task and EEG-based reference signal on performance of on-line ocular artifact removal from real EEG,” in Proceedings of the 4th International IEEE/EMBS Conference on Neural Engineering (NER ’09), 2009, pp. 614–617.

[22] J. Malmivuo and R. Plonsey, Bioelectromagnetism - Principles and Applications of Bioelectric and Biomagnetic Fields. Oxford University Press, 1995.

[23] E. Niedermeyer and F. L. da Silva, Electroencephalography: Basic Principles, Clinical Applications, and Related Fields, 4th ed. Baltimore, Maryland, USA: Lippincott Williams & Wilkins, 1998.

[24] J. J. M. Kierkels, G. J. M. van Boxtel, and L. L. M. Vogten, “A model-based objective evaluation of eye movement correction in EEG recordings,” IEEE Transactions on Biomedical Engineering, vol. 53, no. 2, pp. 246–253, 2006.

[25] N. Ille, P. Berg, and M. Scherg, “Artifact correction of the ongoing EEG using spatial filters based on artifact and brain signal topographies,” Journal of Clinical Neurophysiology, vol. 19, no. 2, pp. 113–124, 2002.

[26] B. Noureddin, P. D. Lawrence, and G. E. Birch, “Quantitative evaluation of ocular artifact removal methods based on real and estimated EOG signals,” in Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2008), 2008, pp. 5041–5044.

Chapter 8

Dipole modeling for low distortion, automated online ocular artifact removal from EEG without EOG7

7 A version of this chapter has been submitted for publication. Noureddin, B., Lawrence, P.D., Birch, G.E., 2010. Dipole modeling for low distortion, automated online ocular artifact removal from EEG without EOG.

8.1 Introduction

Removal of eye movements and blinks from the electroencephalogram (EEG) is an ongoing research problem [1–4]. Current approaches to automated ocular artifact (OA) detection and removal [4–9] use electro-oculogram (EOG) signals as a reference.
A few methods remove OAs without a reference EOG, but they are often not fully automated (e.g., they need manual selection of either a threshold [10] or of components to reject [11]), are not able to handle all sources of OA (e.g., only blinks [10]), or are not suitable for real-time use (e.g., they require large amounts of data [1]).

A novel OA removal method is presented that uses a volume conduction model of the head and a source dipole model of the eyes, and is suitable for real-world, real-time, online applications (e.g., a brain computer interface), especially where only scalp EEG measurements are available during normal operation (e.g., no EOG electrodes are attached to the face). The authors are not aware of any existing online, fully automated method that successfully removes OAs without using EOG electrodes in the real-time algorithm. The following design criteria were used:

1. Eye movements and blinks: It should remove eye movement and blink effects. Some methods (e.g., [2]) only handle certain eye movements or do not handle blinks. The proposed method can handle all eye movements and blinks.

2. Low EEG distortion: It should minimize distortion of EEG components unrelated to OAs. A previously introduced metric, “ε” [12], can be used on real EEG to measure the likelihood that a given method distorts non-OA components. It measures how often the power of the removed OA component exceeds the power of the original EEG, thus introducing additional information into (or distorting) the residual EEG. It was shown [9][12] that other methods often introduce considerable amounts of such distortion. The present chapter confirms this finding for four existing methods, and further shows that the proposed method does not distort the EEG in this way.

3. Automated and online: It should be online, computationally efficient, and should not require manual intervention (e.g., selection of a threshold or of components to reject). Blind source separation (BSS) [3] and independent component analysis (ICA) [10] require too much data for practical online use. For example, for a stable decomposition, ICA and BSS methods require at least 20N² (N is the number of channels) data points [13]: at a 200Hz sampling rate, a 5 minute window of data for 55 channels, or a 14 second window for 12 channels. Since EEG is not stationary, especially beyond 10 seconds [14], ICA and BSS are not practical for online use. Such methods are also computationally very intensive, and although there have been efforts to adapt them for online use [15], they still require manual intervention. The latter point also applies to principal component analysis (PCA) and canonical correlation analysis (CCA): it is difficult to automatically identify components that contain only OAs. The method proposed in this chapter was designed to operate on a sample-by-sample basis, be computationally very efficient, and not require any manual intervention.

4. Minimal calibration: Other attempts to use head models for OA removal (even offline) use a spherical shell (SS), boundary element (BE) or finite element (FE) model, usually based on subject-specific magnetic resonance images (MRI). The proposed method can use any head model, and does not need subject-specific MRI.
In this chapter, the performance is shown using both an SS model [16] and a BE model [17] based on a normative MRI database [13].

5. No EOG required: For many applications, it is highly desirable (sometimes a requirement) not to place electrodes on the subject’s face. Other online methods (e.g., regression and adaptive filtering) require an EOG reference.

In the present work, a new OA removal method using different models for the head and the source dipoles is examined. Its performance on real EEG is compared with four existing methods: two adaptive filters [5][7], PCA [18][19] and CCA [20]. To measure performance on real EEG, as in [12], the “R” metric [21] is used to measure the amount of artifact removed, and the “ε” metric [12] to measure the likelihood of distorting non-OA components. The metrics are shown separately for periods with eye movements and periods without eye movements, as this has been shown [9][12] to significantly affect performance. The average performance (R and ε) on real EEG is reported, along with the effect of “mental states” (eye movement, event-related potential or ERP, and neither) and “tasks” (e.g., looking at a picture, following a dot on a computer screen) on the performance of each algorithm, providing a complete analysis under different experimental conditions.

The contribution of this chapter is a model-based, real-time method for removing OAs from EEG consistently over a range of mental states and tasks, requiring no EOG electrodes during data collection. The method requires a short per-subject training sequence (during which EOG electrodes are used) prior to normal real-time EEG data processing.

8.2 Methods

The general approach to OA removal is shown in Fig. 8.1(a). Brain electric signals are assumed to conduct to the scalp (VB), combining with ocular signals from eye movements and blinks (VO), other artifact signals (VA), and measurement and ambient noise (VN). Generally, VA is assumed to be zero, and VN is modeled as white Gaussian noise. Further, these EEG sources are assumed to combine linearly to form the measured scalp potentials. The ocular artifact removal method TOAR is a transformation that aims to remove VO from E, resulting in Y (ideally Y = VB). TOAR often uses some combination of the statistical, spatial and temporal content of E, and can also use a priori information about physiology to calculate Y. For example, in component analysis techniques (e.g., PCA and CCA), E is separated into individual components; those resembling OA are removed, and the remainder re-assembled into Y. Adaptive filters (e.g., recursive least squares (RLS) and time-varying H∞) treat the EOG as a reference signal to be extracted from E, with Y being the difference.

The proposed method uses two biophysical models: (i) the OA sources (equivalent current dipoles representing changes in eye/eyelid positions; see Section 8.2.3) and (ii) electric field propagation from each source to each electrode on the head surface (see Section 8.2.2). Fig. 8.1(b) shows the method’s operation and model parameter calibration.

For OA, the eye (source) locations are fixed for each subject. Dipole locations best explaining eye movements and blinks can be computed using source dipole analysis [14][22][23]. Therefore, using calibration, eye dipole locations that cause VO can be pre-computed [24]. For a given set of dipole locations, a unique solution exists to the forward problem of estimating VO. Thus, TOAR consists of subtracting the estimated VO from E.
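Under the linear forward model, this subtraction step can be sketched as below (an illustration under stated assumptions: L is a precomputed lead field mapping the ocular dipole moments to the electrodes, and the projection is applied only to samples flagged as containing an OA):

```python
import numpy as np

def subtract_ocular(e, L):
    """e: one OA-flagged EEG sample (n_electrodes,); L: lead field matrix
    (n_electrodes x n_moments, e.g. 6 dipole locations x 3 orientations).
    Returns Y = E - L L# E, i.e. the sample with the modeled VO removed."""
    moments = np.linalg.pinv(L) @ e   # least-squares ocular dipole moments
    return e - L @ moments            # subtract the estimated VO
```

Because L is fixed after calibration, the projection L L# can be precomputed once, so normal operation costs a single matrix-vector product per sample.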
The forward solution can use a set of concentric spherical shells for the brain, skull and scalp (and optionally the cerebrospinal fluid), each of which is approximated as a homogeneous, isotropic volume conductor. It has also been shown that using a boundary element or finite element model of each of the volumes (e.g., brain, skull, scalp) based on realistic measurements of each volume (e.g., from MRI), instead of approximating them with spherical shells, yields a more accurate forward solution, since none of the volumes is actually spherical. In either case, measured, estimated or normative conductivities (and radii, in the case of concentric shells) are defined for each volume. This chapter describes using both an SS and a BE head model built from normative MRI, thus avoiding subject-specific MRI measurements while still comparing the relative gains of using a realistic head model.

8.2.1 Data Acquisition

To evaluate the method, 55 EEG channels, 2 linked mastoids and 7 channels of EOG (see Fig. 8.2) were collected from 4 subjects, sitting approximately 50cm in front of a computer monitor. All signals were sampled by the same amplifier at 200Hz, with a 70Hz low-pass filter (LPF) and a 30Hz digital LPF. Three sessions of data were collected on different days. Each subject was instructed to perform 12 different tasks in each session (see Table 8.1). Two Polhemus FASTRAK® (POL) sensors (one fixed and one on the subject’s fingers) were used to measure the 3-D time course of each hand extension (the POL does not interfere with EEG measurement [25]). EEG and EOG were thus collected over 3 sessions on different days, during periods both with and without saccades, blinks and an ERP (hand extension).

Table 8.1: Data collection tasks (HE = hand extension): the timing of each HE was determined using a physical switch activated when the subject’s hand extends. Saccades/blinks were allowed and expected for the last two tasks. Numbers in brackets show how many times each task was performed.
1. Stare at dot in center of computer monitor (45 secs) [2]
2. Same as task 1, but also perform HE at own pace [4]
3. Stare at sequentially highlighted numbers in 4x4 grid on monitor for 2 secs/number [3]
4. Same as task 3, but also blink once in each 2-sec interval [3]
5. Same as task 3, but also perform HE in each 2-sec interval [5]
6. Same as task 4, but also perform HE in each 2-sec interval [5]
7. Stare at dot shown on each corner (clockwise order) of monitor for 2 secs [3]
8. Stare at dot shown on each corner (counterclockwise order) of monitor for 2 secs [3]
9. Same as task 7, but also perform HE in each 2-sec interval [5]
10. Same as task 8, but also perform HE in each 2-sec interval [5]
11. Look at picture of clouds for 45 secs [3]
12. Count hidden faces in painting shown on screen for 45 secs [3]
Figure 8.1: OA removal. (a) General approach to OA removal: VB is the signal component caused by brain sources, VO by ocular sources, VA by other “artifact” sources (e.g., EMG, EKG, etc.), and VN by noise sources (e.g., measurement noise). E is the measured scalp potential, and Y is the output of the OA removal algorithm, representing the EEG with OA removed. TOAR is a transformation (usually linear) that removes the OA. (b) Flowchart showing how calibration data was used by the proposed method to compute the model parameters needed during normal EEG data collection: calibration finds the eye dipole locations and computes the lead field matrix L (mapping all 6 dipole locations, in the x, y and z directions, to the electrode locations using the chosen head model) from scaled averages of thresholded EOG trials (|HEOG| > 10µV for horizontal eye movements, |VEOG| > 15µV for vertical eye movements, VEOG > 50µV for blinks), together with the mapping w from the frontal EEG channels (AF7, AF8, Fpz) to EOG. During normal operation, eEOG is computed from the EEG using w; whenever |HEOG| > 10µV or |VEOG| > 15µV, the output is Y = E − LL#E, and otherwise Y = E.

Figure 8.2: Placement of EOG electrodes during calibration and for system evaluation. EOG reference signals (with 1-2Hz band-pass filter) were calculated as follows: HEOG = H1 − H2, VEOG1 = V1 − V2, VEOG2 = V3 − V4, VEOG = (VEOG1 + VEOG2)/2. The x-, y- and z-axes are as shown, with the origin at the center of the head.

8.2.2 Head Model

Two head models were compared, using a POL device to measure electrode locations: (i) an SS model (see Appendix B for details) with nominal shell radii of 10cm (scalp), 9.2cm (skull) and 8.8cm (brain) and relative conductivities 1, 0.0125 and 1, respectively, with the measured electrode locations mapped to the outer sphere; and (ii) a BE model using normative MR images [26] with compartments representing the scalp, skull and brain, with fiducial markers Nz (nasion), LPA and RPA (left and right pre-auricular) measured with the POL device and used to warp the electrode locations to the outermost compartment (scalp) surface.

8.2.3 Calibration

Calibration data consisting of trials from tasks 1 and 3 (see Section 8.2.1) were extracted for each type of OA: reference signals HEOG and VEOG were computed (see Fig. 8.2), and each time the amplitude of either exceeded a fixed threshold, the EEG +/-500ms around the next peak was extracted as a trial. Since eye movements and blinks have different amplitudes, trials were scaled so that the amplitudes of the reference signals matched those of the first trial. The same scaling factors were applied to the EEG. This data was used to calculate L and w in Fig. 8.1(b) as follows.
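A rough sketch of this trial extraction and scaling follows (a simplified, hypothetical implementation; the actual peak search and bookkeeping in the original may differ):

```python
import numpy as np

def calibration_trials(ref, eeg, thresh, fs=200):
    """ref: EOG reference signal (n,); eeg: (channels, n). Returns the scaled
    average of +/-500 ms EEG trials around the peak following each threshold
    crossing of |ref|."""
    half = fs // 2                                        # 500 ms at 200 Hz
    trials, amp0, k = [], None, 0
    above = np.flatnonzero(np.abs(ref) > thresh)
    while k < above.size:
        i = above[k]
        j = i + int(np.argmax(np.abs(ref[i:i + half])))   # peak after crossing
        if half <= j < ref.size - half:
            amp = abs(ref[j])
            amp0 = amp0 or amp                            # first-trial amplitude
            trials.append(eeg[:, j - half:j + half] * (amp0 / amp))
        k = int(np.searchsorted(above, j + half))         # skip past this event
    return np.mean(trials, axis=0) if trials else None
```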
To calculate L, a head model (see Section 8.2.2) and dipole locations are required. Equivalent dipole locations (three for each eye, used to model each of horizontal eye movements, vertical eye movements and blinks), best describing the average of the resulting scaled, time-locked EEG calibration trials, were determined using the constrained nonlinear optimization provided in the Fieldtrip (http://fieldtrip.fcdonders.nl) and EEGLAB toolboxes [13]. Dipole locations were constrained as follows (see Fig. 8.2): (i) x-coordinate between those of V3 and H2 (left eye) and V1 and H1 (right eye), (ii) y-coordinate between those of LC and H2 (left eye) and RC and H1 (right eye), and (iii) z-coordinate between those of V3 and V4 (left eye) and V1 and V2 (right eye), where LC and RC are the left and right inner canthi measured using the POL device during calibration.

For dipole location optimization, 3 error measures were compared: relative residual variance (RV), residual variance (VAR), and absolute difference (ABS):

RV = ΣΣ(X − Y)² / ΣΣX²   (8.1)
VAR = ΣΣ(X − Y)²   (8.2)
ABS = ΣΣ|X − Y|   (8.3)

where X is the average of the scaled, time-locked EEG trials, Y is the corresponding potentials predicted by the model, and the sums run over channels and samples. This calibration must be carried out for each subject (per-subject calibration), and was additionally carried out for each session (per-session calibration) to test possible accuracy improvements. Once dipole locations were estimated, they were used on the remaining data in the same session for per-session calibration, and on all data for subsequent sessions for per-subject calibration.

The above calibration trials were also used to calculate w in Fig. 8.1(b). We previously showed [12] that EOG can be estimated from 3 EEG channels:

w = EOG · EEGc#   (8.4)
eEOG = w · EEG   (8.5)

where EOG is the measured calibration signals at the EOG electrodes, EEGc is a matrix whose rows consist of the signals at Fpz, AF7 and AF8 during calibration, EEGc# is its pseudoinverse, and EEG is a matrix whose rows consist of the signals at Fpz, AF7 and AF8 during normal operation. The resulting eEOG (estimated EOG) was used on subsequently recorded data (normal operation) as an OA detector: when HEOG or VEOG (computed using eEOG instead of EOG) indicated, as above, that a given sample was part of an OA, the contribution of the eye dipoles was subtracted from all EEG channels. At all other times the EEG was left untouched (see Fig. 8.1(b)). Without this step, the method would try to model and remove EEG when no actual OA is present. The proposed method thus only used measured EOG briefly during calibration.
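Equations (8.4)-(8.5) are a straightforward least-squares fit; a direct transcription (shapes assumed: rows are channels, columns are samples):

```python
import numpy as np

def fit_w(eog_cal, eeg_cal):
    """w = EOG . EEGc#  (8.4): least-squares mapping from the frontal
    channels (rows of eeg_cal: Fpz, AF7, AF8) to the EOG channels."""
    return eog_cal @ np.linalg.pinv(eeg_cal)

def estimate_eog(w, eeg):
    """eEOG = w . EEG  (8.5): estimated EOG during normal operation."""
    return w @ eeg
```

The resulting eEOG is then thresholded exactly like measured EOG to flag OA samples.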
Table 8.2: R values for the SS head model with dipole locations recalculated every session, using three different error measures: RV, VAR, ABS.

            Subject 1          Subject 2          Subject 3          Subject 4           Average
         RV   VAR   ABS     RV   VAR   ABS     RV   VAR   ABS     RV   VAR   ABS     RV   VAR   ABS
Idle    0.1   0.1   0.1    2.0   2.2   2.8    0.2   0.2   0.2    0.5   0.5   0.5    0.7   0.8   0.9
OA      7.5   8.5   8.0   12.1  12.1  13.4   53.5  58.1  55.5    8.7  10.1   9.4   20.5  22.2  21.6
ERP     0.0   0.0   0.0    1.9   2.2   3.0    0.2   0.2   0.3    0.3   0.3   0.3    0.6   0.7   0.9

Table 8.3: R values for the SS head model with dipole locations calculated in the 1st session, using three different error measures: RV, VAR, ABS.

            Subject 1          Subject 2          Subject 3          Subject 4           Average
         RV   VAR   ABS     RV   VAR   ABS     RV   VAR   ABS     RV   VAR   ABS     RV   VAR   ABS
Idle    0.0   0.1   0.0    1.5   1.2   1.4    0.2   0.2   0.1    0.5   0.5   0.5    0.6   0.5   0.5
OA      3.7   7.0   4.8   10.7   8.9  11.0   61.3  67.1  60.9    9.0   8.6   8.4   21.2  22.9  21.3
ERP     0.0   0.0   0.0    1.5   1.1   1.4    0.0   0.0   0.0    0.4   0.4   0.4    0.5   0.4   0.5

Table 8.4: R values for the BE head model with dipole locations recalculated every session, using three different error measures: RV, VAR, ABS.

            Subject 1          Subject 2          Subject 3          Subject 4           Average
         RV   VAR   ABS     RV   VAR   ABS     RV   VAR   ABS     RV   VAR   ABS     RV   VAR   ABS
Idle    0.1   0.1   0.1    2.0   2.3   5.5    0.2   0.2   0.2    0.6   0.6   0.6    0.7   0.8   1.6
OA     10.6  12.0  12.8   14.7  19.3  32.0   75.7   89   114     9.8  15.0  14.6   27.7  33.9  43.2
ERP     0.0   0.0   0.0    1.9   2.2   7.4    0.3   0.3   0.3    0.3   0.4   0.4    0.6   0.7   2.0

Table 8.5: R values for the BE head model with dipole locations calculated in the 1st session, using three different error measures: RV, VAR, ABS.

            Subject 1          Subject 2          Subject 3          Subject 4           Average
         RV   VAR   ABS     RV   VAR   ABS     RV   VAR   ABS     RV   VAR   ABS     RV   VAR   ABS
Idle    0.1   0.1   0.1    1.2   1.9   3.5    0.2   0.2   0.2    0.5   0.5   0.6    0.5   0.7   1.1
OA     10.1  11.0  11.8   11.7  18.1  26.3   108   123   127    10.1  12.0  12.0   35.0  41.1  44.2
ERP     0.0   0.0   0.0    1.2   1.8   4.0    0.0   0.0   0.0    0.4   0.4   0.4    0.4   0.6   1.1

8.2.4 Metrics

OA removal methods are normally evaluated using simulated data. Artifact-free EEG and an artifact signal are combined artificially and processed using the OA removal algorithm (see Fig. 8.1(b)). Metrics such as the signal-to-noise ratio can then be computed by comparing the output (Y) to the artifact-free EEG (VB). For real EEG, VB, VN and VA are unknown, so performance on real data is usually reported subjectively [27][3][19], often based on visual inspection of the resulting waveforms. It was previously proposed [12] that an “R” metric [21] be used to measure the amount of OA removed, and the “ε” metric [12] to measure the likelihood of distorting non-OA components. These metrics are defined as follows:

$$R = \frac{\sum_{c=1}^{C}\sum_{n=1}^{N}\left(E - Y\right)^2}{\sum_{c=1}^{C}\sum_{n=1}^{N} Y^2} \qquad (8.6)$$

$$R'(n) = \begin{cases} 1 & \text{if}\ \ \sum_{c=1}^{C}\bigl(E(c,n) - Y(c,n)\bigr)^2 \Big/ \sum_{c=1}^{C} E(c,n)^2 > 1 \\ 0 & \text{otherwise} \end{cases}, \qquad \varepsilon = \frac{1}{N}\sum_{n=1}^{N} R'(n) \qquad (8.7)$$

where N is the number of samples and C the number of channels recorded. R and ε were calculated on non-calibration data (i.e., during normal operation) for various head models and other methods (see Section 8.2.5), for periods with OA, with an ERP, and with neither. Periods with ERP were identified as in Section 8.2.1, and periods with OA were identified using measured EOG (in this case, the EOG was not used by the proposed method, but simply to mark periods with OA for evaluation purposes).
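Both metrics are simple to compute from the measured EEG E and the algorithm output Y. A minimal sketch, assuming NumPy arrays of shape (C, N):

```python
import numpy as np

def oa_removal_metrics(E, Y):
    """Compute R (Eq. 8.6) and epsilon (Eq. 8.7) for one data segment."""
    removed = (E - Y) ** 2                      # power of the removed signal
    R = removed.sum() / (Y ** 2).sum()          # Eq. (8.6)
    # R'(n) = 1 where the removed power exceeds the power originally
    # present at that sample; epsilon is the fraction of such samples.
    r_prime = removed.sum(axis=0) > (E ** 2).sum(axis=0)
    eps = r_prime.mean()                        # Eq. (8.7)
    return R, eps
```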
Table 8.6: R values for five algorithms: BMAR (B), RLS (R), H∞ (H), PCA (P) and CCA (C).

             Subject 1                  Subject 2                  Subject 3                  Subject 4                   Average
         B    R    H    P    C      B    R    H    P    C      B    R    H    P    C      B    R    H    P    C      B    R    H    P    C
Idle    0.1  0.3  0.5  5.5  2.8    3.5  1.2  2.8  6.1  3.8    0.2  0.3  0.4  3.6  2.2    0.6  0.3  0.4  4.3  4.3    1.1  0.5  1.0  4.9  3.3
OA     11.8  3.5  5.0 11.7  8.2   26.3  3.6  8.6  8.6  6.7    127 11.4 22.7  6.2  5.6   12.0  2.3  4.4  6.9  4.8   44.2  5.2 10.2  8.4  6.3
ERP     0.0  0.4  0.5 14.4  5.6    4.0  1.1  2.5 13.7  7.7    0.0  0.2  0.6 11.8  5.7    0.4  0.2  0.2  4.5  6.0    1.1  0.5  0.9 11.1  6.3

Table 8.7: ε values (%) for five algorithms: BMAR (B), RLS (R), H∞ (H), PCA (P) and CCA (C).

           Subject 1            Subject 2            Subject 3            Subject 4             Average
         B   R   H   P   C    B   R   H   P   C    B   R   H   P   C    B   R   H   P   C    B   R   H   P   C
Idle     0   7   8  38  26    0   8  13  45  31    0   5   7  48  31    0   5   5  36  23    0   6   8  42  28
OA       0  24  28  37  26    0  21  29  42  32    0  29  34  42  34    0  16  21  37  24    0  22  28  40  29
ERP      0   7   9  38  25    0   7  15  43  27    0   1   9  37  21    0   4   4  29  19    0   5   9  37  23

8.2.5 Other Methods

The above metrics were used to compare the proposed method with 4 other online algorithms: 1) the RLS algorithm [5], which has fast convergence; 2) the time-varying H∞ algorithm [7], which converges more slowly but was previously shown to perform slightly better than RLS; 3) PCA [19]; and 4) CCA [20]. The latter two methods do not require as much data as ICA to converge, and are less computationally intensive than ICA or other BSS methods. Initial attempts to use ICA and BSS required several orders of magnitude more computation time than the proposed method, and at least an order of magnitude more than either PCA or CCA. The implementations in the AAR toolbox (http://www.cs.tut.fi/~gomezher/projects/eeg/aar.htm) were used for the existing algorithms and for the automatic identification of OA components in the PCA and CCA methods [1].

8.3 Results

Table 8.2 shows R values for the SS head model with per-session calibration. Periods with OA, hand extension (ERP), and neither (Idle) are shown separately. High R during OA and low R otherwise is preferred. Table 8.3 shows the same results with per-subject calibration. Similarly, Tables 8.4 and 8.5 show the results of using the BE head model. On average, during OA, using the ABS error measure yielded higher R (more OA removed: 21.6 in Table 8.2, 21.3 in Table 8.3) than using RV (20.5, 21.2) but lower than using VAR (22.2, 22.9) for the SS head model; for the BE head model, ABS resulted in higher R (43.2, 44.2) than using RV (27.7, 35.0) or VAR (33.9, 41.1). During ERP and Idle, the ABS error measure's R value was slightly higher. On average, the BE head model's R value was higher. There was, however, no statistically significant (p < .05) difference among the three error measures or the two head models. There was also no statistically significant (p < .05) difference between per-subject and per-session calibration.

Based on the above, a BE head model using the ABS error measure and per-subject calibration (henceforth “BMAR”, the Biophysical Model-based Artifact Removal method) was compared with the algorithms described in Section 8.2.5 (see Table 8.6 for results). This combination of head model and dipole localization parameters provides the best calibration option, removing the most artifact during OA while minimally affecting non-OA EEG periods. That is, the BMAR method's R value (44.2) is higher than the other methods' during OA, and is only slightly higher (1.1) than RLS and H∞ and lower than either PCA or CCA during non-OA periods (see also Fig. 8.3). Fig. 8.4 shows a sample EEG segment and the output of all 5 algorithms at 4 EEG channels. The 2-sec segment contains parts with only OA, with only ERP and with neither. The BMAR method follows the original EEG except during OA and briefly afterwards. The method uses frontal EEG electrodes to estimate the EOG signals (see Section 8.2.3), which are then thresholded to detect when an OA occurs. Since frontal EEG electrodes can contain components not related to OA, it is possible that this method of OA detection can occasionally cause the BMAR method to continue removing signal during non-OA periods, as shown during the period of 700ms-900ms in Fig. 8.4.
All other methods introduce visible deviation from the original EEG during both OA and non-OA periods. Fig. 8.5 shows the same results, but this time plotting the logarithm of the squared magnitude of the signals in order to show more details of the residual signal when a given method removes substantial portions of the original EEG. For example, during the OA portion, from Fig. 8.4(a) it appears as if the BMAR method has removed all the signal at Fpz (including non-OA components), while from Fig. 8.5(a) it is clear that some residual EEG still remains.

Table 8.7 compares ε for the methods (for the same data). The BMAR method is the only one with ε = 0, since it is the only one that never removes more power than was originally present in the EEG.

Also, to evaluate the consistency of performance of each method, 3 ANOVA analyses of the effects on R of (i) mental state (Idle, OA, ERP), (ii) task (e.g., counting, relaxed, etc.) during OA and (iii) task during non-OA periods were performed, and the results are shown in Table 8.8. For example, for the BMAR method, p < .05 in column 1: there is significant evidence that the method is affected by the mental state (p < .05 indicates 95% confidence that the null hypothesis (that the mean R's across mental states are equal) can be rejected). This is as expected, since more signal is removed (higher R) during OA periods than during non-OA ones. Likewise, there is no evidence (p > .05) that the task affects the BMAR method's performance. The only other method with such consistency is H∞, which has a lower average R (i.e., removes less signal) during OA periods.

Table 8.8: ANOVA results showing the significance (p values) of the effects on R of (i) mental state (Idle/OA/ERP), (ii) task (e.g., counting, relaxed, etc.) during non-OA periods, and (iii) task during OA periods.

         ANOVA for       ANOVA for Task    ANOVA for Task
         Mental State    Without OA        With OA
RLS      .011            .448              .007
H∞       .029            .586              .055
PCA      .013            .012              .455
CCA      .060            .000              .394
BMAR     .045            .376              .060
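Each entry in Table 8.8 is the p value of a one-way ANOVA on R. The mechanics can be sketched with SciPy; the per-subject BMAR averages from Table 8.6 are used here purely to illustrate the grouping (the actual analysis was run over the full set of R observations, so the resulting p value will differ):

```python
from scipy.stats import f_oneway

# Per-subject average R for BMAR from Table 8.6, grouped by mental state.
r_idle = [0.1, 3.5, 0.2, 0.6]
r_oa   = [11.8, 26.3, 127.0, 12.0]
r_erp  = [0.0, 4.0, 0.0, 0.4]

F, p = f_oneway(r_idle, r_oa, r_erp)
print(f"F = {F:.2f}, p = {p:.3f}")  # p < .05 would reject equal mean R across states
```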
[Figure 8.3 bar chart omitted: R values (0 to 50) for BMAR, RLS, Hinf, PCA and CCA, grouped by Idle, OA and ERP.]

Figure 8.3: R values for all algorithms during periods of OA only, ERP only, and neither (Idle).

[Figure 8.4 panels (a)-(d) omitted.]

Figure 8.4: Sample plots of original EEG (thicker dotted line) and signals with OA removed by the RLS, H∞, PCA, CCA and BMAR methods at four electrode positions over a 2000ms period. The data contains a segment with an eye movement (OA), a segment with a hand extension (ERP), and samples with neither.

[Figure 8.5 panels (a)-(d) omitted.]

Figure 8.5: Sample plots of the logarithm of the squared magnitude of the original EEG (thicker dotted line) and signals with OA removed by the RLS, H∞, PCA, CCA and BMAR methods at four electrode positions over a 2000ms period. The data contains a segment with an eye movement (OA), a segment with a hand extension (ERP), and samples with neither. The logarithm of the squared magnitude is used to show more details of the residual signal when a given method removes substantial portions of the original EEG.

8.4 Discussion

During OA, the BE head model (Tables 8.4 and 8.5) has consistently higher R than the SS model (Tables 8.2 and 8.3). That is, using the BE model removes more eye movement and blink effects. At other times, the BE model has the same or slightly higher R (in this case, since R represents signal removed when there is no OA, lower R is preferred). Overall, a BE head model, instead of an SS model, and the ABS error measure (R = 44.2 for the BE head model), instead of RV (R = 35.0) or VAR (R = 41.1), provide the best combination because of their superior ability to remove OA during eye movements and blinks.

For this data, the BMAR method removes more OA (R = 44.2) during eye movements and blinks than any of the other methods (R <= 10.2), and is the only one that never removes more power than that of the original EEG, thus eliminating a common type of distortion caused by OA removal. Even if the same OA detector (using eEOG) were used to “gate” the other methods, the BMAR method would still remove more OA during eye movements and blinks, making it overall the best choice.

A closer examination of Table 8.8 revealed that R for the BMAR method was significantly (p < .05) affected by the mental state (Idle, OA or ERP). Table 8.6 also shows higher R for BMAR during OA than during Idle and ERP. Together, these results show that BMAR removes more signal during OA than during Idle or ERP, and that the difference is statistically significant. Further, there is no evidence that R for the BMAR method is significantly (p < .05) affected by the task during either OA or non-OA periods: it is unaffected by the task being performed. The only other method that is significantly affected by mental state but not by task during both OA and non-OA periods is the H∞ method, whose R value during OA (10.2 in Table 8.6) is substantially less than that of the BMAR method (44.2 in Table 8.6).

8.5 Conclusions

A novel OA removal method (BMAR) has been proposed and evaluated. The method operates in real time, fully automatically: no intervention or manually selected threshold is required. It only requires a short once-per-subject calibration and does not require subject-specific MRI. It also does not require EOG electrodes during real-time operation.

For the data considered, the method performed consistently over a variety of tasks. Specifically, the task did not significantly (p < .05) affect the performance of the method (all but one of the other methods were significantly affected by the task). In removing both saccades and blinks, it removed more than 4 times as much OA as the other methods (R = 44.2 for BMAR, R <= 10.2 for all other methods). In terms of distortion, it was the only method that never removed more power than was present in the original EEG (ε = 0 for BMAR, ε > 0 for all other methods), thus avoiding a common type of distortion.

References

[1] G. Gomez-Herrero, W. D. Clercq, H. Anwar, O. Kara, K. Egiazarian, S. V. Huffel, and W. V. Paesschen, “Automatic removal of ocular artifacts in the EEG without an EOG reference channel,” in Proceedings of the 7th Nordic Signal Processing Symposium (NORSIG), 2006, pp. 130–133.

[2] J. J. M. Kierkels, G. J. M. van Boxtel, and L. L. M. Vogten, “A model-based objective evaluation of eye movement correction in EEG recordings,” IEEE Transactions on Biomedical Engineering, vol. 53, no. 2, pp. 246–253, 2006.

[3] K. H. Ting, P. C. W. Fung, C. Q. Chang, and F. H. Y. Chan, “Automatic correction of artifact from single-trial event-related potentials by blind source separation using second order statistics only,” Medical Engineering & Physics, vol. 28, no. 8, pp. 780–794, 2006.

[4] G. L. Wallstrom, R. E. Kass, A. Miller, J. F. Cohn, and N. A. Fox, “Automatic correction of ocular artifacts in the EEG: a comparison of regression-based and component-based methods,” International Journal of Psychophysiology, vol. 53, no. 2, pp. 105–119, 2004.
[5] P. He, G. Wilson, and C. Russell, “Removal of ocular artifacts from electro-encephalogram by adaptive filtering,” Medical and Biological Engineering and Computing, vol. 42, no. 3, pp. 407–412, 2004.

[6] T. Liu and D. Yao, “Removal of the ocular artifacts from EEG data using a cascaded spatio-temporal processing,” Computer Methods and Programs in Biomedicine, vol. 83, no. 2, pp. 95–103, 2006.

[7] S. Puthusserypady and T. Ratnarajah, “H∞ adaptive filters for eye blink artifact minimization from electroencephalogram,” IEEE Signal Processing Letters, vol. 12, no. 12, pp. 816–819, 2005.

[8] A. Schlogl, C. Keinrath, D. Zimmermann, R. Scherer, R. Leeb, and G. Pfurtscheller, “A fully automated correction method of EOG artifacts in EEG recordings,” Clinical Neurophysiology, vol. 118, no. 1, pp. 98–104, 2007.

[9] D. Talsma, “Auto-adaptive averaging: Detecting artifacts in event-related potential data using a fully automated procedure,” Psychophysiology, vol. 45, no. 2, pp. 216–228, 2008.

[10] Y. Li, “An extended EM algorithm for joint feature extraction and classification in brain-computer interfaces,” Neural Computation, vol. 18, no. 11, p. 2730, 2006.

[11] C. A. Joyce, I. F. Gorodnitsky, and M. Kutas, “Automatic removal of eye movement and blink artifacts from EEG data using blind component separation,” Psychophysiology, vol. 41, no. 2, pp. 313–325, 2004.

[12] B. Noureddin, P. D. Lawrence, and G. E. Birch, “Quantitative evaluation of ocular artifact removal methods based on real and estimated EOG signals,” in Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2008), 2008, pp. 5041–5044.

[13] A. Delorme and S. Makeig, “EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis,” Journal of Neuroscience Methods, vol. 134, no. 1, pp. 9–21, 2004.

[14] E. Niedermeyer and F. L. da Silva, Electroencephalography: Basic Principles, Clinical Applications, and Related Fields, 4th ed. Baltimore, MD: Lippincott Williams & Wilkins, 1998.

[15] C. J. James and O. J. Gibson, “Temporally constrained ICA: an application to artifact rejection in electromagnetic brain signal analysis,” IEEE Transactions on Biomedical Engineering, vol. 50, no. 9, pp. 1108–1116, 2003.

[16] B. N. Cuffin and D. Cohen, “Comparison of the magnetoencephalogram and electroencephalogram,” Electroencephalography and Clinical Neurophysiology, vol. 47, no. 2, pp. 132–146, 1979.

[17] T. F. Oostendorp and A. van Oosterom, “Source parameter estimation in inhomogeneous volume conductors of arbitrary shape,” IEEE Transactions on Biomedical Engineering, vol. 36, no. 3, pp. 382–391, 1989.

[18] T. D. Lagerlund, F. W. Sharbrough, and N. E. Busacker, “Spatial filtering of multichannel electroencephalographic recordings through principal component analysis by singular value decomposition,” Journal of Clinical Neurophysiology, vol. 14, no. 1, pp. 73–82, 1997.

[19] N. Ille, P. Berg, and M. Scherg, “Artifact correction of the ongoing EEG using spatial filters based on artifact and brain signal topographies,” Journal of Clinical Neurophysiology, vol. 19, no. 2, pp. 113–124, 2002.
[20] W. D. Clercq, A. Vergult, B. Vanrumste, W. V. Paesschen, and S. V. Huffel, “Canonical correlation analysis applied to remove muscle artifacts from the electroencephalogram,” IEEE Transactions on Biomedical Engineering, vol. 53, no. 12, pp. 2583–2587, 2006.

[21] S. Puthusserypady and T. Ratnarajah, “Robust adaptive techniques for minimization of EOG artefacts from EEG signals,” Signal Processing, vol. 86, no. 9, pp. 2351–2363, 2006.

[22] R. J. Croft and R. J. Barry, “Removal of ocular artifact from the EEG: a review,” Neurophysiologie Clinique/Clinical Neurophysiology, vol. 30, no. 1, pp. 5–19, 2000.

[23] P. L. Nunez, Electric Fields of the Brain. New York: Oxford University Press, 1981.

[24] P. Berg and M. Scherg, “Dipole models of eye movements and blinks,” Electroencephalography and Clinical Neurophysiology, vol. 79, no. 1, pp. 36–44, 1991.

[25] A. Bashashati, B. Noureddin, R. K. Ward, P. D. Lawrence, and G. E. Birch, “An experimental study to investigate the effects of a motion tracking electromagnetic sensor during EEG data acquisition,” IEEE Transactions on Biomedical Engineering, vol. 53, no. 3, pp. 559–563, 2006.

[26] R. K. S. Kwan, A. C. Evans, and G. B. Pike, “MRI simulation-based evaluation of image-processing and classification methods,” IEEE Transactions on Medical Imaging, vol. 18, no. 11, pp. 1085–1097, 1999.

[27] J. J. M. Kierkels, J. Riani, J. W. M. Bergmans, and G. J. M. van Boxtel, “Using an eye tracker for accurate eye movement artifact correction,” IEEE Transactions on Biomedical Engineering, vol. 54, no. 7, pp. 1256–1267, 2007.

Chapter 9

Conclusions

In Chapters 2 through 6, new observations regarding factors affecting OA removal and new approaches to objectively evaluating OA removal methods on real EEG were described. The insights thereby gained were used to collect the data for, and evaluate the performance of, the two novel OA removal techniques presented in Chapters 7 and 8. Figure 1.1 shows the relationships among the work described in the chapters. In Section 9.1 the conclusions of the previous chapters are summarized and discussed in the context of the overall thesis objectives. The strengths and weaknesses of the methods are outlined in Section 9.2. In Section 9.3 the results of a simulation test used to compare the OA removal methods considered in Chapter 8 against a “ground truth” are presented. Finally, a discussion of potential future work is presented in Section 9.4.

9.1 Discussion

9.1.1 Ocular Artifact Characteristics

The first thesis objective was achieved with the study carried out and reported in Chapter 2, which examined more closely the frequency characteristics of EOG signals during eye movements and blinks. This was the first investigation, using a short-time Fourier transform, of the time-frequency content of EOG during eye movements and blinks. A time-frequency analysis was required since neither neural nor ocular signals are stationary, making simple spectral analysis unsuitable. It was found that large eye movements and blinks can have comparable amplitudes in the EOG. It was further discovered that the EOG can contain frequencies as high as 181Hz during eye movements and blinks.

Contributions

The findings of this work suggest that, for OA removal, EOG signals should be filtered below 181Hz or sampled above 362Hz to avoid aliasing. In practice, since most EEG signals of interest are at frequencies below 30Hz, it would be best to filter the EOG closer to 30Hz before sampling.
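As an illustration of this recommendation, the sketch below (assuming SciPy, and an EOG record that has already been digitized well above 362Hz) applies a zero-phase low-pass filter before any further processing or downsampling; the cutoff and filter order are illustrative choices, not values prescribed by this thesis:

```python
from scipy.signal import butter, filtfilt

def lowpass_eog(eog, fs, cutoff=35.0, order=4):
    """Low-pass filter an EOG record to just above the 30Hz EEG band of
    interest, attenuating the higher-frequency content (up to ~181Hz)
    found during eye movements and blinks."""
    b, a = butter(order, cutoff / (fs / 2.0))   # normalized low-pass cutoff
    return filtfilt(b, a, eog)                  # zero-phase filtering
```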
Furthermore, any video-based eye tracking device to be used for OA removal would need to have a frame rate of at least 362Hz. This is because, unlike EOG, it is not possible to apply an analog temporal filter to video. Instead, the images have to be sampled at a high enough rate (at least twice the highest frequency present, i.e., 2 x 181Hz = 362Hz in this case) to avoid aliasing. The eye movement signals (e.g., pupil position) extracted from the images can subsequently be digitally filtered. These recommendations were used to build the data collection system for the rest of the work in this thesis. Specifically, an EEG and EOG amplifier with suitable sampling and filtering frequencies and a high-speed eye tracker with a sufficiently high frame rate were used to collect data.

9.1.2 High-speed Eye Tracking For Ocular Artifact Removal

Chapter 3 described a high-speed eye tracking system capable of sampling rates exceeding 400Hz, considerably faster than previous systems [1, 2], which operated at 30Hz. This was made possible using hardware and software regions of interest (ROIs) and new techniques for streaming high-speed eye tracking images to off-the-shelf PC hardware.

Contributions

A high-speed image processing and recording technique using a combination of software and hardware regions of interest was used to achieve higher frame rates than previously reported. Such a high-speed eye tracking device is essential for OA removal methods that seek to replace the EOG reference with an eye tracking-based signal, such as the one described in this thesis.

9.1.3 Electromagnetic Sensors and EEG

During EEG recording, it is often useful to physically record the time course of a motor task such as a hand movement, especially in the context of a BCI. One device that can be used effectively for this purpose is an electromagnetic motion tracking sensor. Chapter 4 presented the results of a new study carried out to determine the effects of using such a sensor on an EEG recording system. It is to be noted that although in Fig. 4.3 there appears to be a noticeable reduction in power at around 10Hz when the transmitter is turned on for subject 2, the same was not found in subject 1, as shown in Fig. 4.2. If the change were due to the motion tracking system, it would show up systematically for every subject. The study thus confirmed the hypothesis that, at typical EEG frequencies, there is no evidence of any spectral interference or distortion of the EEG caused by the sensor.

Contributions

For the first time, a study was carried out to determine whether including an electromagnetic motion tracking sensor would interfere with EEG recordings. No evidence was found of any interference in the 0.1−55 Hz range. The sensors were thus used to record hand extension tasks for evaluating the effect of a motor task on the OA removal methods presented in this thesis.

9.1.4 Performance Evaluation

The second thesis objective was achieved with the metric presented in Chapter 5 for measuring distortion caused by OA removal methods. Unlike previous methods [3–6] that use simulated data or visual inspection of real data, the new metric provides a quantitative measure of one common type of distortion of the underlying EEG caused by OA removal methods. When combined with the metric proposed by Puthusserypady et al. [7] for measuring how much OA is removed, it provides a more complete analysis of the performance of OA removal methods.
Contributions

The use of the new metric allowed an analysis of the OA removal methods introduced in this thesis, and their objective comparison with existing methods. Specifically, the likelihood that a given OA removal method removes more power than was present in the original EEG was reported. The metric can be used as part of the evaluation of any OA removal method.

9.1.5 Factors Affecting Ocular Artifact Removal

Ocular artifact removal methods are normally evaluated only during eye movements and blinks. However, in a practical online application, they should also perform well (i.e., ideally remove nothing) during periods without OA. Chapter 6 presented the results of a study to determine whether existing OA removal methods perform differently during non-OA periods. For non-OA periods, various tasks including a hand movement (which was recorded using the sensor described in Chapter 4) were investigated. The results showed that the methods considered remove less signal during non-OA periods, as expected. They also indicated, using the metric presented in Chapter 5, that the method that removes the most OA is also more likely to distort the underlying EEG. In addition, the effect of replacing the EOG reference with EEG channels was investigated for the OA removal methods considered. It was found that using EOG estimated from as few as four EEG electrodes only increased the likelihood of distorting the EEG from 14% to 19% and from 21% to 23% for the two methods considered.

Contributions

Previously, the performance of OA removal methods did not take into account periods where the subject was performing tasks other than eye movements and blinks. It had not previously been reported whether a motor task such as a hand extension affects the performance of OA removal methods. The findings of this study underscored the importance of reporting OA removal performance separately for periods with OA and for periods without OA but with other tasks being performed (e.g., hand movement). Furthermore, the results showed that it is possible to use frontal EEG channels instead of EOG for online OA removal methods that require an EOG reference input, and that for some applications (e.g., BCI), the slight reduction in performance may be acceptable in order to avoid using EOG electrodes. This finding was crucial to the eye tracker-based OA removal technique developed as part of this thesis. It is also of practical benefit to online applications that require OA removal without the use of EOG electrodes.

9.1.6 Eye Tracking-Based Ocular Artifact Removal

The third thesis objective was achieved with the novel approach presented in Chapter 7 of using an eye tracker for online OA removal without the need for EOG. The results from Chapter 6 showed that using 3 frontal EEG channels instead of EOG was possible, but with a slight performance decrease. To address this, those EEG channels were combined with the output of an eye tracker. The only other reported attempt to use an eye tracker for online OA removal was by Kierkels et al. [6], who used an eye tracker to generate pupil positions as inputs to a Kalman filter to remove the effects of eye movements. They were not able to handle blinks, however, and the Kalman filter required a 30-second starting period and manual tuning of parameters for optimal performance. Furthermore, their eye tracker operated at 50Hz, which is below the minimum frame rate recommended in Chapter 2 of this thesis.
The work presented used the findings from Chapter 2 and the system described in Chapter 3 to propose a means of replacing the EOG with an eye tracker-based reference for online OA removal methods that require an EOG reference. The high-speed eye tracker (which operated above the required minimum frame rate) was used in conjunction with (a) a new online algorithm (BSG) for extracting the time course of a blink from eye tracker images, and (b) three frontal EEG channels, to remove both eye movement and blink artifacts. The method did not require any calibration, starting period or manual intervention. The effectiveness of the approach was demonstrated with two adaptive filters, with a full analysis of the results at electrodes across the scalp for eye movements and blinks of different amplitudes.

Contributions

The approach presented for using a high-speed eye tracker for OA removal eliminates the need for EOG electrodes attached to the face, which is critical for practical daily applications. Two OA removal methods were shown to benefit from using the proposed eye-tracker method instead of either EOG or frontal EEG alone as a reference. The eye tracker can be used in applications where EEG and eye tracking are combined for other purposes. The use of the eye tracker and BSG algorithm provides the means for tracking point-of-gaze and blink dynamics simultaneously with EEG data collection and processing, which is often desired or required in clinical studies and a variety of human computer interface applications such as neuromarketing, brain-computer interfaces, and pilot and driver drowsiness monitoring. The last two applications could particularly benefit from the BSG algorithm, as it can be used to measure blink frequency and speed as an indicator of alertness, for example, for pilots and drivers.

9.1.7 Biophysical Model-Based Ocular Artifact Removal

The final thesis objective was achieved with the novel biophysical model-based OA removal method (BMAR) presented in Chapter 8. In contrast to previous methods [4, 6, 8–16], it worked without manual intervention and without the need for EOG electrodes during real-time operation. It only required a short once-per-subject calibration and, unlike previous attempts to use biophysical models for OA removal [17], it operated in real time and did not require subject-specific MRI. Using the metric in Chapter 5 and the findings described in Chapter 6, its performance on real EEG was compared with two adaptive filters [12][14] and two methods based on component analysis [5, 18, 19]. The performance of each was shown separately for periods with (a) eye movements and blinks, (b) a hand extension task and (c) neither. The effect of performing various non-OA tasks was also examined for each algorithm. The results showed that the new BMAR method consistently removed more OA than the other methods over a variety of tasks. In removing both saccades and blinks, it removed more than 4 times as much OA as the other methods, and was the only one that never removed more power than was present in the original EEG, thus eliminating a common type of distortion caused by OA removal.

Contributions

The BMAR method provides a fully automated means of removing OA online, without the need for EOG electrodes attached to the face during real-time operation. It can remove the effects of both eye movements and blinks with minimal calibration, and without the need for subject-specific MRI. Also, unlike other methods, it does not assume the EEG to be stationary.
It further avoids a type of distortion of the underlying EEG that is often caused by OA removal.

9.2 Strengths and Weaknesses

The time-frequency analysis of eye movements and blinks provides a recommendation, based on empirical evidence, for sampling and filtering frequencies for EOG and for a minimum frame rate for eye tracking devices used for OA removal, such as the one described in this thesis. However, the study undertaken only analyzed data from EOG electrodes, leaving open the question of whether the higher frequency content would also exist at EEG electrodes.

The investigation of the effects of an electromagnetic motion tracking sensor on an EEG recording system found no evidence of spectral interference between the sensor used in this thesis and an EEG recording system in the normal EEG frequency range. However, the study only included two subjects, and did not carry out extensive tests of EEG collected on different days under different conditions.

The metric introduced to quantify EEG distortion caused by OA removal methods measures how often a given method removes more power than is present in the original EEG. This is a common type of distortion that is not reported in the literature. However, there are other types of distortion not captured by this metric. It does not measure, for example, whether a given method distorts underlying event-related potentials.

The investigation of the effects of factors such as whether the individual is performing tasks other than eye movements or blinks (e.g., hand movement or counting) on OA removal performance showed that some methods are significantly affected by such factors. The study provided evidence that a more thorough reporting of OA removal performance than is currently found in the literature is needed to make a sound evaluation. Specifically, the results suggested that methods should be evaluated during periods both with and without eye movements and blinks, and over a range of tasks.

The study carried out to test whether frontal EEG electrodes can be used to replace EOG as a reference input showed that three specific frontal EEG electrodes can replace the EOG for some OA removal methods. For some applications, the results indicated that frontal EEG channels could be used without adversely affecting the performance of the method. The results, however, should be confirmed with more subjects (only four were used) and more methods (only one was included).

The approach of using an eye tracker with three frontal EEG channels instead of EOG for use by any online OA removal method works for both eye movements and blinks. It requires no calibration or manual intervention, and can be used to eliminate the need for EOG electrodes for any OA removal method that otherwise requires an EOG reference. By using a high-speed eye tracker and a novel blink signal generation algorithm, it captures all the necessary eye movement and blink dynamics, and simultaneously provides additional information about gaze and blink dynamics, which is useful in both clinical and human computer interface applications. A full analysis of performance at electrodes across the entire scalp for eye movements and blinks of different amplitudes showed that the two adaptive filters considered benefited from the use of the eye tracker. The approach relies on three frontal EEG electrodes, which may not always be available during EEG recording.
Other frontal electrodes may be used instead, but they were not investigated in this work. Also, the effect of using the eye tracker-based reference on EEG distortion was not investigated, and remains an open question. Finally, in the context of EEG data collection, using EOG electrodes is less costly than a video-based eye tracker. Therefore, for applications that do not need an eye tracker and can tolerate the use of electrodes on the face (e.g., some clinical applications), the addition of the eye tracker may not be justified.

The biophysical model-based OA removal method (BMAR) presented requires no EOG during normal operation, is fully automated, requires minimal calibration and needs no subject-specific MRI. In terms of both OA removal and one type of distortion, it outperforms the four existing methods considered. However, it still requires EOG for a calibration session, as well as the locations of electrodes on the scalp. There is also no comparison provided with using subject-specific MRI or other types of head models (e.g., finite element ones) to see if those additional factors would significantly improve the performance of the method. In addition, only one type of distortion is reported. For example, the method may distort event-related potentials more than other methods. Finally, the method was only evaluated using four subjects, and compared to only four other methods.

9.3 Comparison Against “Ground Truth”

As a reality check on the BMAR method, a simulation was carried out in order to obtain a “ground truth” against which the output of the BMAR and other OA removal methods could be compared. The “artifact-free” brain signal VB in Fig. 7.1 was simulated using 14 seconds of EEG from one subject during Task 1 (staring at a fixed dot in the middle of a computer screen) in Table 7.1. The subject was not performing any hand extension task or OA during the EEG segment chosen. To simulate the ocular artifact signal VO in Fig. 7.1, the average of the calibration trials described in Section 8.2.3 for the same subject was used. For this simulation, VA and VN are assumed to be zero. For each of the 3 OA types (horizontal eye movements, vertical eye movements and blinks), the average of the 1-sec calibration trials was added to VB at 2-sec intervals to form E in Fig. 7.1. The resulting E was used by each OA removal algorithm shown in Table 9.1 to produce Y in Fig. 7.1. The following metrics were calculated:

$$\mathrm{MSE} = \frac{\sum_{c=1}^{N_C}\sum_{t=1}^{N_T}\bigl(V_B(c,t) - Y(c,t)\bigr)^2}{N_C N_T} \qquad (9.1)$$

$$\mathrm{SNR} = 10\log_{10}\!\left(\frac{\sum_{c=1}^{N_C}\sum_{t=1}^{N_T} V_B(c,t)^2}{\sum_{c=1}^{N_C}\sum_{t=1}^{N_T}\bigl(E(c,t) - Y(c,t)\bigr)^2}\right) \qquad (9.2)$$

$$R = \frac{\sum_{c=1}^{N_C}\sum_{t=1}^{N_T}\bigl(E(c,t) - Y(c,t)\bigr)^2}{\sum_{c=1}^{N_C}\sum_{t=1}^{N_T} Y(c,t)^2} \qquad (9.3)$$

$$R'(t) = \begin{cases} 1 & \text{if}\ \ \sum_{c=1}^{N_C}\bigl(E(c,t) - Y(c,t)\bigr)^2 \Big/ \sum_{c=1}^{N_C} E(c,t)^2 > 1 \\ 0 & \text{otherwise} \end{cases}, \qquad \varepsilon = \frac{1}{N_T}\sum_{t=1}^{N_T} R'(t) \qquad (9.4)$$

where NC is the number of channels, NT is the number of samples, VB and E are as above, and Y is the output of the OA removal method, as in Fig. 7.1. The definitions for SNR, R and ε are the same as those found in [14], [20] and [21], respectively.

As shown in Table 9.1, for all three types of OA, the SNR is higher for the BMAR method than for any of the other methods considered, based on this simulation. For saccades, the MSE is also substantially lower than any of the other methods, and for blinks it is lower than RLS, H∞ and PCA, but slightly higher than CCA. The BMAR method is the only one with ε = 0. These results are consistent with those using actual EEG data in Chapter 8.
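Since VB, E and Y are all known in the simulation, Eqs. (9.1)–(9.4) can be computed directly. A minimal sketch, assuming NumPy arrays of shape (NC, NT):

```python
import numpy as np

def simulation_metrics(VB, E, Y):
    """MSE, SNR, R and epsilon of Eqs. (9.1)-(9.4) for simulated data."""
    NC, NT = VB.shape
    mse = ((VB - Y) ** 2).sum() / (NC * NT)                       # Eq. (9.1)
    snr = 10 * np.log10((VB ** 2).sum() / ((E - Y) ** 2).sum())   # Eq. (9.2)
    R = ((E - Y) ** 2).sum() / (Y ** 2).sum()                     # Eq. (9.3)
    r_prime = ((E - Y) ** 2).sum(axis=0) > (E ** 2).sum(axis=0)
    eps = r_prime.mean()                                          # Eq. (9.4)
    return mse, snr, R, eps
```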
Table 9.1: BMAR vs. other methods using simulation data.

           Horizontal saccades         Vertical saccades            Blinks
        MSE  SNR(dB)   R   ε(%)     MSE  SNR(dB)   R   ε(%)     MSE  SNR(dB)   R   ε(%)
BMAR    4.6    6.23   0.3    0      2.3   10.31   0.1    0     15.5    1.71   0.4    0
RLS    22.5   -1.06   2.3   31     22.0   -0.66   2.2   31     22.3   -3.61   4.2   33
H∞     17.3   -0.06   4.5   15     16.8    0.41   4.2   12     17.1   -3.22  11.2   22
PCA    18.1   -0.15   7.1   34     17.7    0.25   8.3   41     18.9   -2.98  10.4   39
CCA    14.0    0.97   3.8   22     13.5    1.40   4.1   25     14.8   -2.25   6.2   29

It is noted that even though the MSE and SNR metrics show that the BMAR method outperforms the other methods, the R value for BMAR is consistently lower. This reveals a weakness of using just the R metric. A high R indicates that more artifact is removed, but does not measure how much the removal method may have distorted the underlying brain signals (VB). For example, for horizontal saccades, the PCA method has the highest value of R (7.1), but a high MSE and low SNR. Although it removes more signal during the artifact removal process, it also introduces more distortion. This is captured by the ε metric, which shows that PCA has the highest value (34%) of any of the methods considered. Thus an accurate measure of the performance of a removal method is obtained only when both the R and ε metrics are used.

9.4 Future Work

A number of new observations about EOG signal characteristics during eye movements and blinks, and about the evaluation of online OA removal methods, were made in this thesis. Based on these observations and novel data collection techniques, two OA removal methods were presented for the first time. As EEG continues to be used in online clinical studies and human computer interface applications, effective online OA removal methods that do not require EOG electrodes attached to the face will become increasingly critical, as will quantitative measures of their performance in terms of both OA removal and EEG and ERP distortion. A number of potential areas of future work for developing and evaluating online ocular artifact removal without EOG are as follows.

• Data recording techniques. Frequency characteristics of EEG signals during eye movements and blinks are needed to confirm whether the minimum sampling and filtering frequency recommended for EOG, and the minimum frame rate recommended for video-based eye trackers, are necessary and sufficient at EEG electrodes. Also, the study investigating the effects of an electromagnetic motion tracking sensor on an EEG recording system could be extended to include more subjects and data collected under varying conditions.

• Improved OA removal evaluation measures. While a new metric was introduced to quantify one type of EEG distortion caused by OA removal methods, it does not measure all types of such distortion. One method for measuring distortion with real (i.e., not simulated) EEG would be to consider ensemble averages of ERPs occurring with and without OA. The average of ERP trials without OA can be compared to the average of ERP trials with OA after the OA removal method has been applied. In addition, in many cases the definition of “distortion” and the extent to which it is tolerable depends on the application. For example, in a BCI, measures such as false positive and true positive rates can be examined before and after OA removal to evaluate the performance of a given method.

• Online ocular artifact removal using an eye tracker instead of EOG. The approach presented requires three frontal EEG electrodes. Further research may find ways to eliminate the need for those electrodes, or suggest other electrodes that could be used, in case those three specific ones are not already being recorded in the EEG. In addition, the analysis of performance would benefit from the inclusion of additional subjects, and from an evaluation of EEG distortion compared to an EOG reference.
• Online ocular artifact removal without EOG or eye tracker. The BMAR method still requires EOG for a calibration session. The EOG is required for detecting periods with eye movements and blinks during the calibration. Other methods for detecting OA periods without EOG could be used instead. In addition, the method requires the locations of electrode positions on the scalp. It may be possible to use standard electrode positions without adversely affecting performance. At the same time, using subject-specific MRI or other types of head models may improve the performance. Furthermore, only one type of distortion was measured. As described above, the measurement of other types of distortion is needed to provide a more complete evaluation of the method. Finally, the evaluation would benefit from the inclusion of additional subjects and comparison with additional methods.

• Comparison and combination of eye tracker-based and BMAR methods. It would be useful to compare the eye tracker-based and BMAR methods to evaluate which removes OA more effectively. In addition, it may be possible to combine the two methods, resulting in a new hybrid OA removal method that outperforms both. For example, currently the BMAR method uses frontal EEG electrodes to detect when an OA occurs, causing it to sometimes remove signal when there is no OA (see Section 8.3 and the period of 700ms-900ms in Fig. 8.4). By using an eye tracker to detect periods of OA, such errors are less likely to occur, thus improving the overall performance of the method.

References

[1] B. Noureddin, P. D. Lawrence, and G. E. Birch, “Online removal of eye movement and blink EEG artifacts using a high speed eye tracker,” in submission.

[2] ——, “Dipole modeling for low distortion, automated online ocular artifact removal from EEG without EOG,” in submission.

[3] J. J. M. Kierkels, G. J. M. van Boxtel, and L. L. M. Vogten, “A model-based objective evaluation of eye movement correction in EEG recordings,” IEEE Transactions on Biomedical Engineering, vol. 53, no. 2, pp. 246–253, 2006.

[4] K. H. Ting, P. C. W. Fung, C. Q. Chang, and F. H. Y. Chan, “Automatic correction of artifact from single-trial event-related potentials by blind source separation using second order statistics only,” Medical Engineering & Physics, vol. 28, no. 8, pp. 780–794, 2006.

[5] N. Ille, P. Berg, and M. Scherg, “Artifact correction of the ongoing EEG using spatial filters based on artifact and brain signal topographies,” Journal of Clinical Neurophysiology, vol. 19, no. 2, pp. 113–124, 2002.

[6] J. J. M. Kierkels, J. Riani, J. W. M. Bergmans, and G. J. M. van Boxtel, “Using an eye tracker for accurate eye movement artifact correction,” IEEE Transactions on Biomedical Engineering, vol. 54, no. 7, pp. 1256–1267, 2007.

[7] S. Puthusserypady and T. Ratnarajah, “Robust adaptive techniques for minimization of EOG artefacts from EEG signals,” Signal Processing, vol. 86, no. 9, pp. 2351–2363, 2006.
[8] G. Gomez-Herrero, W. D. Clercq, H. Anwar, O. Kara, K. Egiazarian, S. V. Huffel, and W. V. Paesschen, “Automatic removal of ocular artifacts in the EEG without an EOG reference channel,” in Proceedings of the 7th Nordic Signal Processing Symposium (NORSIG), 2006, pp. 130–133.

[9] Y. Li, Z. Ma, W. Lu, and Y. Li, “Automatic removal of the eye blink artifact from EEG using an ICA-based template matching approach,” Physiological Measurement, vol. 27, no. 4, pp. 425–436, 2006.

[10] C. A. Joyce, I. F. Gorodnitsky, and M. Kutas, “Automatic removal of eye movement and blink artifacts from EEG data using blind component separation,” Psychophysiology, vol. 41, no. 2, pp. 313–325, 2004.

[11] G. L. Wallstrom, R. E. Kass, A. Miller, J. F. Cohn, and N. A. Fox, “Automatic correction of ocular artifacts in the EEG: a comparison of regression-based and component-based methods,” International Journal of Psychophysiology, vol. 53, no. 2, pp. 105–119, 2004.

[12] P. He, G. Wilson, and C. Russell, “Removal of ocular artifacts from electro-encephalogram by adaptive filtering,” Medical and Biological Engineering and Computing, vol. 42, no. 3, pp. 407–412, 2004.

[13] T. Liu and D. Yao, “Removal of the ocular artifacts from EEG data using a cascaded spatio-temporal processing,” Computer Methods and Programs in Biomedicine, vol. 83, no. 2, pp. 95–103, 2006.

[14] S. Puthusserypady and T. Ratnarajah, “H∞ adaptive filters for eye blink artifact minimization from electroencephalogram,” IEEE Signal Processing Letters, vol. 12, no. 12, pp. 816–819, 2005.

[15] A. Schlogl, C. Keinrath, D. Zimmermann, R. Scherer, R. Leeb, and G. Pfurtscheller, “A fully automated correction method of EOG artifacts in EEG recordings,” Clinical Neurophysiology, vol. 118, no. 1, pp. 98–104, 2007.

[16] D. Talsma, “Auto-adaptive averaging: Detecting artifacts in event-related potential data using a fully automated procedure,” Psychophysiology, vol. 45, no. 2, pp. 216–228, 2008.

[17] D. Flanagan, R. Agarwal, Y. H. Wang, and J. Gotman, “Improvement in the performance of automated spike detection using dipole source features for artefact rejection,” Clinical Neurophysiology, vol. 114, no. 1, pp. 38–49, 2003.

[18] T. D. Lagerlund, F. W. Sharbrough, and N. E. Busacker, “Spatial filtering of multichannel electroencephalographic recordings through principal component analysis by singular value decomposition,” Journal of Clinical Neurophysiology, vol. 14, no. 1, pp. 73–82, 1997.

[19] W. D. Clercq, A. Vergult, B. Vanrumste, W. V. Paesschen, and S. V. Huffel, “Canonical correlation analysis applied to remove muscle artifacts from the electroencephalogram,” IEEE Transactions on Biomedical Engineering, vol. 53, no. 12, pp. 2583–2587, 2006.

[20] S. Puthusserypady and T. Ratnarajah, “Robust adaptive techniques for minimization of EOG artefacts from EEG signals,” Signal Processing, vol. 86, no. 9, pp. 2351–2363, 2006.

[21] B. Noureddin, P. D. Lawrence, and G. E. Birch, “Quantitative evaluation of ocular artifact removal methods based on real and estimated EOG signals,” in Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2008), 2008, pp. 5041–5044.

Appendix A

Methods

This appendix provides details of the methods used in this thesis. As part of the objectives of the thesis, a number of studies were carried out that required the use of existing OA removal methods.
Two existing methods were selected: one based on the recursive least squares (RLS) algorithm [1], which has fast convergence, and another based on the time-varying H∞ algorithm [2], which converges more slowly but has been shown to perform better than RLS for OA removal. Both methods are adaptive or nonlinear filtering techniques that have been shown to be suitable and effective for online removal of OA from EEG, given a good EOG signal as a reference. As mentioned in Section 1.4, nonlinear filters treat the EOG as a reference signal to be filtered and subtracted from the measured EEG as follows (see also Eq. 1.2 and Fig. 1.2):

$$e(n) = s(n) - r(n)^T H(n) \qquad (A.1)$$

where e(n) (V̂BS in Eq. 1.2) is the set of estimated EEG signals without OA, s(n) (V̂ in Eq. 1.2) is the set of measured scalp potentials, r(n) (TES in Eq. 1.2) is the set of reference EOG signals, and H(n) (VE in Eq. 1.2) the adaptive filter coefficients. Each method uses a different procedure for adaptively calculating H(n). The reference signal is assembled as follows:

$$r(n) = \begin{bmatrix} r_1(n) & r_1(n-1) & \cdots & r_1(n-M+1) \\ r_2(n) & r_2(n-1) & \cdots & r_2(n-M+1) \\ \vdots & \vdots & & \vdots \\ r_L(n) & r_L(n-1) & \cdots & r_L(n-M+1) \end{bmatrix} \qquad (A.2)$$

where r_k(n), k = 1, 2, ..., L is the set of L EOG signals collected, and M is the filter length, in samples.

The two methods were also used to test the proposed approach of using an eye tracker instead of EOG (Chapter 7), and to provide a comparison during the evaluation of the biophysical model-based method (Chapter 8). Sections A.1 and A.2 provide an overview of the RLS and H∞ algorithms, respectively, including a description of the parameters and the values chosen in this work. Since the work in this thesis did not aim to improve the two methods, but rather to use them for evaluation and comparison with new methods, the parameter values were drawn from [1] and [2] in order to provide a fair comparison with published results. In addition, improved evaluation of new and existing ocular artifact (OA) removal methods (one of the objectives of this thesis) included quantifying distortion of the underlying EEG (i.e., the signal of interest). Section A.3 describes a metric introduced to measure one type of such distortion. Finally, Section A.4 provides further details of how EOG signals are estimated from measured scalp EEG potentials in Chapters 5 and 8.

A.1 RLS Method

He et al. [1] introduced an adaptive filtering method for OA removal that uses a recursive least squares (RLS) algorithm to calculate the filter coefficients. A filter length of M = 3 was used as in [1], noting that the method is not sensitive to the choice of M, and that the incremental increase in performance is noticeably reduced for M > 3. At each time sample, a gain factor K(n) was calculated as follows:

$$K(n) = \frac{R(n-1)^{-1}\, r(n)}{\lambda + r(n)^T R(n-1)^{-1}\, r(n)} \qquad (A.3)$$

$$R(n)^{-1} = \lambda^{-1} R(n-1)^{-1} - \lambda^{-1} K(n)\, r(n)^T R(n-1)^{-1} \qquad (A.4)$$

$$R(0)^{-1} = 100 \cdot I \qquad (A.5)$$

where I is the identity matrix, r(n) is as in Eq. A.2, and λ is a “forgetting factor” introduced to mitigate the effects of non-stationarity in s(n). A value of λ = 1 effectively weights all previous samples equally in updating the filter coefficients, while values of 0 < λ < 1 give more weight to recent samples. As in [1], a value of λ = 0.9999 was used. It was found that for low values of λ the algorithm would not converge, and that for values between 0.995 and 1 the actual value of λ did not make a noticeable difference to the final performance of the method. The gain factor was then used to calculate the filter coefficients H(n) as follows:

$$H(n) = H(n-1) + K(n)\left(s(n) - r(n)^T H(n-1)\right) \qquad (A.6)$$

$$H(0) = 0 \qquad (A.7)$$

which in turn were used to filter the measured EEG s(n) as in Eq. A.1.
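For concreteness, the full RLS recursion of Eqs. (A.1)–(A.7) can be sketched as follows (NumPy assumed; start-up edge handling is simplified and the names are illustrative, not those of the original implementation):

```python
import numpy as np

def rls_oa_removal(s, r, lam=0.9999, M=3, delta=100.0):
    """RLS adaptive OA removal (Eqs. A.1-A.7).
    s: (n_samples, n_eeg) measured EEG; r: (n_samples, L) EOG references.
    Returns e: (n_samples, n_eeg) EEG with the filtered reference removed."""
    n_samples, n_eeg = s.shape
    L = r.shape[1]
    P = delta * np.eye(L * M)                    # R(0)^{-1} = 100*I (Eq. A.5)
    H = np.zeros((L * M, n_eeg))                 # H(0) = 0 (Eq. A.7)
    e = np.zeros_like(s, dtype=float)
    for n in range(n_samples):
        # Stack the current and M-1 previous reference samples (Eq. A.2).
        idx = np.clip(np.arange(n, n - M, -1), 0, None)
        rn = r[idx].T.reshape(-1, 1)             # (L*M, 1) column vector
        K = (P @ rn) / (lam + rn.T @ P @ rn)     # gain (Eq. A.3)
        P = (P - K @ rn.T @ P) / lam             # update R(n)^{-1} (Eq. A.4)
        e[n] = s[n] - (rn.T @ H).ravel()         # cleaned EEG (Eq. A.1)
        H = H + K @ e[n][None, :]                # coefficient update (Eq. A.6)
    return e
```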
A.2 H∞ Method

Puthusserypady et al. [2] proposed another adaptive filtering method for OA removal, using an H∞-based noise cancellation scheme to calculate the filter coefficients. Again a filter length of M = 3 was used, as the results in [2] indicated that higher filter orders did not produce noticeably better performance. The gain factor K(n) at each time sample was calculated as follows:

$$K(n) = \frac{\tilde{P}(n)\, r(n)}{1 + r(n)^T \tilde{P}(n)\, r(n)} \qquad (A.8)$$

$$\tilde{P}(n) = \left(P^{-1}(n) - \varepsilon_g^{-2}\, r(n) r(n)^T\right)^{-1} \qquad (A.9)$$

$$P(n) = \left(P^{-1}(n-1) + (1 - \varepsilon_g^{-2})\, r(n-1) r(n-1)^T\right)^{-1} + 10^{-5} \cdot I \qquad (A.10)$$

$$P(0) = 0.005 \cdot I \qquad (A.11)$$

where I is the identity matrix, r(n) is as in Eq. A.2, and εg and the initial conditions were the same as those of [2] (εg = 1.5). The gain factor was then used to calculate the filter coefficients H(n) as follows:

$$H(n) = H(n-1) + K(n-1)\left(s(n-1) - r(n-1)^T H(n-1)\right) \qquad (A.12)$$

$$H(0) = 0 \qquad (A.13)$$

which in turn were used to filter the measured EEG s(n) as in Eq. A.1. (Note that H(n) here corresponds to w(n), K(n) to g(n), and s(n) to y(n) in [2].)
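A single step of the H∞ gain recursion (Eqs. A.8–A.10) can be sketched as follows, with the matrix inverses written out directly; this is illustrative only (NumPy assumed), with P initialized as P(0) = 0.005·I per Eq. (A.11):

```python
import numpy as np

def hinf_gain_step(P_prev, r_now, r_prev, eps_g=1.5):
    """One time step of Eqs. (A.8)-(A.10): returns K(n) and P(n).
    P_prev: P(n-1); r_now: r(n); r_prev: r(n-1), as stacked 1-D vectors."""
    L = P_prev.shape[0]
    # Eq. (A.10): propagate P using the previous reference vector.
    P = np.linalg.inv(np.linalg.inv(P_prev)
                      + (1 - eps_g ** -2) * np.outer(r_prev, r_prev)) + 1e-5 * np.eye(L)
    # Eq. (A.9): P~(n) from the current reference vector.
    P_t = np.linalg.inv(np.linalg.inv(P) - eps_g ** -2 * np.outer(r_now, r_now))
    # Eq. (A.8): gain K(n); the coefficient update (Eq. A.12) then uses K(n-1).
    K = (P_t @ r_now) / (1 + r_now @ P_t @ r_now)
    return K, P
```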
A.3 Distortion Metric

To evaluate one type of distortion using actual EEG (i.e., not in simulations), a new metric was introduced. The metric measures the likelihood of distorting non-OA components of the recorded EEG, and is defined as follows:

$$R'(n) = \begin{cases} 1 & \text{if}\ \ \sum_{c=1}^{C}\bigl(E(c,n) - Y(c,n)\bigr)^2 \Big/ \sum_{c=1}^{C} E(c,n)^2 > 1 \\ 0 & \text{otherwise} \end{cases}, \qquad \varepsilon = \frac{1}{N}\sum_{n=1}^{N} R'(n) \qquad (A.14)$$

where N is the number of samples, C the number of channels recorded, E is the measured scalp potential, and Y is the output of the OA removal algorithm, representing the EEG with OA removed. The numerator of R'(n) represents the power of the signal removed, while the denominator represents the power of the original EEG at each time sample. Thus ε measures how often the power of the “artifact” removed exceeds that of the original EEG.

A.4 EOG Estimation

In Chapter 5, experiments were conducted to assess the feasibility of using a linear combination of available EEG channels to estimate the EOG signal required by many OA removal algorithms, thus providing an alternative to directly measuring the EOG. Various combinations of EEG channels were explored. Specifically, as shown in the bottom half of Figure 5.1, the measured EEG were used to estimate the signals measured simultaneously at the EOG sites as follows (this is the same as Eq. 5.3, and similar to Eqs. 8.4 and 8.5, with EOG = Eo, EEGc# = Ee#, eEOG = Ẽo and EEG = Ee):

$$w = E_o E_e^{\#} \qquad (A.15)$$

$$\tilde{E}_o = w E_e \qquad (A.16)$$

where Ee# is the pseudo-inverse of Ee (the measured scalp potentials) and w (the weight matrix) was calculated using training data as described in Section 5.3. The calculated w was then used on the remaining test data to calculate Ẽo (the estimated EOG). Ee was an NC x NT matrix, where NC was the number of channels and NT was the number of samples. NT in this case was 77,131 (the total length, in samples, of the first trial of all 9 tasks concatenated together). In Tables 5.3 and 5.4, the results of using various combinations of EEG are reported: for “Full head”, NC = 55; for “Anterior”, NC = 33; for “Frontal”, NC = 17; for “Fp/AF”, NC = 8; and for “Fpz+Cz+AF7+AF8”, NC = 4.

Similarly, in Chapter 8, EOG signals were estimated from measured scalp EEG potentials to detect periods with OA, thus applying the correction only when eye movements or blinks are thought to occur. In this case, w was calculated using ensemble averages of ocular artifacts during calibration (see Section 8.2.3), where NT = 200 and NC = 3 (Fpz, AF7 and AF8 only).

The pseudo-inverse Ee# in both cases was calculated using the Moore-Penrose algorithm implemented in MATLAB (http://www.mathworks.com) by the pinv function. This function calculates the singular value decomposition of the transpose of Ee such that

$$E_e^T = U S V^T \qquad (A.17)$$

where U is an NT x NC array, S and V are NC x NC arrays, and the diagonal values of S contain the singular values of the decomposition. A tolerance tol is calculated as follows:

$$tol = \max(N_T, N_C) \cdot \lVert E_e \rVert \cdot \varepsilon_{sys} \qquad (A.18)$$

where εsys is the floating point precision of the computer system. Singular values lower than tol are discarded, and the reciprocals of the remaining singular values form the diagonal matrix S̃, which is used to calculate the pseudo-inverse as follows:

$$E_e^{\#} = U \tilde{S} V^T \qquad (A.19)$$

References

[1] P. He, G. Wilson, and C. Russell, “Removal of ocular artifacts from electro-encephalogram by adaptive filtering,” Medical and Biological Engineering and Computing, vol. 42, no. 3, pp. 407–412, 2004.

[2] S. Puthusserypady and T. Ratnarajah, “H∞ adaptive filters for eye blink artifact minimization from electroencephalogram,” IEEE Signal Processing Letters, vol. 12, no. 12, pp. 816–819, 2005.

Appendix B

Head Radius Calculation

For SS head models and a given dipole position, differences in the shell radii affect the dipole moment, but not the predicted potential at the outer surface. The potential due to a current dipole at the surface of a homogeneous volume conductor diminishes at a rate of 1/d² (d is the distance from the dipole to the point of measurement on the conductor's surface). Thus, for 2 shells (Fig. B.1):

$$L_1 = \alpha^2 L_2 \qquad (B.1)$$

where r1 and r2 are the shell radii and α = r2/r1. L1 and L2 are lead field matrices relating a dipole at position D to potentials at the surfaces of S1 and S2, respectively. If the measured potential is V, then the estimated dipole moments are:

$$m_1 = L_1^{\#} V \qquad (B.2)$$

$$m_2 = L_2^{\#} V = (L_1/\alpha^2)^{\#} V = \alpha^2 L_1^{\#} V \qquad (B.3)$$

where m1 and m2 are the moments for dipole D (V measured at S1 and S2, respectively), and L1# and L2# are the pseudo-inverses of L1 and L2, respectively. Therefore:

$$m_2 = \alpha^2 m_1 \qquad (B.4)$$

The predicted potentials E1 and E2 are therefore identical:

$$E_1 = L_1 m_1 = \alpha^2 L_2 (m_2/\alpha^2) = L_2 (L_2^{\#} V) = L_2 m_2 = E_2 \qquad (B.5)$$

Since the proposed method uses the predicted potential and not the dipole moment (or even the position directly), the actual head radius does not matter: a nominal head radius can be used.

[Figure B.1 diagram omitted.]

Figure B.1: Two spherical shells S1, S2 that differ only by their radii r1, r2.

Appendix C

Research Ethics Approval
