
UBC Theses and Dissertations



Energy efficient compression techniques for biological signals at the sensor node (Mahrous, Hesham, 2017)


Full Text

Energy Efficient Compression of Biological Signals at the Sensor Node

by

Hesham Mahrous

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in The Faculty of Graduate and Postdoctoral Studies (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

March 2017

© Hesham Mahrous 2017

Abstract

Compression of biological signals is rapidly gaining attention in research, especially for Wireless Body Area Network (WBAN) applications, because of their demonstrated potential in assisting physicians and patients and helping them achieve a more convenient lifestyle. This work focuses on some of the problems arising in the deployment of EEG signals in WBANs, where EEG data are collected and then transmitted using battery-powered devices. To extend the battery life, the energy consumed by acquiring, processing, and transmitting the data has to be minimized. Much work using Compressed Sensing (CS) has addressed this problem and has demonstrated power savings in WBANs for applications such as seizure detection. None of these studies, however, has demonstrated high-quality signal recovery at high compression ratios such as 10:1. Higher-quality signal recovery results in better performance for seizure detection. The ultimate goal is to achieve high-quality recovery at high compression rates, so as to extend the battery life without degrading the performance of WBAN applications. Two frameworks have been previously proposed to solve this problem. The first is a CS framework, which has an Analog to Digital Converter (ADC), a micro-controller, and a low-power transmitter at the sensor node. The second framework has an under-sampling circuit, an ADC, and a transmitter.

This thesis compares the results of the state-of-the-art CS algorithms of both frameworks and demonstrates their performance in automatic seizure detection.
Then it proposes two methods, one for each framework, that achieve high-quality signal recovery at high compression rates. These methods are demonstrated on three different datasets. The first method, BSBL-LNLD, is used for the CS framework. It exploits the linear and non-linear dependencies of multivariate EEG signals to recover the compressed signals. This method achieves a Normalized Mean Square Error (NMSE) as low as 0.06 at a 10:1 compression ratio. The second method solves the under-sampling CS framework using a Meyer wavelet over-complete dictionary and the analysis-prior formulation. This method achieves an NMSE as low as 0.18 at a 10:1 compression ratio. The proposed methods achieve superb recovery quality and significantly decreased energy consumption.

Preface

This thesis presents the research conducted by Hesham Mahrous, in collaboration with Professor Rabab K. Ward. I hereby declare that I am the first author of this thesis. Chapters 4 and 5 are based on work that I have conducted. This work was published in the following journal and conference:

1. Mahrous, H.; Ward, R. Block Sparse Compressed Sensing of Electroencephalogram (EEG) Signals by Exploiting Linear and Non-Linear Dependencies. Sensors 2016, 16, 201.

2. Mahrous, H.; Ward, R. A Low Power Dirac Basis Compressed Sensing Framework for EEG using a Meyer Wavelet Function Dictionary. IEEE CCECE 2016.

For both publications, I carried out the literature review, developed the frameworks, implemented them in software, carried out the simulations, analyzed the results, and wrote the manuscripts. Dr. Ward helped in formulating the research problem, supervised the direction of the research, and provided significant editorial comments and important suggestions for the organization of each manuscript.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Acronyms
Acknowledgments

1 Introduction
  1.1 Biological Signals in Wireless Body Area Networks (WBAN)
  1.2 Electroencephalogram (EEG) Signals
  1.3 Challenges of EEG in WBANs
  1.4 Aim of the Thesis
  1.5 Lossy and Lossless Compressions
  1.6 Compressed Sensing
  1.7 Seizure Detection
  1.8 Motivation
  1.9 Literature Review on Compressed Sensing
  1.10 Contribution of this Research
  1.11 Thesis Structure

2 Compressed Sensing
  2.1 Sparse Signals
  2.2 Measurement and Reconstruction Algorithms
  2.3 Incoherence of Sensing Matrix and Dictionary
  2.4 Block Sparse Recovery and Conventional Sparsity
  2.5 Block Sparse Bayesian Learning Framework

3 A Compressed Sensing Framework for EEG Signals
  3.1 Problem Description
  3.2 Method
    3.2.1 Transmission of Compressed EEG Signals at Sensor Node
    3.2.2 Recovery Algorithms at Server Node
    3.2.3 Power Consumption Evaluation
    3.2.4 Seizure Detection Technique
  3.3 Results
    3.3.1 Seizure Dataset
    3.3.2 Results using the State-of-the-Art Techniques
    3.3.3 Energy Savings Results at High Compression Rates
    3.3.4 Seizure Detection Performance Results After Recovery Using the Current State-of-the-Art Recovery Technique
  3.4 Discussion and Conclusion

4 Exploiting Linear and Non-Linear Dependency in BSBL Framework
  4.1 Problem Description
  4.2 Approach
  4.3 Method
    4.3.1 Epoching
    4.3.2 Channel Arrangement and Vectorization
    4.3.3 Compression Using Binary Sparse Matrix
    4.3.4 Modification of BSBL-BO Algorithm (BSBL-LNLD)
  4.4 Experiments and Results
    4.4.1 Datasets
    4.4.2 Dependence Measure of Intra and Inter EEG Blocks
    4.4.3 Error Metrics
    4.4.4 Compression/Recovery Results
    4.4.5 Seizure Detection After Signal Recovery Results
  4.5 Conclusion and Discussion

5 A Compressed Sensing Technique by Under-Sampling EEG Signals
  5.1 Problem Description
  5.2 Approach
  5.3 Method
    5.3.1 Meyer Wavelet for Over-Complete Dictionary Construction
    5.3.2 Analysis Prior Formulation
  5.4 Experiments and Results
    5.4.1 Datasets
    5.4.2 Dictionary Sparsity
    5.4.3 Compression/Recovery Results
    5.4.4 Power Consumption Estimation
    5.4.5 Seizure Detection After Signal Recovery Results
  5.5 Conclusion

6 Conclusions and Future Directions
  6.1 Conclusions
  6.2 Future Directions

Bibliography

List of Tables

1.1 Commercially available EEG headsets showing battery life
3.1 Reconstruction results of the state-of-the-art algorithms at high compression rates
3.2 Breakdown of power consumption results at different compression rates in milliwatts
3.3 Average seizure detection performance at different compression rates using BSBL-BO and STSBL-EM algorithms
4.1 NMSE of the different methods at different compression rates
4.2 Average seizure detection performance at 90% compression rates: signals are recovered by BSBL-BO and BSBL-LNLD
5.1 Mean and standard deviation of NMSE at different compression rates using 3 different datasets
5.2 Power consumption in uW at different compression rates

List of Figures

1.1 Block diagram for the EEG WBAN framework
3.1 A block diagram of a basic EEG CS framework showing the sensor node and the server node
3.2 A block diagram showing the training and prediction of seizure detection
4.1 (a) DCT of vec[X^T] of 23 channels, (b) DCT of vec[X] of 23 channels, (c) DCT of signal x_l of 23 seconds length, (d) DCT of signal x_l of one second length
4.2 Context block diagram of CS method
4.3 Block structure of correlated and uncorrelated signals in the DCT domain
4.4 Mean correlation and PLV in the blocks and between the blocks
4.5 NMSE vs number of channels of the proposed method at different compression % rates
4.6 Samples of recovered EEG signals at 90% compression rates using state-of-the-art recovery algorithms
5.1 A block diagram showing the sensor and the server node when solving the under-sampling problem
5.2 Band-pass filter bank over-complete dictionary generated using the Meyer Ψ(Ω) function
5.3 Absolute values of Meyer wavelet coefficients and Gabor coefficients sorted in descending order
5.4 Seizure detection event specificity and event sensitivity at different recovery NMSE
List of Acronyms

AWGN     Additive White Gaussian Noise
ADC      Analog to Digital Converter
BCI      Brain Computer Interface
BSBL     Block-Sparse Bayesian Learning
BSBL-BO  Block-Sparse Bayesian Learning - Bounded Optimization
BW       Bandwidth
BP       Blood Pressure
CR       Compression Ratio
CS       Compressed Sensing
DCT      Discrete Cosine Transform
DWT      Discrete Wavelet Transform
EEG      Electroencephalogram
ECG      Electrocardiography
EMG      Electromyography
IoT      Internet of Things
NMSE     Normalized Mean Square Error
PPG      Photoplethysmogram
MMV      Multiple Measurement Vectors
RIP      Restricted Isometry Property
SNR      Signal-to-Noise Ratio
SSIM     Structural SIMilarity
SMV      Single Measurement Vector
SVM      Support Vector Machine
STFT     Short Time Fourier Transform
STSBL    Spatio-Temporal Sparse Bayesian Learning
WBAN     Wireless Body Area Network
WPAN     Wireless Personal Area Network
WT       Wavelet Transform

Acknowledgments

I would like to thank and acknowledge my colleagues at the lab for their support and help, but first and foremost I would like to thank my supervisor Dr. Ward for her unconditional support and encouragement. I cannot express the gratitude and appreciation I have for her professionalism, support, body of knowledge, achievements, and most importantly her kindness and understanding. Dr. Ward helped in formulating the research problem, supervised the direction of the research, and provided significant editorial comments and important suggestions for the organization of each manuscript.

Chapter 1

Introduction

1.1 Biological Signals in Wireless Body Area Networks (WBAN)

A WBAN is a wireless network of wearable sensors attached to the human body to collect sensory data. These data include, but are not limited to, electroencephalogram (EEG), electrocardiography (ECG), pulse oximetry, and oxygen saturation readings. WBAN sensory devices can be embedded inside the body, or can be mounted outside the body in a fixed position. These sensors generate large amounts of data that can be used for applications in healthcare, fitness, information technology, and the Internet of Things (IoT).
Smart devices such as smartphones, tablets, and watches play an important role by acting as a data hub that provides a user interface to view and manage such applications [56], [52]. WBAN technology has emerged during the last two decades around the idea of using personal physiological data to develop user-friendly applications and high-quality health care systems. A WBAN system can use a Wireless Personal Area Network (WPAN) as a gateway to reach longer ranges and make physiological data accessible to web applications and services. Through gateway devices, it is possible to connect the wearable body sensors (worn on the human body) to the Internet. This way, the body sensor data can be used for online gaming applications, medical analysis and visualization applications, Brain Computer Interfaces (BCIs), and the IoT. Addressing all these aspects at once is a very challenging task. In this thesis we focus only on data transmission, mainly as it relates to medical applications such as seizure detection.

Statistical studies have shown that a significant number of seniors suffer from chronic diseases, and that these are increasing in younger people due to unhealthy eating habits and lifestyles [46]. As a result, more financial resources, estimated in the billions of dollars, must be allocated to the health care system. Chronic diseases are not easily discovered, which requires constant medical check-ups and supervision. One of the reasons health care suffers from scalability issues is that chronic disease patients require continuous monitoring of their condition. Traditional health care cannot solve this problem, as it requires a one-to-one relationship between the caregiver and the patient. One-to-one medical attention is costly; therefore, cost-effective solutions are required.

WBAN medical applications offer such a solution and are gaining recognition in the health care industry [54], [68].
They allow patients to continuously monitor themselves at home via applications and real-time monitoring systems, while caregivers can be at a remote clinic. By using physiological sensors attached to the body, such as electroencephalography (EEG), electrocardiography (ECG), photoplethysmogram (PPG), and blood pressure (BP) sensors, WBANs allow patients to check on their vital signs and consult a physician only when needed. WBANs are cost-effective and scalable, which makes them practicable for health care solutions.

Such applications require a reliable system that is minimally obtrusive and allows the patient to move and walk freely, which is why WBANs can be valuable. Each of the above-mentioned physiological signals is associated with its own unique problems in WBANs. In this thesis, we focus only on the problems associated with EEG signals in WBAN applications. Much work has been carried out to address the problems of the other types of signals, but that is not the area of interest of this thesis.

1.2 Electroencephalogram (EEG) Signals

Electrical brain activity is a main component of WBANs and requires extensive study. EEG signals are recorded using non-invasive wireless sensors located on a patient's scalp and are then used for medical applications and BCIs. EEG signals are used to detect and predict several medical conditions, such as epileptic seizures. Seizure detection by a WBAN has advantages in health care because it is a cost-effective solution. Seizures occur relatively rarely; hence seizure detection requires constant monitoring over a long period of time. This is why seizure detection is resource-intensive when carried out in a health institution. Health care applications that utilize EEGs and WBANs give patients a way to do the monitoring themselves and then consult a physician when needed.
This also enables physicians to automatically monitor patients and assists them in making medical decisions during the occurrence of epileptic events such as seizures. Other common uses of EEG signals include sleep pattern studies, and the diagnosis and treatment of strokes, infections (e.g. encephalitis, cerebral abscess), cerebral tumors, and diseases such as Alzheimer's [55].

1.3 Challenges of EEG in WBANs

The hardware setup of an EEG-based WBAN is shown in Figure 1.1. The commercially available wireless EEG sensors are assembled on a headset, and the arrangement of the sensors usually follows an international convention (e.g. the international 10-20 system). The number of sensors (electrodes) depends on the application; some applications require a few sensors and others require a few hundred. All sensors are wired to an Analog to Digital Converter (ADC), and then a central microprocessor unit processes the data. After the ADC and the data processing stage, the processed data is transmitted via a low-power radio such as Bluetooth or Zigbee. The combination of the EEG sensors, the ADC, and the microprocessor unit is referred to as the sensor node. This sensor node is battery-powered. The sensor node transmits the EEG data to the server node, which is placed in close proximity or routed to a cloud server via a WPAN. The server node contains a low-power radio receiver and a computing resource (to perform storage, processing, and heavy computation, depending on the application). It is assumed that there are no constraints on the computational power at the server node. There are commercially available, off-the-shelf EEG-based WBAN products mainly used for research purposes.
These commercial EEG headsets are summarized in Table 1.1.

Figure 1.1: Block diagram for the EEG WBAN framework

Table 1.1: Commercially available EEG headsets showing battery life

Headset brand     EEG channels   Battery life   Reference
Emotiv            14             12 hours       [24]
NeuroSky          1              8 hours        [51]
Interaxon Muse    4              5 hours        [49]
Imec              8              22 hours       [37]
B-Alert X24       24             8 hours        [48]
Quasar DSI        23             12 hours       [1]
Enobio 32         32             14 hours       [50]

The wireless EEG headsets listed in the table suffer from two drawbacks. First, the battery life is limited, which makes them impractical and inconvenient for the patient. If the battery life of a headset is at most 22 hours, the patient is not able to continuously monitor the EEG signals, and if the battery has to be changed or recharged every day, this is inconvenient. Second, the number of EEG channels is fairly low, whereas the 10-20 International System is usually desired for EEG applications. The reason is that the greater the number of EEG channels, the higher the spatial resolution, which could improve current applications or even unlock new ones. The number of EEG channels is intrinsically linked to the battery life: using more channels decreases the battery life due to the additional work carried out on the sensor node, such as additional sensing and additional wireless data transmission. In addition, the number of data packets that can be transmitted per unit time (every second) to continuously monitor patients is limited. These limitations restrict the maximum number of channels that can be used on the headset. Solutions that address these two drawbacks are highly desired for providing a more practical commercial product to customers and patients.

The battery life available at the sensor node in a WBAN is limited. The battery is required to power the circuits that acquire and digitize the data. It also powers the micro-controller that carries out computations on the sensor node, and the wireless radio transmitter.
There is little that can be done to minimize the sensing energy. The sensing stage is the acquisition of raw data from the sensor; the acquired data are then converted from analog to digital form. For the computations carried out by the micro-controller on the sensor node, energy savings are possible: energy-efficient algorithms can be used to achieve low computational complexity and hence save battery life. Also, minimizing the amount of data transmitted by the radio transmitter will minimize the power consumed. To minimize the amount of data transmitted from the sensor node, the acquired digitized signals are compressed before transmission using low-computational-complexity algorithms. The higher the compression ratio, the lower the power consumed by the radio transmitter.

In conclusion, reducing the overall power consumption of the sensor node can be achieved by reducing the computational power and the amount of data transmitted by the radio. Therefore, finding a compression solution for EEG signals that does not require much computational power is essential. Conventionally, EEG signals are collected by the sensors and sampled above the Nyquist rate. Then, compression algorithms are applied directly to the sampled data, at the sensor node, prior to radio transmission. However, traditional compression algorithms consume high computational power at the sensor node, which is not desired for WBAN applications.

1.4 Aim of the Thesis

The aim is to find a compression framework suitable for EEGs used in tele-monitoring and seizure detection applications, in the context of WBANs. The goal is to find a solution that extends the battery life of an EEG-based WBAN and to prove its reliability for use in applications such as seizure detection.

1.5 Lossy and Lossless Compressions

Much work related to the compression of EEG signals has been proposed. Generally, compression algorithms are categorized as either lossless or lossy.
Lossless algorithms recover the original data exactly from its compressed form, but they usually require high computational complexity and achieve low compression ratios [63]. On the other hand, lossy algorithms recover the original signals within an acceptable error margin. Lossy algorithms provide higher compression ratios and tend to be less complex. Since WBANs require high compression ratios and low-complexity algorithms, lossy algorithms are used in this work. A seizure detection application is applied after the compression to demonstrate the practicality of lossy algorithms.

Depending on the application, utilizing lossy algorithms is possible because exact recovery is not necessary, as long as a small reconstruction error can be tolerated for the specific application. Several lossy algorithms mentioned in the literature offer interesting solutions for EEG signals. The study in [10] proposed a compression technique that discards the Daubechies-8 wavelet coefficients with the lowest absolute values; the remaining high coefficients are then uniformly quantized. In [31] a classified signature and envelope vector sets approach is proposed. This method is based on the generation of classified signature and envelope vector sets (CSEVS), which employs an effective k-means clustering algorithm. EEG signals are modeled by multiplying only three factors, called the classified signature vector, the classified envelope vector, and the gain coefficient (GC). The normalized mean square error of the signals reconstructed by this technique is relatively higher than that of other, more successful techniques. In [34] and [35], a wavelet compression technique is proposed that is similar to [10] but uses CDF 9/7 wavelet coefficients instead of Daubechies-8 wavelet coefficients.
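The transform-threshold-quantize idea shared by these wavelet approaches can be sketched as follows. This is an illustrative outline, not the implementation of [10], [34], or [35]: to keep the sketch self-contained, a multi-level Haar transform stands in for the Daubechies-8 and CDF 9/7 filters, and the retained fraction and bit depth are assumed parameters.

```python
import numpy as np

def haar_fwd(x, levels):
    """Multi-level Haar DWT (a self-contained stand-in for db8/CDF 9/7 filters)."""
    c, n = x.astype(float).copy(), len(x)
    for _ in range(levels):
        a = (c[0:n:2] + c[1:n:2]) / np.sqrt(2)   # approximation (low-pass)
        d = (c[0:n:2] - c[1:n:2]) / np.sqrt(2)   # detail (high-pass)
        c[: n // 2], c[n // 2 : n] = a, d
        n //= 2
    return c

def haar_inv(c, levels):
    c, n = c.astype(float).copy(), len(c) >> levels
    for _ in range(levels):
        a, d = c[:n].copy(), c[n : 2 * n].copy()
        c[0 : 2 * n : 2] = (a + d) / np.sqrt(2)
        c[1 : 2 * n : 2] = (a - d) / np.sqrt(2)
        n *= 2
    return c

def lossy_compress(x, keep=0.10, nbits=8, levels=5):
    """Discard all but the `keep` largest-magnitude coefficients, then quantize."""
    c = haar_fwd(x, levels)
    k = max(1, int(keep * len(c)))
    thr = np.sort(np.abs(c))[-k]                 # magnitude of the k-th largest coeff
    c[np.abs(c) < thr] = 0.0                     # discard the smallest coefficients
    scale = np.abs(c).max() / (2 ** (nbits - 1) - 1)
    q = np.round(c / scale).astype(np.int16)     # uniform quantization
    return q, scale

def decompress(q, scale, levels=5):
    return haar_inv(q * scale, levels)

# Demo on a smooth synthetic signal (a stand-in for a real EEG epoch).
t = np.linspace(0, 1, 256, endpoint=False)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
q, scale = lossy_compress(x)
x_hat = decompress(q, scale)
nmse = np.sum((x - x_hat) ** 2) / np.sum(x ** 2)
```

Only the nonzero quantized coefficients (and their positions) would need to be stored or transmitted. The thesis's point, developed next, is that even a simple scheme like this is too costly to run on the sensor node's micro-controller.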
For these three wavelet compression techniques, the highest 10 percent of the wavelet coefficients, by absolute value, are sufficient to reconstruct the original signal with a low reconstruction error. To the best of our knowledge, this is the best state-of-the-art lossy compression technique, achieving very low normalized mean square error at high compression rates. However, compression using wavelets cannot be applied in WBANs because the wavelet transform is too complex to apply at the sensor node. For this reason, other compression techniques must be used at the sensor node.

1.6 Compressed Sensing

Much recent research has studied the use of compressed sensing (CS) as an alternative compression technique for ECG, EEG, and EMG data in the context of WBANs [46]. After the raw data are acquired and digitized, the digitized data are sampled by CS by taking random linear projections of the data (implemented by performing a matrix multiplication) on the sensor node. With the correct choice of this matrix, the compression is done at low power consumption. The reconstruction of the data is computationally complex and is done at the server node [9].

In CS, the signal has to be sparsely represented; usually sparsity is achieved by using a dictionary matrix. A dictionary matrix is a matrix that transforms the data to another domain (e.g. FFT, DWT, Gabor). The smaller the number of random linear projections sufficient to recover the signal exactly, the longer the battery life achieved for the sensor node. To reconstruct the signal, greedy algorithms or optimization algorithms are usually used. Chapter 2 contains more details about CS and its underlying theory.

1.7 Seizure Detection

Epilepsy is a neurological disorder characterized by sudden discharges of electricity in the brain; around 50 million people worldwide suffer from it [13]. Such an abnormal discharge is called a seizure, which occurs without warning and for no obvious reason.
This is the reason why seizures are unpredictable. The detection or prediction of a seizure can automatically prompt immediate medical assistance, or even the avoidance of a seizure by means of a closed-loop application. Using scalp EEG (EEG data collected using non-invasive sensors), much automatic seizure detection/prediction research showing promising detection performance has been developed. The results of such work open up new possibilities to manage and control epilepsy, for example, to trigger a warning signal to remote health care providers during seizure events, or to apply closed-loop electrical stimulation to intervene in a seizure event and prevent it [38]. Since seizures are unpredictable, continuous patient monitoring is required at all times. This is the reason an EEG-based WBAN can be a potential solution for seizure detection or prevention.

1.8 Motivation

The motivation behind this thesis is to develop CS recovery techniques that achieve a high compression ratio and hence lower the power consumption in WBANs. The reconstruction of a highly compressed signal is a challenging and computationally intensive problem. Usually there is no limitation on the reconstruction side, the server node, because there is no power limitation there. However, if the compression ratio at the sensor node is so high that the reconstruction quality is not sufficient for the application, then the compression ratio must be lowered. Hence, low compression ratios are directly associated with high power consumption at the sensor node. The goal is to find a CS technique that achieves high-quality reconstruction at high compression rates and to prove its reliability in a seizure detection application.

1.9 Literature Review on Compressed Sensing

The work presented in [54] is the first study that applied CS to EEG compression.
The Multiple Measurement Vectors (MMV) approach, which jointly compresses the signals from all channels and reconstructs them simultaneously, is shown to obtain a reasonable reconstruction error at high compression ratios. However, the experiment is set up in such a way that the EEG signals are acquired from repeated trials: a patient was asked to repeat the same task many times, with one EEG channel recorded each time. This setup increases the coherence of the signals with each other. Usually patients are not prompted to act in a certain way or to repeat the same task multiple times, and for this reason this setting is not desired in WBAN applications.

Since then, many other CS methods have been applied to reconstruct single EEG channels [68], [54], [2], [46], [25]. The work in [54] and [2] proposes a CS framework for EEG signals. This framework is compared with those using the following dictionaries: a Gabor dictionary, a Mexican hat (second derivative of the Gaussian function), a linear spline, a cubic spline, a linear B-spline, and a cubic B-spline. In [36], ECG and EEG signals were represented using a linear B-spline wavelet dictionary and a cubic B-spline matrix and reconstructed using Bayesian Compressive Sensing; the dictionary is randomly sampled and a modified banded Toeplitz dictionary matrix is formed. Another recent approach applied independent component analysis to pre-process the EEG signals, prior to applying compressed sensing, so as to improve their sparse representation [47]. Zhang et al. have recently proposed reconstructing EEG signals using a Block Sparse Bayesian Learning Bounded Optimization (BSBL-BO) framework [68]. BSBL-BO reconstructs EEG signals without using a sparsifying dictionary matrix such as a Gabor dictionary.
It is empirically shown to be highly effective in reconstructing EEG signals, as long as a low reconstruction error is tolerated. In [25], a compressive sensing framework is proposed where inter-channel redundancy removal is applied at the sensor after the sensing stage.

The above studies have addressed the Single Measurement Vector (SMV) CS case, i.e. single EEG channels are compressed and decompressed channel by channel. However, the simultaneous reconstruction of the CS signals from their multi-channel measurements (referred to as the MMV problem) has been shown to recover the signals more accurately than applying SMV solutions to each channel separately [14], [66], [67].

In [14] the MFOCUSS algorithm extended the diversity-minimization FOCUSS algorithm from an SMV to an MMV algorithm. MFOCUSS is an iterative re-weighted regularized algorithm that solves an lp-norm minimization, where 0 < p ≤ 1, for MMV. In [66], MFOCUSS was modified to tMFOCUSS by replacing the regularization with a Mahalanobis distance regularization, to capture the temporal correlation of the MMV signals. A similar idea is used in [14] to capture the temporal correlation, but in this case a Sparse Bayesian framework for MMV signals is employed. In [67] the STSBL-EM algorithm is proposed to solve the MMV model by exploiting the temporal and spatial correlations of the signals. It achieves somewhat lower reconstruction errors compared to applying BSBL-BO on each channel. In this work we propose methods to solve both the MMV problem and the SMV problem.

1.10 Contribution of this Research

The work mentioned above in the literature review on compressed sensing investigated these questions:

1. Which energy saving techniques can be realized in CS for EEG WBAN applications?

2. Is it possible to exploit both the EEG temporal correlations (intra-correlations) and spatial correlations (inter-correlations between channels) to increase the compression performance of CS?

3.
Does block sparsity achieve better compression performance than conventional sparsity in EEG compression?

4. What is the impact of CS on the performance of a practical application that uses EEG signals?

The main contributions of this thesis are:

• Proposing novel CS techniques that take full advantage of the inherent structure present in EEG signals (both temporal and spatial, linear and non-linear dependence), achieving high-quality recovery at high compression rates.

• Proposing a novel CS technique that solves the under-sampling problem, achieving acceptable recovery at high compression rates. Solving the under-sampling problem makes it possible in a WBAN to replace the micro-controller on the sensor node with a low-power random sampling circuit, thus saving more power more reliably.

• Comparing CS frameworks with other state-of-the-art compression techniques for EEG compression used in WBANs.

• Applying a CS technique to compress signals in the context of a seizure detection application and evaluating its impact on the performance of the system.

1.11 Thesis Structure

This thesis is organized as follows. Chapter 2 gives an overview of the theory underlying CS and of state-of-the-art reconstruction algorithms such as STSBL and BSBL-BO. Chapter 3 shows the compression and recovery results of these techniques. It also shows the battery life performance and the seizure detection results when these techniques are used to recover the compressed signals. Chapters 4 and 5 present different versions of CS frameworks for EEG compression in the context of WBANs. Chapter 4 introduces a reconstruction algorithm for jointly compressing multichannel EEG data. It takes advantage of the linear and non-linear dependence structure of the EEG signals, spatially and temporally. Chapter 5 proposes a technique that solves the under-sampling problem and achieves high reconstruction quality at high compression rates.
Solving the under-sampling problem for EEG signals reduces the power consumption in WBAN sensor nodes, since the micro-controller on the sensor node can be replaced by a random sampling circuit. Chapters 4 and 5 both apply their respective CS techniques to the seizure detection problem, show the results of the power consumption model for both CS techniques, and compare them with the baseline (no-compression) results. Finally, Chapter 6 concludes this thesis by summarizing the contributions proposed in Chapters 4 and 5 and by offering possible extensions of our work and paths to explore.

Chapter 2

Compressed Sensing

Since this thesis revolves around the basics of compressed sensing (CS), this chapter discusses the key theoretical concepts of CS. A large body of literature has introduced the theory of compressed sensing, and it has been applied in many disciplines such as MRI imaging, remote sensing, seismology, and many more. This chapter does not aim to cover the entire compressed sensing literature; since this work focuses on applying compressed sensing to a practical problem, only the basics that will allow the reader to understand how CS works are presented. This chapter is structured as follows. Section 2.1 discusses the meaning of signal sparsity and its importance in compressed sensing. Section 2.2 discusses how the EEG signals are measured using the measurement matrix, and the conventional reconstruction algorithms used to decompress signals in CS. Section 2.3 discusses the necessary conditions on the measurement matrix and the dictionary matrix so that reconstruction is possible. Section 2.4 discusses the difference between block sparsity and the conventional sparsity mentioned in Section 2.1, as well as the resulting difference in reconstruction quality. Finally, Section 2.5 briefly explains the BSBL framework and the algorithms derived from it.

2.1 Sparse Signals

Almost every signal has a sparse representation in a transform domain.
Much work on EEG signals has tried to find such a sparse domain, using transforms such as the Gabor transform, the second derivative of the Gaussian function, the linear spline, the DCT, wavelets, the cubic spline, the linear B-spline, and the cubic B-spline [54], [2], [36]. Greedy and optimization-based recovery algorithms are usually used to solve a CS compression problem, which requires the signal to have a sparse representation in a transform domain. The CS recovery algorithms exploit the fact that most signals have a sparse representation in some dictionary. The dictionary is denoted by D = [D_1, D_2, \ldots, D_K] \in \mathbb{R}^{N \times K}. Given a non-sparse one-dimensional signal x of length N, x can be represented in a sparse form written as:

x = Dz = \sum_{i=1}^{K} z_i D_i \qquad (2.1)

The sparse vector z has size K \times 1 and contains a large number of zero (or small) coefficients, so the non-sparse signal x can be obtained from z using a few dictionary columns D_i of the dictionary matrix D. If the number of non-zero elements of z is S, then z is said to be the S-sparse representation of x in the given dictionary D. D is also called the synthesis dictionary.

2.2 Measurement and Reconstruction Algorithms

The compression step in compressed sensing is performed by the sensing matrix A. Assume the number of EEG channels is L. For the l-th channel, the corresponding CS model, denoted as the single measurement vector (SMV) model, is expressed as

y_l = A x_l \qquad (2.2)

In Equation 2.2, the vector x_l \in \mathbb{R}^N is the raw signal of the l-th channel, converted from analog to digital form, y_l \in \mathbb{R}^M is the CS compressed signal, and A \in \mathbb{R}^{M \times N} is the measurement matrix (also called the sensing matrix). N is the number of samples of x_l, and M is the number of projected samples after the reduction/compression of x_l. Equation 2.2 makes it possible to acquire only M samples instead of N samples, hence the data is compressed, where M << N and M > S.
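As a concrete sketch of the compression step in Equation 2.2 (the sizes and the Gaussian measurement matrix below are illustrative assumptions for demonstration only, not the choices adopted later in this thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 256, 64                    # Nyquist-rate samples vs. compressed samples (4:1)
x = rng.standard_normal(N)        # stand-in for one digitized EEG channel

# Random projection: a dense Gaussian sensing matrix, purely illustrative.
A = rng.standard_normal((M, N)) / np.sqrt(M)

y = A @ x                         # the compressed measurement vector y_l = A x_l

print(x.shape, y.shape)           # (256,) (64,)
```

Only the M-dimensional vector y needs to be transmitted; recovering x from y is the job of the reconstruction algorithms discussed next.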
As mentioned above, S is the smallest number of non-zero elements of a vector in D that represents the signal x_l in a sparse form. The success of recovery relies on the key assumption that x_l is sparse. When x_l has a sparse representation in a certain domain or dictionary D, then x_l can be expressed as x_l = D z_l. D can be an orthogonal discrete cosine transform (DCT) or an orthogonal discrete wavelet transform (DWT) matrix. Therefore, Equation 2.2 can be rewritten as:

y_l = A D z_l \qquad (2.3)

When recovering x_l from the vector y_l, there are an infinite number of solutions for x_l, or for its sparse form z_l. Since the signal we intend to recover is sparse, the most plausible solution is assumed to be the sparsest one. This corresponds to solving the following L0 optimization problem:

\min_{z_l} \|z_l\|_0 \quad \text{subject to} \quad y_l = A D z_l \qquad (2.4)

Unfortunately, Equation 2.4 is an NP-hard problem, and therefore it cannot be used as a practical solution: solving it requires an exhaustive search over all possible combinations of the columns of D, whose number grows combinatorially with N. Fortunately, this problem can be relaxed into an L1 optimization problem, which is more practical due to its convexity. Equation 2.4 can thus be rephrased as:

\min_{z_l} \|z_l\|_1 \quad \text{subject to} \quad y_l = A D z_l \qquad (2.5)

Equation 2.5 is called the synthesis prior formulation. This problem is similar to linear programming, and many practical solvers exist that can achieve perfect recovery even when the number of measurements is small (i.e., M << N) [41]. In the synthesis prior formulation, the objective is to solve for the sparse coefficients z_l. This formulation has attracted much attention in the field of CS and has been studied extensively. An alternative formulation takes the opposite view: the objective is to solve for x_l directly, using a matrix H such that H x_l is a sparse representation.
The dictionary D is referred to as the synthesis dictionary, while H is referred to as the analysis dictionary; both dictionaries achieve sparsity. When H is used to solve the CS problem, the formulation is called the analysis prior formulation, given as:

\min_{x_l} \|H x_l\|_1 \quad \text{subject to} \quad y_l = A x_l \qquad (2.6)

The analysis and synthesis prior formulations are equivalent only when their sparsifying dictionaries are orthogonal (i.e., D = H^{-1}). The synthesis prior formulation approach is only possible if D is either orthogonal or a tight frame; otherwise, the analysis prior formulation is used to solve the problem.

2.3 Incoherence of Sensing Matrix and Dictionary

Assuming we are solving a synthesis prior formulation problem, perfect recovery is possible if the Restricted Isometry Property (RIP) of AD is satisfied. The RIP is defined as follows:

(1 - \delta_S)\|x_l\|_2 \le \|A D z_l\|_2 \le (1 + \delta_S)\|x_l\|_2 \qquad (2.7)

\delta_S is referred to as the isometry constant of the matrix A, with \delta_S < 1. The RIP guarantees that the solution (signal energy) is bounded from above and below. The smaller the value of \delta_S, the more signal energy is captured and the more stable the inversion of AD becomes [9].

As long as the RIP is satisfied, perfect reconstruction is achieved at a minimum sampling rate given by M \ge \mu^2(A, D) \cdot S \cdot \log(N) [32], [71]. This expression gives the minimum value of M that can be chosen so that perfect reconstruction is achieved, where S is the number of non-zero elements in z_l and \mu is the coherence function between the two matrices A and D. The minimum value that can be chosen for M is therefore dictated by S and by \mu^2(A, D). To achieve maximum incoherence, both matrices A and D should be selected carefully, so that D also achieves a minimum S in sparsifying x_l.
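The coherence \mu(A, D) can be computed directly. The sketch below uses the standard definition, \sqrt{N} times the largest inner product between a unit-norm row of A and a unit-norm column of D; the Gaussian sensing matrix, the orthonormal DCT dictionary, and the sizes are all illustrative choices, not the thesis's exact matrices:

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(1)
N, M = 128, 32

# Orthonormal DCT dictionary: column j is the DCT-II transform of e_j.
D = dct(np.eye(N), norm='ortho', axis=0)

# Gaussian sensing matrix with unit-norm rows.
A = rng.standard_normal((M, N))
A /= np.linalg.norm(A, axis=1, keepdims=True)

# mu(A, D) = sqrt(N) * max |<row of A, column of D>|; it lies in [1, sqrt(N)],
# and smaller values (closer to 1) mean less coherence, hence fewer
# measurements M are needed.
mu = np.sqrt(N) * np.max(np.abs(A @ D))
print(mu)
```

A smaller \mu allows a smaller M in the bound M \ge \mu^2 S \log(N), which is why the pair (A, D) is chosen to be as incoherent as possible.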
Much work has been done by previous researchers to find the optimal D and A that achieve a minimum M.

2.4 Block Sparse Recovery and Conventional Sparsity

It has been shown that better reconstruction can be obtained by exploiting block sparsity (assuming the data vector is block sparse) than by only exploiting sparsity in the conventional sense [68], [22], [23]. The conventional sparsity solution method only assumes that x_l has at most S non-zero elements in a sparse domain. However, it does not exploit any further structure that the signal may have. The non-zero components can appear anywhere in x_l; however, there are cases in which the non-zero values form blocks [23].

2.5 Block Sparse Bayesian Learning Framework

BSBL-BO is a CS framework [71] that has recently been proposed for solving the Single Measurement Vector (SMV) model (i.e., the model 2.2 with L = 1). While some CS algorithms depend only on the sparsity of the signal, BSBL-BO exploits the block sparsity in the signal, provided the signal is block sparse [68]. That is, BSBL-BO assumes that the vector x_l consists of g non-overlapping blocks and that some of these blocks are all zeros. As mentioned in [68] and [71], the block size can be arbitrary and the block partition does not need to be consistent with the true block structure of the signal.

Raw EEG signals generally do not have clear block structures in the time domain. Therefore, BSBL-BO is applied to the DCT coefficients of the signals [68]. By using the DCT dictionary matrix, an EEG signal is expressed as a DCT coefficient vector whose significant nonzero coefficients concentrate at the lower frequencies (from the "energy compaction" property of the DCT). The coefficient vector can thus be viewed as a concatenation of one or more nonzero blocks followed by a number of zero blocks.
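This energy-compaction behavior can be illustrated on a synthetic smooth signal (the thesis uses real EEG; the cosine mixture below is only a stand-in with arbitrary frequencies):

```python
import numpy as np
from scipy.fft import dct

# A smooth, band-limited test signal standing in for one EEG epoch.
fs, N = 256, 256
t = np.arange(N) / fs
x = np.cos(2 * np.pi * 5 * t) + 0.5 * np.cos(2 * np.pi * 11 * t)

z = dct(x, norm='ortho')

# Nearly all of the energy lands in the low-frequency coefficients:
# one significant nonzero block followed by (almost) zero blocks.
low_frac = np.sum(z[:32] ** 2) / np.sum(z ** 2)
print(f"energy in first 32 of {N} DCT coefficients: {low_frac:.4f}")
```

It is exactly this "one nonzero block, then zero blocks" shape that a block-sparse prior can exploit better than a prior that only counts nonzero entries.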
Therefore, BSBL-BO can exploit this specific block sparsity in the DCT coefficients by first recovering the DCT coefficients and then reconstructing the original EEG signal.

The BSBL-BO algorithm is derived by applying Type II maximum likelihood estimation [71]; the posterior probability is given as:

p(x \mid y, \lambda, \{\gamma_i, \beta_i\}_{i=1}^{g}) = \mathcal{N}(\mu_x, \Sigma_x) \qquad (2.8)

The hyperparameters \lambda, \{\gamma_i, \beta_i\}_{i=1}^{g} represent the noise (\lambda), the block sparsity structure (\gamma_i), and the intra-correlation structure (\beta_i) of the g non-overlapping blocks. Let \Sigma_0 be the block-diagonal matrix \Sigma_0 = \mathrm{diag}(\{\gamma_i \beta_i\}_{i=1}^{g}). After estimating the hyperparameters, the reconstructed signal \hat{x} is obtained by minimizing the negative log-likelihood of 2.8. The resulting estimate of the reconstructed signal is given as

\hat{x} = \mu_x = \Sigma_0 A^T (\lambda I + A \Sigma_0 A^T)^{-1} y

The hyperparameters \lambda and \gamma_i are derived based on the bound-optimization estimation in [71]. \lambda is a scalar that helps the algorithm perform in noisy conditions. In noiseless cases, \lambda is fixed to a small value, e.g., \lambda = 10^{-10}, but when the SNR is less than 15 dB, \lambda is estimated using the bound-optimization technique given in [71]. This yields

\lambda \leftarrow \frac{\|y - A\mu_x\|_2^2 + \sum_{i=1}^{g} \mathrm{Tr}\!\left(\Sigma_{x_i} (A^i)^T A^i\right)}{M}

The hyperparameter \gamma_i is a nonnegative scalar that controls the block sparsity of the signal. When \gamma_i = 0, the corresponding i-th block of \hat{x} is equal to 0. This hyperparameter is updated as

\gamma_i \leftarrow \sqrt{\frac{x_i^T \beta_i^{-1} x_i}{\mathrm{Tr}\!\left((A^i)^T (\Sigma_y^*)^{-1} A^i \beta_i\right)}}

The other hyperparameter, \beta_i \in \mathbb{R}^{d_i \times d_i}, is a positive definite matrix that captures the intra-correlation structure of the i-th block, where d_i is the number of samples in the i-th block. The intra-correlation is useful because it indicates a predictive relationship that can be exploited. Equation 2.9 below is a covariance matrix derived by minimizing the negative log-likelihood of the posterior probability 2.8. \beta_i is further modified to obtain \hat{\beta}_i (Equation 2.10) by constraining it to be a positive definite intra-correlation matrix.
\hat{\beta}_i is formed using a first-order Auto-Regressive (AR) process, which is sufficient to model the intra-block correlation [71]. The resulting \hat{\beta}_i is a Toeplitz matrix selected to represent the intra-block correlation matrix \beta_i:

\beta_i = \frac{1}{g} \sum_{i=1}^{g} \frac{\Sigma_{x_i} + \mu_{x_i}(\mu_{x_i})^T}{\gamma_i} \qquad (2.9)

The first-order auto-regressive coefficient is \bar{r} = \bar{m}_1 / \bar{m}_0, where \bar{m}_0 and \bar{m}_1 are the averages of the elements along the main diagonal and the first sub-diagonal, respectively, of the estimated covariance matrix \beta_i.

\hat{\beta}_i = \mathrm{Toeplitz}\!\left([1, \bar{r}, \ldots, \bar{r}^{d_i - 1}]\right) =
\begin{bmatrix}
1 & \cdots & \bar{r}^{d_i-1} \\
\vdots & \ddots & \vdots \\
\bar{r}^{d_i-1} & \cdots & 1
\end{bmatrix} \qquad (2.10)

In BSBL-BO, \hat{\beta}_i captures the intra-block correlation structure by converting the estimated covariance matrix \beta_i into a bounded first-order Toeplitz matrix. The intra-block correlation is a measure of linear dependency. In Chapter 4, we modify BSBL-BO so that it can exploit both the linear dependency structure and the non-linear dependency structure in EEG signals.

Chapter 3

A Compressed Sensing Framework for EEG Signals

3.1 Problem Description

As mentioned in the introduction, compressed sensing (CS) has been drawing much attention in EEG-based WBANs, as it imposes only a fast and simple computational load at the sensor node, hence saving battery energy. The main operation carried out at the micro-controller of the sensor node involves taking random projections of the EEG signals. While much work has studied CS compression for EEG signals, little consideration has been paid to studying its effectiveness from a tele-medicine perspective. Additionally, there has been much focus on saving battery energy at the sensor node, and little focus on improving the recovery error at high compression rates for EEG tele-medicine applications.

The questions we intend to answer in this chapter are as follows:

• How important is it to achieve high-quality recovery?
This concerns recovery obtained at high compression rates in EEG-based WBANs.

• What is the impact of the recovery error on tele-medicine applications such as seizure detection?

To address these questions, in this chapter we use the CS framework in Figure 3.1 to test the state-of-the-art techniques for EEG compressed sensing [68], [67] that are suitable for low power consumption. Using the wireless sensor network simulator Avrora, we model the power consumption of these techniques. The signals recovered by these algorithms are used in the seizure detection application to study the trade-off between power savings and seizure detection performance.

Figure 3.1: A block diagram of a basic EEG CS framework showing the sensor node and the server node

Figure 3.1 shows a block diagram of the sensor node and the server node of the CS framework proposed in [68], [67].

This chapter is organized as follows. Section 3.2 presents the different building blocks of the framework, including the sensor node and the algorithms used on the server node. This section also explains the method used to evaluate the power consumption at the sensor node. Finally, the seizure detection method that uses the recovered data is presented. Section 3.3 shows the results obtained from the recovery algorithms at high compression ratios such as 10:1, the evaluation results of the power consumption model, and the seizure detection accuracy. Finally, conclusions and a discussion are included in Section 3.4.

3.2 Method

Below we discuss the compressed sensing framework adopted from the state-of-the-art techniques presented in [68], [67]. We then briefly discuss the recovery algorithms used to solve the compressed sensing problem. The power consumption evaluation method, based on the wireless sensor network simulator Avrora, is discussed briefly.
Finally, the seizure detection technique is presented.

3.2.1 Transmission of Compressed EEG Signals at the Sensor Node

The idea of CS is to exploit the redundancy in the signal, its repeatable sparsity patterns and its sparse representation, using random sampling represented by the matrix A. The signal is reconstructed from the compressed data, which is sampled below the Nyquist rate. There are two types of compressed sensing: one performs random sampling in the analog domain, while the other samples the signal at the Nyquist rate and then applies random sampling [9].

Analog and Digital CS

Analog CS samples the signal randomly so that the resulting sampled signal is already compressed below the Nyquist rate, which allows a significant reduction in the power consumption of the analog-to-digital conversion (ADC) module and of the micro-controller on the sensor node [62], [65]. In the traditional (digital) approach, the ADC samples the signal at the Nyquist rate and CS is performed after the ADC stage by taking random samples using a random projection matrix; the compression is performed by a matrix multiplication operation on the signal. In both cases, the complex computational load is shifted to the server node, where the computational power is practically unlimited.

Compression in the SMV Problem and the MMV Problem

In this chapter, the Single Measurement Vector (SMV) problem, i.e., channel-by-channel compression, and the Multiple Measurement Vector (MMV) problem, i.e., simultaneous compression of all channels, are solved using two different algorithms. In the SMV case, the compression is given as y_l = A x_l, where l is the channel index. As mentioned in Section 2.3, A should be incoherent with D. This is achieved by decreasing the correlation between the elements of these two matrices. Several choices of the matrix A are studied in [55], which shows that there are many choices of A that do not affect the recovery quality. In this thesis, a sparse random binary matrix is chosen for A, such that each column has exactly 2 non-zero entries equal to one.
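A sketch of how such a sparse binary sensing matrix could be generated (the exact construction and random seed used in the thesis are not specified; this is one plausible realization in which the two ones of each column are placed at random rows):

```python
import numpy as np

def sparse_binary_sensing_matrix(M, N, ones_per_column=2, seed=0):
    """M x N binary matrix with exactly `ones_per_column` ones,
    placed at random rows, in each column; all other entries are zero."""
    rng = np.random.default_rng(seed)
    A = np.zeros((M, N))
    for j in range(N):
        rows = rng.choice(M, size=ones_per_column, replace=False)
        A[rows, j] = 1.0
    return A

A = sparse_binary_sensing_matrix(M=64, N=256)

# With this A, each entry of y = A @ x is just a sum of two samples of x,
# so the compression on the micro-controller needs only additions.
x = np.random.default_rng(1).standard_normal(256)
y = A @ x
```

This is what makes the sparse binary choice cheap on the sensor node compared with a dense projection matrix.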
The positions of these non-zero entries are randomly chosen. The key advantage of this choice of A is that the matrix multiplication A x_l is simpler than with any other (dense) matrix.

In the MMV case, the compression is given as Y = AX, where X is N×L, N is the number of samples per unit time, and L is the number of channels. Y is M×L, where M is the number of compressed samples per unit time, and A is M×N. The matrix A is again chosen to be a sparse binary matrix.

3.2.2 Recovery Algorithms at the Server Node

To solve the SMV problem, i.e., channel-by-channel decompression of the EEG signals, we use the state-of-the-art algorithm BSBL-BO [68]. To solve the MMV problem, we use the state-of-the-art algorithm STSBL-EM [67].

The BSBL-BO method is based on the assumption that the signal consists of a concatenation of non-overlapping blocks, of which only a few are non-zero while the rest are zero. As mentioned in Section 2.5, BSBL-BO exploits the correlation structure within the blocks to reconstruct the signals. For more information about the algorithm and its derivation, refer to the work presented in [71]. The STSBL-EM algorithm is another algorithm derived from the BSBL framework, proposed in [67] for EEG-based WBANs. STSBL-EM exploits the inter- and intra-block correlations of the multi-channel EEG signals to recover them.

3.2.3 Power Consumption Evaluation

The power consumption of the sensor node is evaluated using an open-source cycle-accurate wireless sensor network simulator called Avrora [59]. It emulates the sensor node circuitry by providing a virtual operating environment for code execution.
Avrora provides emulation of Atmel AVR micro-controller sensor platforms, the Mica2 and the MicaZ. It is used to provide detailed monitoring of different behaviors such as packet transmission, energy usage, interrupts and stack usage. The energy monitor provided in Avrora tracks the state changes of the sensor node and estimates the power consumption of each component of the sensor platform.

The compression techniques for the SMV and MMV problems are implemented in Network Embedded Systems C (nesC) for TinyOS [42]. TinyOS is an embedded operating system (OS) that provides hardware abstraction for operations such as packet transmission, storage and input/output (I/O). The code was compiled and simulated for a MicaZ sensor platform using Avrora. The MicaZ consists of an ATMega128 micro-controller, a ZigBee radio, 4 KB of RAM and 128 KB of flash memory.

To simplify the evaluation process, real-time EEG acquisition was not simulated. Instead, a short epoch segment of EEG data was loaded into memory to simulate a 1-second data window. This simplification does not affect the evaluation of the power consumption because the EEG sensing is the same across all epochs. The majority of the power on the sensor node is consumed by the micro-controller and the wireless transmitter: approximately 20% of the power is consumed by the micro-controller (mainly CS compression), while 70% is consumed by the radio transmitter [64]. For this reason, the main focus of the power consumption analysis is on the micro-controller and the wireless transmitter.

3.2.4 Seizure Detection Technique

On the server node, the received compressed data is recovered first and then a seizure detection operation is performed. The seizure detection paradigm is presented in Figure 3.2, which shows the process of seizure detection. First, the raw EEG data is divided into 4-second window frames.
Each frame is assigned a label of either 1 or −1, indicating a seizure frame or a non-seizure frame. It is important to note that a seizure event may contain one or more seizure frames, and a non-seizure event may likewise contain one or more non-seizure frames. The frames are divided randomly into a training set and a testing set, with 60% of the frames used for training and 40% for testing. First, feature extraction is applied to the training frames, and then a Support Vector Machine (SVM) classifier is trained on the features and the training labels. The testing frames are compressed frame by frame as described in Section 3.2.1. Each compressed frame is then recovered using the algorithms mentioned in Section 3.2.2, and feature extraction is applied to the recovered testing frames. The features extracted from the recovered testing frames are used to predict the labels using the SVM prediction model.

Figure 3.2: A block diagram showing the training and prediction stages of seizure detection

Feature Extraction

In [12], three different types of simple features are tested for seizure detection, namely energy, line length and nonlinear autocorrelation features, showing good performance on the dataset [30]. In this thesis, the nonlinear autocorrelation features are used.

The autocorrelation features exploit the repetitive spikes, with successive short intervals, in an EEG signal. The window frame is divided into 15-sample-wide non-overlapping sub-windows, and the maximum and minimum values of each sub-window are calculated. Let max(S_i) and min(S_i) be the
The symbol, NS , is the total numberof sub-windows in an N sample.The nonlinear autocorrelation value, NLACC, is basically the summa-32tion of the differences between HVi and LViNLACC =NS∑i=1(HVi − LVi) (3.3)The results of the seizure detection technique using the algorithms are shownin the next section.3.3 Results3.3.1 Seizure DatasetThe experiments are performed using seizure data from the Childrens Hospi-tal Boston and the Massachusetts Institute of Technology (CHB-MIT) down-loaded from PhysioNet ([30].The dataset contained 23 sets of EEG recordings collected from 23 pedi-atric patients under the age of 18. The patients were being treated for seizureand they were taking medication for epilepsy surgery evaluation. The EEGsignals collected from these patients were recorded using 23 channels whichwas based on the the International 10 20 system. A bipolar montage isused to remove signal artifacts and reference electrode noise. The signalswere sampled at 256 Hz with 16-bit quantization. For each patient, theEEG signals were labeled by medical experts to determine the start and endtimes of a seizure event. The duration of a seizure event varied between 6and 752 seconds. Although artifacts were present resulting from the scalpEEG signals, no artifact removal technique was performed on the signals.333.3.2 Results using the State-of-the-Art TechniquesUsing the sparse binary matrix as a sensing matrix A, this section showsthe recovery performance of the CS framework that solves the SMV and theMMV problems using the state-of-the-art work presented in [68], [67]. 
The evaluation metrics are defined first, and the compression ratio (CR) is also restated in this subsection.

Evaluation Metrics

To quantify the compression rate, we use the following equation to select M:

CR\% = \left(1 - \frac{M}{N}\right) \times 100 \qquad (3.4)

To test the recovery quality, we use the normalized mean square error (NMSE), given as:

\mathrm{NMSE}(X, \hat{X}) = \frac{\|X - \hat{X}\|}{\|X - \mu_X\|} \qquad (3.5)

where X is the original EEG data matrix of size N×L, \hat{X} is the recovered data matrix, and \mu_X is the mean of X. The NMSE measures the distance between two matrices; the lower the NMSE, the better the reconstruction. We removed the mean of each channel in the original data so that differences in means between datasets do not bias the results.

The reconstruction performance for different compression ratios is shown in Table 3.1.

Table 3.1: Reconstruction results (NMSE) of the state-of-the-art algorithms at high compression rates

Algorithm   10:1    6.6:1   5:1     3.3:1   2.5:1   2:1
BSBL-BO     0.671   0.575   0.472   0.319   0.228   0.147
STSBL       0.984   0.728   0.419   0.166   0.091   0.032

Energy Savings Results at High Compression Rates

The power consumption at the sensor node is simulated using Avrora, as described in Section 3.2.3. The total power consumption of the sensor node is broken down into the code execution of the micro-controller (MCU), the wireless transmitter radio, and the flash memory. The power consumption of the sensor node at different CR% values is estimated over a span of 1 hour of data sampled at 256 Hz. The battery life is estimated assuming a 3 Volt, 200 mAh battery, as given in [18]:

\mathrm{Hours} = \frac{0.7 \times \mathrm{Capacity\ (mAh)}}{\mathrm{PowerConsumption\ (mA)}}

The results are shown in Table 3.2.

Table 3.2: Breakdown of power consumption results at different compression rates, in milliwatts

Compression Ratio     MCU     Transmitter   Memory   Total (mW)   Battery life, hrs (@3V, 200mAh)
No compression        46.14   160.68        0        206.82       3.04
2:1                   20.07   30.67         13.60    64.34        9.79
2.5:1                 19.24   20.58         13.25    53.07        11.87
3.3:1                 18.41   14.1          12.91    45.42        13.87
5:1                   17.72   9.65          12.76    40.14        15.69
10:1                  17.04   6.67          12.78    36.49        17.26

3.3.3 Seizure Detection Performance Results After Recovery Using the Current State-of-the-Art Recovery Technique

The study conducted in [12] shows good performance using the nonlinear autocorrelation features presented in Section 3.2.4 with a window size of 4 seconds. Table 3.3 shows the effects of compressed sensing on the seizure detection performance for different CRs, using the features discussed and the state-of-the-art recovery algorithms [68]. First, the data is compressed using the CS framework, and then the recovery algorithms discussed in Section 3.2.2 are used to recover the signals. As shown in Table 3.1, the performance of STSBL-EM is better than that of BSBL-BO at CRs of 2:1, 2.5:1, 3.3:1, and 5:1. In this experiment, BSBL-BO is used only at CR = 10:1, since it is better than STSBL-EM at this particular CR. After the recovery is performed on the compressed signals, the seizure features are extracted and the SVM prediction model discussed in Section 3.2.4 is used to classify the testing features.

Table 3.3: Average seizure detection performance at different compression rates using the BSBL-BO and STSBL-EM algorithms

CR%                Event Sensitivity (%)   Frame Sensitivity (%)   Frame Specificity (%)   FP/hr    Latency (Sec)
No compression     95.2                    57.6                    99.2                    2.69     9.5
2:1 (STSBL-EM)     94.8                    57.2                    98.9                    3.62     9.1
2.5:1 (STSBL-EM)   93.7                    57.2                    98.9                    3.57     8.9
3.3:1 (STSBL-EM)   91.4                    57.2                    98.7                    4.92     9.4
5:1 (STSBL-EM)     88.6                    57.3                    98.1                    6.19     10.3
10:1 (BSBL-BO)     85.2                    46.8                    96.4                    25.45    11.4

Table 3.3 shows the classification results of the seizure detection experiment. The evaluation technique for the seizure classification is adopted from the majority of the literature.
The results are described as follows:

• Seizure Sensitivity: the percentage of correctly classified seizure events, where an event counts as detected if at least one seizure frame within it is correctly labeled by the classifier. As mentioned above, the seizure events in the dataset vary between 6 and 752 seconds, while the frame size is only 4 seconds.

• Frame Sensitivity: the percentage of correctly labeled seizure frames. This measures the classification accuracy per frame.

• Frame Specificity: the percentage of correctly labeled non-seizure frames.

• False Positives per hour (FP/hr): the number of frames incorrectly labeled as seizures within a one-hour period.

• Latency: the average time elapsed from the onset of a seizure event until one frame is correctly classified as a seizure frame.

From Table 3.3 it is clear that as the CR increases, the detection performance degrades relative to the case of no compression. At a CR of 10:1, the observed drops in the event sensitivity and frame specificity were 10% and 3%, respectively, relative to the uncompressed results. A dramatic increase of the false positives per hour, from 2.69 to 25.45, is observed. Such degradation in seizure detection performance is expected, since the quality of the recovered signals is low at high compression rates.

3.4 Discussion and Conclusion

In this chapter, we briefly discussed the state-of-the-art compression techniques and illustrated their recovery quality by estimating the NMSE of the reconstructed signals relative to the original data. The accuracy of seizure detection from the recovered compressed signals was estimated for different compression rates. The power consumption at the sensor node was estimated at different compression rates using the Avrora simulator.

The state-of-the-art compression techniques achieved an NMSE of 0.671 at the high compression ratio of 10:1.
The reconstruction results degraded the seizure detection event sensitivity by 6% compared with the no-compression case, and the false positives increased from 4.8% to 14.8%. Commercially available headsets use 2 AAA batteries to power a 23-channel EEG headset for 12 hours without using compressed sensing [1]. Our results showed that by using one miniature battery (200 mAh), the battery life increased from 3 hours in the no-compression case to 17 hours for transmitting 23 EEG channels at a compression ratio of 10:1. While compressed sensing at high compression rates would solve the battery life problem, the resulting signal quality is not suitable for seizure detection in EEG-based WBANs. Since there is a positive correlation between recovery quality and seizure detection performance, this thesis contributes to improving the quality of recovery in CS at high compression ratios such as 10:1.

Chapter 4

Exploiting Linear and Non-Linear Dependency in the BSBL Framework

4.1 Problem Description

This chapter studies the reconstruction of multi-channel signals, known as the Multiple Measurement Vector (MMV) problem. The works presented in [68] and [67] are considered the state of the art in solving the CS problem for EEG signals. These works show that by using a DCT dictionary in BSBL-BO for the SMV problem, and STSBL (another algorithm based on the BSBL framework) for the MMV problem, low-error reconstruction is achieved at high compression rates. In this chapter we improve upon the BSBL-BO algorithm and compare the recovery performance at high compression rates such as 10:1. Our results show that our proposed method achieves superior recovery performance compared to the state-of-the-art algorithms on 3 different datasets. The BSBL-BO algorithm exploits the intra-correlation structure in the EEG signals to reconstruct them, while STSBL exploits both the intra- and inter-channel correlations of the EEG signals. Since correlation is a measure of linear dependency, non-linear dependency in EEG signals has also been studied.
These works have shown that EEG signals also have a non-linear dependence structure, temporally and spatially [7], [53]. Our proposed method exploits the linear and non-linear dependence in EEG signals. We also show that when multi-variable EEG signals are vectorized in a certain way and the DCT is then applied, the resulting DCT vector has a block sparse redundant structure. This vector representation is shown to significantly improve the recovery results when used with BSBL-BO and our proposed method. To the best of our knowledge, our proposed technique achieves the best results for CS of EEG signals at high compression rates such as 10:1, using three different data sets.

4.2 Approach

When the data vector is block sparse, it has been shown that better reconstruction can be obtained by exploiting its block-sparsity than by only exploiting the sparsity in the signal (assuming the vector is sparse in the conventional sense) [68], [22], [23]. The conventional sparsity solution method only assumes that xl has at most S non-zero elements in a sparse domain. However, it does not exploit any further structure that the signal may have. The non-zero components can appear anywhere in xl; however, there are cases in which the non-zero values form blocks [23]. We propose to apply the block-sparse recovery approach to the EEG multiple measurement vector (MMV) problem because the MMV data share a joint sparsity pattern [11], [19]. For the case of EEG signals, the channels have linear and non-linear dependencies between them as well as within each channel [19], [7], [53]. The work presented in [68] addresses the case of the signal from one channel, known as the Single Measurement Vector (SMV) case. It uses a DCT dictionary matrix (which results in energy compaction of a single EEG channel) to obtain a vector of block sparse DCT coefficients. These DCT coefficients are recovered by BSBL-BO in [68].
To study the MMV case, we first investigate the structure of the MMV data vector. For the MMV case, let X be the matrix [x1, x2, . . . , xL], where L is the number of channels. In conventional studies, vec[X], i.e. the vector formed by stacking the columns of X, has been studied. In this work, however, we propose to study vec[XT], i.e. the vector formed by stacking the rows of X.

The DCT coefficients of vec[XT] and vec[X] are shown in Fig 4.1-a and Fig 4.1-b, respectively. These correspond to the case when the number of channels L is 23. The DCT coefficients of Fig 4.1-c are the DCT transform of one channel xl when xl is formed of 23 seconds of data of the channel l, and the DCT coefficients of Fig 4.1-d are the DCT transform of xl when it is formed of one second of data of the same channel. In Fig 4.1-c, the channel was formed of 23 seconds of data so as to result in the same number of coefficients as those of Figs 4.1-a and 4.1-b.

We compress the DCT coefficients of the multi-channel EEG signals, which we form as DT vec[XT]. This vector, DT vec[XT], has more of a block sparse structure than the vector formed by concatenating the channels of the EEG signals, i.e. DT vec[X]. The blocks in DT vec[XT] also have more structure than the DCT coefficients of a single channel, which we denote as DT xl. Figure 4.1-a shows that the MMV vector, DT vec[XT], exhibits more structure and more redundant non-zero blocks than the vector formed by the traditional concatenation of the channels, DT vec[X] (figure 4.1-b), and than DT xl (figures 4.1-c and 4.1-d).

Figure 4.1: (a) DCT of vec[XT] of 23 channels, (b) DCT of vec[X] of 23 channels, (c) DCT of signal xl of 23 seconds length, (d) DCT of signal xl of one second length
This is investigated in more detail in sections 4.3.2 and 4.4.2.

BSBL-BO exploits the intra-block correlation, which is a measure of linear dependency in the blocks of single channel data (temporal data only). Previous works, however, show that EEG signals and neurophysiological signals exhibit linear as well as non-linear dependencies [7], [53]. In [7], EEG signals are examined for non-linear interdependence between channels, and significant evidence of the existence of non-linear interdependency is shown. In order to describe the structure of EEG signals, the work in [53] suggests that non-linear dependency measures, such as mutual information, entropy, phase synchronization, and state space synchronization, are not intended to substitute for linear dependency measures such as auto-regression (AR) and cross correlation. Instead, non-linear dependency must be regarded as a complement to linear dependency when studying EEG signals. This allows a more comprehensive description of the structure in the EEG data.

Based on our observation above, we therefore vectorize the signals of the multichannel data as vec[XT], i.e. in a way that is different from the conventional one. This will help us better exploit the block-sparse structure of vec[XT] exhibited in figure 4.1-a. We will show that this resultant multichannel vector DT vec[XT] has significant linear and non-linear dependencies. In our method, the compressed data is reconstructed using this vectorization in conjunction with a modified version of BSBL-BO, which will be presented in section 4.3.2. As will be shown, we will modify the matrix β̂i of equation 2.10 in BSBL-BO. This matrix is Toeplitz, and its AR coefficients model the intra-channel correlation for every channel by exploiting the intra-block correlation of each EEG channel. The modified version of β̂i combines the Phase Locking Values (PLV) of the blocks (so as to exploit the non-linear intra-dependence) with the intra-channel correlation.
The use of Dvec[XT], instead of processing single channels, enables the exploitation of the intra-block interdependencies and the modeling of the intra and inter dependencies (whether linear, non-linear, or both) of the channels. Applying the modified BSBL-BO on the vector from the suggested vectorization vec[XT] will enable us to exploit the linear and non-linear dependencies within the channels of the EEG data as well as between the channels. The details of the modified algorithm follow in the next subsection.

Figure 4.2: Context Block Diagram of CS method

4.3 Method

The corresponding CS model for the L-channel case, denoted as the MMV problem, is expressed as Y = AX + V, where Y = [y1, y2, . . . , yL], X = [x1, x2, . . . , xL], V = [v1, v2, . . . , vL] (X ∈ R^(N×L), Y ∈ R^(M×L), V ∈ R^(M×L)), and N is the number of samples per epoch. As discussed above, the matrix XT is vectorized so that the measurement vector is represented as y = A vec[XT] + v. For the proposed method, figure 4.2 shows the operations performed on the sensing node prior to the radio transmission. Before transmission, the EEG signals are converted to digital using an ADC, and the data of the channels are collected in epochs. The channels in each epoch are also arranged and vectorized in a certain way, then compressed using a sensing matrix A.

4.3.1 Epoching

The EEG data of each channel is divided into non-overlapping epochs, each of size N. In our experiments, we choose N to be equal to the sampling frequency of the dataset, i.e. it corresponds to one second. After the compression, the data of each epoch are recovered independently from other epochs.

4.3.2 Channel Arrangement and Vectorization

In [68], BSBL-BO was developed to decompress a single vector and was thus applied on each SMV channel. To recover the multiple channels, the BSBL-BO in [68] is modified and then applied to reconstruct the channels jointly, exploiting the linear and non-linear dependencies of the EEG signals.
Given a data matrix X ∈ R^(N×L), whose columns are the data of the L channels, then (as mentioned above) the matrix X is transformed into the vector

vec[XT] = [x1,1, x2,1, . . . , xL,1, x1,2, x2,2, . . . , xL,2, . . . , x1,N, x2,N, . . . , xL,N]T,

and also into the vector

vec[X] = [x1,1, x2,1, . . . , xN,1, x1,2, x2,2, . . . , xN,2, . . . , x1,L, x2,L, . . . , xN,L]T.

When vec[XT] is divided into non-overlapping blocks of sizes di such that di > 2L, each block of vec[XT] contains both temporal and spatial information about the data. It is thus important that di > 2L; otherwise, temporal correlation would not be taken into consideration.

As shown in figure 4.1-a, the DCT coefficients of vec[XT] exhibit better block sparsity than the DCT coefficients of vec[X] and of xl shown in figures 4.1-b, 4.1-c, and 4.1-d. The structure shown in figure 4.1-a is found to be consistent for different data samples. The DCT of vec[XT] shows that this distinct block-sparse structure has a redundant form. In figure 4.1-a, the non-zero values form blocks that repeat in a consistent fashion. This structure does not exist for uncorrelated signals. To prove this empirically, the DCT of vec[XT] is examined when XT is formed of uncorrelated and of correlated random variables, as shown in figures 4.3-a, 4.3-b, 4.3-c, and 4.3-d. Figure 4.3-a shows the DCT coefficients of the vectorized form of uncorrelated random signals. Figures 4.3-b, 4.3-c, and 4.3-d show the DCT coefficients of the vectorized forms of correlated multichannel signals generated from random multi-channel variables.

Figure 4.3: Block structure of correlated and uncorrelated signals in the DCT domain

As shown in figures 4.3-b, 4.3-c, and 4.3-d, the DCT coefficients of the vectorized correlated signals exhibit a distinct block-sparse structure. The redundancy of the non-zero structure increases with the number of channels.
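The two vectorizations above can be sketched in a few lines. This is a toy illustration, not the thesis code: the synthetic correlated channels and the `energy_top` sparsity proxy are our own stand-ins for real EEG data and for the visual comparison of figure 4.1.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
N, L = 128, 23                      # one-second epoch at 128 Hz, 23 channels

# Correlated toy channels: a shared waveform plus small per-channel noise
# (a stand-in for real EEG data, which would be loaded from a dataset).
t = np.arange(N) / N
shared = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 11 * t)
X = shared[:, None] + 0.05 * rng.standard_normal((N, L))   # shape (N, L)

vec_XT = X.reshape(-1)              # vec[X^T]: x11, x21, ..., xL1, x12, ...
vec_X = X.reshape(-1, order="F")    # vec[X]:   x11, x21, ..., xN1, x12, ...

# DCT coefficients of both vectorizations (orthonormal DCT-II).
c_XT = dct(vec_XT, norm="ortho")
c_X = dct(vec_X, norm="ortho")

def energy_top(c, frac=0.05):
    """Crude sparsity proxy: fraction of energy captured by the
    largest `frac` of the DCT coefficients (our own metric)."""
    k = int(len(c) * frac)
    mags = np.sort(np.abs(c))[::-1]
    return np.sum(mags[:k] ** 2) / np.sum(c ** 2)

print(energy_top(c_XT), energy_top(c_X))
```

Because the channels share a common waveform, the top few percent of coefficients of the interleaved vectorization capture nearly all of the energy, mirroring the compact block structure of figure 4.1-a.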
As the number of correlated channels increases, the number of structured non-zero blocks increases, and this increases the accuracy of the recovery at high compression rates. This is illustrated in figures 4.3-b, 4.3-c, and 4.3-d.

4.3.3 Compression Using Binary Sparse Matrix

As mentioned in section 2.3, the compression effectiveness depends on the degree of coherence between the matrices A and D. It is shown in [32], [27], [9] that the higher the incoherence between A and D, the larger the compression that can be achieved. This is valid only when the RIP condition applies. Regardless of the type of D, to achieve maximum incoherence, the entries of A should be independent and identically Gaussian distributed [9]. Using a Gaussian N(0, 1/N) matrix results in an optimal sensing matrix [9], but the generation of such a matrix is computationally expensive. For energy saving purposes, a sparse binary matrix that contains few non-zero entries (of value equal to one) in each column of the A matrix was used in [68], [25]. It was shown that two non-zero entries are sufficient to compress the signals when the positions of the non-zero entries are randomly selected in each column of A. Also, it was shown that the sparse binary matrix performs as well as a Gaussian matrix [55], [25].

For the MMV problem, to compress vec[XT] of every epoch, we use A ∈ R^(LM×LN), where M is the number of random projections that determines the compression ratio, given by N/M (or the compression rate percentage CR% = (1 − M/N) × 100). L is the number of channels of the EEG signals. To solve the MMV problem, the compressed data is given by y = A vec[XT], where y ∈ R^(LM). In the case of the SMV problem, yl = A xl, where yl ∈ R^M and A ∈ R^(M×N). The matrix A is fixed for the measurement of all epochs.

4.3.4 Modification of BSBL-BO Algorithm (BSBL-LNLD)

As mentioned in section 2.5, BSBL-BO exploits the block-sparse structure by learning the hyper-parameter γi. It exploits the intra-block correlation (in a single channel) by learning the hyper-parameter β̂i.
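The binary sparse sensing matrix described in section 4.3.3 can be sketched as follows. This is an illustrative sketch, not the thesis implementation; `binary_sparse_matrix` and the random toy epoch are our own names and stand-ins.

```python
import numpy as np

def binary_sparse_matrix(m, n, ones_per_col=2, seed=0):
    """Sparse binary sensing matrix: each column has `ones_per_col`
    ones at randomly selected row positions; all other entries are zero."""
    rng = np.random.default_rng(seed)
    A = np.zeros((m, n))
    for col in range(n):
        rows = rng.choice(m, size=ones_per_col, replace=False)
        A[rows, col] = 1.0
    return A

# One epoch: N samples per channel, L channels, 80% compression (5:1).
N, L = 128, 23
M = N // 5                                  # random projections per channel
A = binary_sparse_matrix(L * M, L * N)      # MMV sensing matrix

rng = np.random.default_rng(1)
X = rng.standard_normal((N, L))             # stand-in for one EEG epoch
vec_XT = X.reshape(-1)                      # vec[X^T] (channel-interleaved)

y = A @ vec_XT                              # compressed measurements
print(vec_XT.size, "->", y.size)            # 2944 -> 575
```

Because each column holds only two ones, each measurement is just a sum of a few samples; on the sensor node, the product A vec[XT] therefore reduces to a handful of additions rather than a dense matrix multiply.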
The hyper-parameter βi is evaluated by minimizing the negative log-likelihood with respect to βi [71]. The resulting derivative is shown in equation 2.10. βi is transformed to β̂i by constraining it to be a positive definite and symmetric matrix. Assuming all the blocks have the same size, the idea is to find one parameter from which a close estimate of βi is formed. The β̂i formed using the parameter r̄ is especially close to βi along the main diagonal and the main sub-diagonal of βi [71]. Further, it is found that for many modeling applications, if the elements of a block form a first-order Auto-Regressive (AR) process, then this is sufficient to model the intra-block correlation [70]. In this case, the covariance matrix βi of the block (equation 2.9) is converted to a Toeplitz matrix β̂i, as shown in equation 2.10. The parameter r̄ is the AR coefficient. Instead of estimating r̄ from the BSBL cost function, it is empirically calculated as r̄ = m1/m0. Most of the time m0 is greater than m1, which makes r̄ < 1. If in any case r̄ > 1, then r̄ is constrained to equal 0.99. This is done in order to ensure that β̂i is always invertible.

To exploit the linear and non-linear dependencies, we modify r̄ so that it can also learn the non-linear dependency and not only the linear one; the rest of the algorithm remains unchanged. The Phase Locking Value (PLV) between each block of Dvec[X̂T] and every other block is calculated. Here X̂ is a matrix of size N×L that represents the reconstructed signal at every learning iteration. The PLV between every two non-overlapping blocks is calculated and then averaged to become a scalar p. This scalar value, p, represents the average phase synchronization between all blocks.
Since each block of vec[X̂T] contains temporal and spatial information about the EEG signals, p captures the intra and inter non-linear dependence in the EEG channels.

The PLV was proposed in [39] to measure the phase relationship between two neuro-electric or bio-magnetic signals. The PLV between two signals varies from 0 to 1. When the PLV between two signals is equal to one, the signals are perfectly synchronized. When the PLV between two signals is equal to zero, the signals are completely out of synchronization. The PLV between two signals is given as:

PLV = (1/N) | Σ_{i=1}^{N} e^{j(φ1,i − φ2,i)} |    (4.1)

where φ1,i − φ2,i is obtained as follows: each signal is converted to its analytic form using the Hilbert transform, giving a real and an imaginary part for each sample i of the signal. The phase angles φ1,i and φ2,i are then obtained by calculating the arctangent of the imaginary and real values. Thus φ1,i − φ2,i is the phase angle difference between the two signals at each sample. Phase synchronization is useful because the phase component is obtained separately from the amplitude component, unlike coherence, which does not separate the effects of amplitude and phase in the interrelations between two signals.

To exploit the information about phase synchronization in BSBL-BO, the parameter r̄ is modified so that it learns the linear and also the non-linear dependencies in a vector. Thus, when applied to the vector Dvec[X̂T], which contains the inter and intra information about the multiple channels, r̄ learns both types of dependencies in the channels. As such, r̄ contains information about inter relationships between channels and not only intra relationships in each channel. This then allows BSBL-BO to exploit the inter and intra linear/non-linear dependence in the channels instead of the linear dependency (intra-correlation) only. The modified r̄ is given as r̄ = 0.5(m1/m0 + p), where p is the average of the PLV between the blocks.
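The PLV of equation 4.1 and the modified AR coefficient can be sketched as follows. This is a minimal sketch: `plv`, `mean_block_plv`, and the illustrative m0, m1 values are our own stand-ins, not the thesis code; in the actual algorithm, p would be computed over the blocks of Dvec[X̂T] at each learning iteration.

```python
import numpy as np
from scipy.signal import hilbert

def plv(a, b):
    """Phase Locking Value between two equal-length signals (eq. 4.1):
    instantaneous phases come from the analytic (Hilbert) signal, and
    PLV = |mean(exp(j*(phi1 - phi2)))|, which lies in [0, 1]."""
    phi1 = np.angle(hilbert(a))
    phi2 = np.angle(hilbert(b))
    return np.abs(np.mean(np.exp(1j * (phi1 - phi2))))

def mean_block_plv(v, block_size):
    """Average PLV over all pairs of non-overlapping blocks of v --
    the scalar p used in the modified AR coefficient below."""
    blocks = [v[i:i + block_size]
              for i in range(0, len(v) - block_size + 1, block_size)]
    vals = [plv(blocks[i], blocks[j])
            for i in range(len(blocks)) for j in range(i + 1, len(blocks))]
    return float(np.mean(vals))

# Sanity checks on the PLV itself.
t = np.linspace(0, 1, 512, endpoint=False)
s = np.sin(2 * np.pi * 8 * t)
print(plv(s, s))                           # identical signals -> 1.0
print(plv(s, np.cos(2 * np.pi * 8 * t)))   # constant phase lag -> ~1.0

# Modified AR coefficient of BSBL-LNLD: r_bar = 0.5*(m1/m0 + p),
# where m0 and m1 would come from the BSBL-BO covariance estimate.
m0, m1 = 1.0, 0.8                          # illustrative values only
p = mean_block_plv(np.sin(2 * np.pi * 8 * np.linspace(0, 4, 2048)), 512)
r_bar = 0.5 * (m1 / m0 + p)
```

Note that the modulus in `plv` is what keeps the value in [0, 1]: two signals with a constant phase offset (such as a sine and a cosine of the same frequency) are still perfectly phase-locked.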
In the experiments section we show the performance of the modified version, which we denote BSBL-LNLD, in a compressed sensing framework for both the SMV and MMV problems.

4.4 Experiments and Results

This section presents the results of CS applied on 3 different scalp EEG data sets (using non-invasive sensors). Section 4.4.1 briefly discusses the 3 different data sets. In section 4.4.2, the linear and non-linear dependence measures of DT vec[XT] are evaluated, to show the significant patterns of dependency at different block sizes. Section 4.4.3 discusses the error metrics that are used to evaluate the recovery of the compressed signals. Finally, the recovery results are tabulated in section 4.4.4.

4.4.1 Datasets

The compression and recovery experiments were conducted on 3 datasets to test the performance of the different algorithms on a wide range of EEG cases.

The first is Dataset 1 of the BCI Competition IV [5]. This dataset was recorded from healthy subjects or generated artificially and contains the data of 7 subjects. The recording was made using 59 EEG channels per subject at an initial sampling rate of 1000 Hz. The other databases are from Physionet [29]. The second dataset consists of seizure data of 23 pediatric subjects. The recordings contain 23 EEG channels at a sampling rate of 256 Hz [30]. Dataset 3 is a collection of 108 polysomnographic recordings from the monitoring of different sleep disorders. Each recording contains between 5 and 13 EEG channels sampled at 256 Hz [58].

Some files of these datasets contain channels where the output is always 0 µV. These channels are removed from our experiments.
The datasets are down-sampled to 128 Hz so as to provide a realistic sampling frequency in the context of WBANs and to ensure uniformity between the datasets. Non-overlapping windows of length N = 128 were used in our experiments.

4.4.2 Dependence Measures of Intra and Inter EEG Blocks

In this experiment, we study the correlation and the PLV measures over the same data samples. A total of 3000 samples were used, 1000 samples from each dataset. Each sample of the EEG data was selected randomly from a dataset and spanned a one-second window of the multiple channels. Each sample was thus formed of the data of all channels in that dataset. Each sample was transformed to its DCT coefficients, and the vectors DT vec[XT], DT vec[X], and DT xl were formed. Each of these 3 vectors was divided into equal size blocks, then the intra-block and inter-block correlation and PLV were calculated for each vector. The absolute values of these measures were then averaged over all 1000 samples taken from the corresponding dataset. This experiment was then repeated several times for different block sizes; that is, each sample was again divided into equal size blocks, but of a different block size. The results of correlation and PLV for the different block sizes are shown in figure 4.4. Correlation is a measure of linear dependency and ranges from −1 to 1; a positive measure indicates a positive average block correlation, and a negative measure a negative one. Phase synchronization measures such as the PLV find a statistical relationship between two signals and vary from 0 to 1. The PLV is important because EEG signals are noisy and chaotic; thus, two or more EEG signals may have their phases synchronized even if their amplitudes are uncorrelated over time [53].

Figure 4.4 shows the average correlations and PLV of the randomly selected samples.
The vector DT vec[XT] has the most intra-correlation structure in the blocks (over 1000 repetitions). Our results agree with [7], [53] that there exists non-linear dependency between the EEG channels and also within each channel. Unlike the correlation measure, the PLV is invariant to the type of vector (i.e. it gives the same results for DT vec[XT], DT vec[X], and DT xl). Although the PLV shows a more dominant dependency, it is not meant to replace the correlation type of dependency [53]. For this reason we have added the PLV measure in equation 2.10 of the BSBL-BO algorithm so as to exploit the non-linear dependency in the EEG signals.

Figure 4.4: Mean Correlation and PLV in the blocks and between the blocks

4.4.3 Error Metrics

The original data are compressed at different compression rates given by CR% = (1 − M/N) × 100. Note that the compression ratio N/M = 2:1 corresponds to CR% = 50. In our experiments we compress the EEG datasets at different compression rates, including 50, 60, 70, 80, and 90%. For these experiments, windows of 4 seconds each were randomly selected from the 3 datasets, 200 windows from each channel and for each subject. We measured the reconstruction quality using the Normalized Mean Square Error (NMSE). During the recording of the EEG datasets, a DC voltage bias in the electrodes was present. This bias could cause misleading results when calculating the NMSE; to avoid it, the mean of each signal, µX, is subtracted. For a fair evaluation we use the NMSE proposed in [55], [25]:

NMSE(X, X̂) = ‖X − X̂‖ / ‖X − µX‖

4.4.4 Compression/Recovery Results

The compression was done using the sparse binary matrix as a sensing matrix. This matrix has only 2 non-zeros (i.e. ones) in each column, selected at random. The length of the matrix depends on M, and this depends on the compression rate. For the MMV problem, our proposed BSBL-LNLD and the BSBL-BO algorithms are applied. Based on figure 4.4, the smaller the block size (i.e.
the more blocks per epoch), the higher the dependency measures. This, however, causes slow performance in MATLAB. For this reason, a block size of 92 is chosen, as it is found to be a suitable trade-off.

Figure 4.5 shows the performance of our proposed MMV method as the number of EEG channels increases. The more EEG channels (spatial data), the lower the reconstruction error. This is because the spatial dependence between channels increases as the number of channels increases. This promotes more joint dependency, which makes the decompression more robust to high compression rates.

Figure 4.5: NMSE vs Number of Channels of proposed Method at Different Compression % Rates

In our experiments we compared the performance of our proposed MMV method (BSBL-LNLD) with the following state of the art EEG decompression algorithms:

1) tMFOCUSS, proposed in [66], is a modified version of MFOCUSS. It works by capturing the temporal correlation in the channels. The modification lies in replacing the norm minimization with the Mahalanobis distance.

2) The TMSBL method, proposed in [14], is a Bayesian approach. It defines the signal in the hyper-parameter space instead of the Euclidean space as in L1/L2 minimization techniques. The hyper-parameter space is defined by temporal correlation and sparse modeling; this approach is called Automatic Relevance Determination, as proposed in [14] and [20]. Using Expectation Maximization, the hyper-parameters are estimated from the posterior information, which is derived from a prior and the log-likelihood of the compressed signal. One can argue that TMSBL is similar to BSBL-BO in its basic approach and derivation. However, BSBL-BO reconstructs the signal in block form, unlike TMSBL.

3) Recently, the BSBL-BO approach [68] was compared with the STSBL-EM algorithm presented in [67]. The comparison was performed on BCI data at different compression rates such as 50, 60, 70, 80, and 90%.
It was shown that the decompression of SMV BSBL was less accurate than STSBL-EM. Two learning hyper-parameters were introduced in STSBL-EM to capture the correlation between the blocks in the temporal and spatial domains. STSBL-EM learns the parameters by first temporally whitening the model; then the spatial hyper-parameter is learned and the signals are estimated. Then the signals are spatially whitened, and the temporal hyper-parameter and the signals are estimated. This process repeats until convergence. The repetitive whitening of the model reduces the correlation in the signals, which causes less redundancy during decompression, hence less correlation amongst the blocks. Our results in table 4.1 show that, compared to the other methods, STSBL-EM does not achieve low errors at high compression rates. The DCT transform matrix is used for all experiments, i.e. for the proposed BSBL-LNLD, tMFOCUSS, TMSBL, STBSL, and BSBL-BO methods. These 5 methods were applied on the MMV problem. For the SMV problem, only BSBL-LNLD and BSBL-BO were applied, as the others are not applicable to the SMV problem. Thus, single channels are compressed channel by channel for the SMV problem, and multiple channels are compressed simultaneously in the case of the MMV problem. For the SMV problem, the EEG data is compressed for each vector such that yl = A xl, ∀l = 1, 2, . . . , L. In the case of the MMV problem, the EEG signals were compressed using the vector y = A vec[XT]. As shown above, the proposed method can sustain good recovery results even at high compression rates, e.g. CR 90% (10:1 compression ratio).

Figure 4.6 shows a sample of the recovered EEG signals using different state of the art MMV algorithms such as STBSL and TMSBL.
Table 4.1: NMSE of the different methods at different compression rates

CR%                           90% (10:1)  85% (6.6:1)  80% (5:1)  70% (3.3:1)  60% (2.5:1)  50% (2:1)

NMSE (BCI Dataset)
BSBL-LNLD (Multi-Channel)     0.065       0.058        0.016      0.008        0.005        0.002
BSBL-BO (Multi-Channel)       0.094       0.089        0.075      0.014        0.006        0.003
BSBL-LNLD (Single-Channel)    0.461       0.384        0.242      0.154        0.094        0.045
BSBL-BO (Single-Channel)      0.551       0.414        0.318      0.217        0.134        0.089
STBSL-EM                      0.791       0.427        0.133      0.038        0.017        0.009
TMSBL                         0.248       0.178        0.066      0.04         0.022        0.014
tMFOCUSS                      0.665       0.269        0.077      0.035        0.018        0.011

NMSE (Seizure Dataset)
BSBL-LNLD (Multi-Channel)     0.242       0.191        0.174      0.114        0.097        0.035
BSBL-BO (Multi-Channel)       0.311       0.257        0.216      0.165        0.114        0.058
BSBL-LNLD (Single-Channel)    0.457       0.412        0.35       0.261        0.156        0.098
BSBL-BO (Single-Channel)      0.671       0.575        0.472      0.319        0.228        0.147
STBSL-EM                      0.984       0.728        0.419      0.166        0.091        0.032
TMSBL                         0.698       0.687        0.217      0.154        0.11         0.036
tMFOCUSS                      0.912       0.757        0.683      0.441        0.098        0.021

NMSE (Sleeping Dataset)
BSBL-LNLD (Multi-Channel)     0.148       0.135        0.095      0.064        0.009        0.004
BSBL-BO (Multi-Channel)       0.176       0.153        0.113      0.094        0.015        0.007
BSBL-LNLD (Single-Channel)    0.388       0.265        0.147      0.092        0.058        0.029
BSBL-BO (Single-Channel)      0.475       0.356        0.225      0.134        0.075        0.044
STBSL-EM                      0.89        0.561        0.315      0.126        0.065        0.007
TMSBL                         0.352       0.243        0.156      0.114        0.072        0.009
tMFOCUSS                      0.864       0.587        0.413      0.324        0.054        0.017

Figure 4.6: Samples of Recovered EEG Signals at 90% Compression Rate using state-of-the-art Recovery Algorithms

The compression rate was 90%. As shown in the figure, BSBL-LNLD and BSBL-BO using DT vec[XT] show superior quality recovery at a 90% compression rate. To the best of our knowledge, these are the best results that have been achieved so far with respect to obtaining high compression rates and low reconstruction errors for EEG signals in compressed sensing.
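The NMSE figure of merit used throughout table 4.1 (defined in section 4.4.3) can be sketched as follows. This is a minimal sketch; the function name is our own.

```python
import numpy as np

def nmse(x, x_hat):
    """Normalized mean square error of section 4.4.3:
    NMSE(X, X_hat) = ||X - X_hat|| / ||X - mean(X)||.
    Subtracting the mean from the normalization term removes the DC
    electrode bias discussed in the text."""
    x = np.asarray(x, dtype=float)
    x_hat = np.asarray(x_hat, dtype=float)
    return np.linalg.norm(x - x_hat) / np.linalg.norm(x - x.mean())

# Example: a perfect recovery gives 0, and recovering only the mean
# of the signal gives exactly 1.
x = np.array([1.0, 2.0, 3.0, 4.0])
print(nmse(x, x))                          # 0.0
print(nmse(x, np.full_like(x, x.mean())))  # 1.0
```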
We note that JPEG2000 still achieves the most accurate results at high compression rates; however, it is not suitable for WBANs.

4.4.5 Seizure Detection Results After Signal Recovery

In this section, the seizure detection technique discussed in sections 3.2.4 and 3.3.4 is used to evaluate the detection performance when BSBL-LNLD is used to solve the MMV problem. For maximum power savings, the seizure detection results are compared at a 90% compression rate between the baseline (no compression), signals recovered using BSBL-BO (the state of the art), and signals recovered using the MMV BSBL-LNLD. As shown previously in table 4.1, the recovery quality of BSBL-LNLD is better than that of BSBL-BO and STSBL-EM; it is therefore expected that the seizure detection performance will improve when BSBL-LNLD is used. The results are shown in table 4.2. BSBL-LNLD improved the seizure detection when 90% compression is applied. Using the state-of-the-art recovery algorithm at a 90% compression rate, the seizure sensitivity deteriorated by 10% compared to the no-compression case. On the other hand, the seizure sensitivity decreased by only 0.8% when BSBL-LNLD is used. The number of false positives is 25.45/hour when using BSBL-BO, but only 2.89 when BSBL-LNLD is used.

Table 4.2: Average Seizure Detection Performance at 90% compression rate: signals recovered by BSBL-BO and BSBL-LNLD

CR% / Recovery Algorithm   Seizure Sensitivity (%)  Epoch Sensitivity (%)  Specificity  FP/hour  Latency (Sec)
0% / No Compression        95.2                     57.6                   99.2         1.69     9.5
90% / BSBL-BO              85.2                     46.8                   96.4         25.45    11.6
90% / BSBL-LNLD (MMV)      94.4                     57.3                   98.7         2.89     9.0

4.5 Conclusion and Discussion

This chapter presents a method to compress multi-channel EEG signals using compressive sensing.
We first show that we can represent the multi-channel EEG data measurements by a vector that has a good sparsity structure. Previous works have shown that neurophysiological signals (including EEG signals) have linear and non-linear dependencies within and between channels. We then exploit both of these dependencies when reconstructing the sparse EEG vector.

To confirm the existence of linear and non-linear dependencies in the proposed EEG vector representation, we calculate the average correlation and phase locking values of various EEG multi-channel data. To exploit these dependencies, we propose a modification to the parameters of BSBL-BO. This modification enables the method to exploit not only the linear dependency but also the non-linear one. The modified BSBL-BO is then applied on the sparse vector that results from our proposed representation of the EEG data. The DCT coefficients of the resultant vector are shown to have a highly sparse and redundant block structure. We used correlated and uncorrelated random signals to show that this sparse structure is reproducible in correlated signals. The redundancy in the block structure increases with the number of correlated channels.

The proposed compressed sensing technique is applied on the MMV and the SMV problems. The compressed signals were decompressed using different existing algorithms for comparison. Two datasets of EEG signals of different patients and a third dataset of brain-computer interface data were used. The results show that the proposed BSBL-LNLD method results in significantly lower error compared to the other methods, even at high compression rates such as 10:1. To the best of our knowledge, the results obtained are the best in the WBAN literature for EEG signals. JPEG2000 still remains the best compression technique in terms of accuracy at high compression rates, but it is not suitable for WBANs due to its high power consumption.
Previously, in section 3.3.3, we demonstrated that at 10:1 compression the power consumption decreases dramatically; however, it is not reliable to use the current state-of-the-art recovery algorithms to recover the data, especially when seizure detection is used as a WBAN application. We demonstrated that it is possible to use our proposed BSBL-LNLD to achieve battery life efficiency in a way that is reliable for seizure detection.

Chapter 5
A Compressed Sensing Technique by Under-Sampling EEG Signals

5.1 Problem Description

The previous chapter demonstrated that high compression rates are effective in saving battery energy. However, the higher the compression rate, the less accurate the recovery results, which is not desirable in applications like seizure detection. This is because the overall normalized mean square error of the recovered compressed signals is considered high, meaning that some of the information in the signals is lost during recovery. In order to have an acceptable seizure detection performance and still maintain high power efficiency, the compression rates should be decreased. Since the goal of this study is to solve the power consumption problem and to demonstrate its reliability in tele-medicine applications such as seizure detection, two approaches are studied in this thesis. The first approach is to study the structures in the EEG signals that could be exploited in recovering the lost information, to improve the quality of recovery. This is demonstrated in chapter 4. The second approach is explained in this chapter. The challenge here is to reduce the power consumption in the acquisition stage of the EEG signals and still maintain high compression rates. Previously, we referred to this as analog compressed sensing in section 3.2.1. The data are randomly sampled as Dirac functions; thus, the matrix product operation is eliminated from the micro-controller and replaced by a random sampling circuit, which consumes less power.
The previous studies discussed in section 1.9 have only focused on reducing the amount of transmitted data to tackle the power consumption problem, and these approaches do not account for reducing the sensing power consumption. Only a few works have attempted to solve this problem. In [44] it is argued that by acquiring a smaller number of samples in a random fashion, the sensing energy is reduced. The standard CS recovery techniques yield a high NMSE, so that work used an alternative recovery approach based on the theory of low-rank completion. They applied the compression experiment using the dataset in [5], and their proposed algorithm was able to achieve power savings at a 2:1 compression ratio; the signal recovery was poor at high compression rates. In [45], it is shown that by applying blind compressed sensing (a compressed sensing technique that incorporates dictionary learning while solving the synthesis prior formulation), better results are achieved. The work in [43] proposed a technique combining blind compressed sensing with low-rank recovery (i.e. combining the techniques in [44] and [45]). This technique is considered the state of the art for solving the under-sampling problem. It achieves on average 0.18 NMSE at 5:1 using the dataset in [5]. In this work we propose a different approach to solve the under-sampling problem: building an over-complete dictionary and using it to solve the analysis prior formulation that is discussed in section 2.2.

Figure 5.1: A Block Diagram Showing the Sensor and the Server Node when Solving the Under-Sampling Problem

By solving the under-sampling problem, the matrix multiplication operation on the micro-controller is no longer needed. Therefore, the micro-controller is replaced by a simple low-power sampling circuit. Also, the ADC is controlled by a random generator that enables the operation of the ADC at random instants, as shown in figure 5.1.
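The random (Dirac) sampling described above can be sketched as follows. This is an illustrative sketch under our own naming; it shows that the acquisition is equivalent to applying a row-subsampled identity matrix, so no matrix product is needed at the sensor.

```python
import numpy as np

def undersample(x, ratio, seed=0):
    """Random under-sampling acquisition: keep a random subset of
    sample instants (Dirac sampling). Equivalent to y = R x, where R
    is the identity matrix restricted to the selected rows, so the
    sensor node never performs a matrix product."""
    rng = np.random.default_rng(seed)
    m = int(len(x) * ratio)                     # samples kept
    idx = np.sort(rng.choice(len(x), size=m, replace=False))
    return x[idx], idx

N = 640                                          # 5 s at 128 Hz
x = np.sin(2 * np.pi * 10 * np.arange(N) / 128)  # stand-in EEG channel
y, idx = undersample(x, ratio=0.2)               # 5:1 compression

# On the server side, the measurement operator is just row selection:
R = np.eye(N)[idx]
assert np.allclose(R @ x, y)
print(len(x), "->", len(y))                      # 640 -> 128
```

The recovery side would then solve the analysis prior formulation of section 2.2 with R as the measurement operator and the over-complete dictionary of section 5.3.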
Compared with the conventional sensing node shown in Figure 3.1, less power consumption is achieved.

5.2 Approach

Generally, the Short Time Fourier Transform (STFT) and wavelets are used to decompose non-stationary signals in order to analyze the time-frequency decomposition of the signal. STFT uses a sliding window to decompose the signal in the time and frequency domains, giving information about both time and frequency [3]. A problem that arises is that the length of the window limits the resolution in the frequency domain. The wavelet transform, on the other hand, solves this problem, as it is adapted in time and frequency by translation and dilation [3],[40]. The translated versions of a wavelet function locate parts of the signal in time, whereas the dilated versions allow the signal to be analyzed at different frequency scales.

In the wavelet transform the signal is represented using a scaling function, φ(t), and a wavelet function, ψ(t). The function φ(t) is more compact at low frequencies, while the function ψ(t) concentrates at relatively high frequencies. Therefore, φ(t) approximates the signal (captures the low frequencies), while ψ(t) finds the details of the signal (captures the high frequencies) [3],[40]. Usually, signals are realized on multiple decomposition levels of low and high frequency components by dilating and translating φ(t) and ψ(t). This is formulated as φ_{j,k}(t) = 2^{j/2} φ(2^j t − k) and ψ_{j,k}(t) = 2^{j/2} ψ(2^j t − k), where j is the scaling or dilation parameter (i.e., the visibility of frequency) and k is the translation or position parameter.

Theoretically, a discrete wavelet function ψ_{j,k}(t) can represent any non-stationary signal using an infinite number of dilations and translations [8]. The translation of a wavelet function in time aims to find the particular part of the signal that best correlates with the wavelet function, which means it represents a certain frequency at that time.
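As a toy illustration of the dilation/translation scheme above, the sketch below implements one octave per decomposition level using the Haar pair. Haar is used here only because its φ and ψ filters are short enough to write inline; the thesis itself uses the Meyer wavelet.

```python
import numpy as np

def haar_dwt_level(x):
    """One decomposition level: phi yields the approximation (low
    frequencies), psi yields the details (high frequencies)."""
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # scaling function phi
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # wavelet function psi
    return approx, detail

def haar_idwt_level(approx, detail):
    """Invert one level by interleaving the reconstructed even/odd samples."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

rng = np.random.default_rng(1)
x = rng.standard_normal(128)        # stand-in for one EEG window

# Three decomposition levels: each level halves the time resolution of the
# approximation and captures one octave of frequencies in the details.
a, details = x, []
for _ in range(3):
    a, d = haar_dwt_level(a)
    details.append(d)

# Perfect reconstruction from the coefficients.
for d in reversed(details):
    a = haar_idwt_level(a, d)
assert np.allclose(a, x)
```

The constant pieces of the signal land in the approximation path and the sample-to-sample changes land in the detail path, which is exactly the low-frequency/high-frequency split described above.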
The length of the wavelet coefficients is determined by upper and lower bound values [8]; this length is called the support of the wavelet. A wavelet is said to have compact support if and only if its support is bounded, and infinite support otherwise. Based on the admissibility condition properties of wavelets [3],[40],[8], the Fourier transform of ψ(t) vanishes at zero frequency. This means a wavelet function acts like a band-pass filter in the frequency domain. Therefore, a signal can be realized as a combination of a stack of band-pass filters.

F{ψ(at)} = (1/|a|) Ψ(ω/a)     (5.1)

Equation 5.1 shows the Fourier transform of a dilated wavelet function, where Ψ(ω) is the Fourier transform of ψ(t). It means, for example, that decreasing the length of the wavelet by a factor of 2 stretches the frequency spectrum of the wavelet by a factor of 2 and also shifts the frequency components up by a factor of 2. In this work we use this insight to construct an over-complete dictionary that contains many filters at different frequency dilations and translations. To achieve this we picked a wavelet that does not have compact support, so that the wavelet length can be controlled to obtain different sizes. Since the over-complete dictionary we constructed is not orthogonal or a tight frame, we solve the compressed sensing problem using the analysis prior formulation, which was expressed previously in section 2.2.

5.3 Method

5.3.1 Meyer Wavelet for Over-Complete Dictionary Construction

The Meyer wavelet is an orthogonal wavelet that is indefinitely differentiable with infinite support [61]; hence the lower and upper bounds can be selected over different ranges to generate multiple band-pass filters. Generally, the Meyer scaling function Φ(ω) and wavelet function Ψ(ω) are defined in the frequency domain and transformed to the time domain using the inverse Fourier transform [16].

Figure 5.2: Band-pass filter bank over-complete dictionary generated using the Meyer Ψ(ω) function
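A simplified sketch of this kind of construction — band-pass profiles defined between chosen lower and upper frequency bounds, transformed to the time domain with the inverse FFT, and translated in time — is given below. It is an illustration only, not the thesis implementation: a generic raised-cosine bump stands in for the exact Meyer Ψ(ω) profile, and the band edges and shift spacing are arbitrary choices.

```python
import numpy as np

N = 128                              # atom length (one EEG window)

def bandpass_atom(lo_bin, hi_bin, shift):
    """Define a band-pass filter in the frequency domain (a smooth bump
    between the lower and upper bound bins), transform it to the time
    domain with the inverse FFT, and translate it by `shift` samples."""
    H = np.zeros(N // 2 + 1)
    bins = np.arange(lo_bin, hi_bin)
    # Raised-cosine bump: a stand-in for the Meyer band-pass profile.
    H[bins] = 0.5 - 0.5 * np.cos(2 * np.pi * (bins - lo_bin) / (hi_bin - lo_bin))
    atom = np.fft.irfft(H, n=N)
    atom = np.roll(atom, shift)      # time translation
    return atom / np.linalg.norm(atom)

# Over-complete dictionary: several frequency bands x several time shifts.
bands = [(2, 6), (4, 10), (8, 16), (12, 24), (16, 32), (24, 48), (32, 64)]
shifts = range(0, N, 4)
D = np.column_stack([bandpass_atom(lo, hi, s) for lo, hi in bands for s in shifts])
assert D.shape == (128, 224)         # 224 atoms for a 128-sample window
```

With 7 bands and 32 shifts the dictionary has 224 atoms for a 128-sample window, i.e. it is over-complete, and each atom's spectrum is confined to its band, mirroring the filter-bank view of the Meyer dictionary.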
We used this property to construct multiple wavelet filters using the Meyer function in Matlab. We used different values of the lower and upper bounds to generate multiple filter banks, in which each filter is translated in time to cover a wide range of translations. The Fast Fourier Transform of the generated over-complete dictionary is shown in Figure 5.2, demonstrating that the dictionary covers fine frequency resolutions in the frequency domain. Please note that the band-pass filters displayed in Figure 5.2 are reduced by a factor of 300 (i.e., between two displayed filters in the image there are 300 more filters that are not shown, for visualization purposes).

5.3.2 Analysis Prior Formulation

Natural non-stationary signals like EEG signals are not sparse, but have a sparse representation in a transform domain such as Wavelet, DCT, or Gabor bases. Such bases are used to build dictionaries to achieve sparsity, as observed in [55], [25], [69], [54], [2], [46]. The dictionaries presented in those works are either orthogonal or tight frames, so it is sufficient to solve the CS problem using the synthesis prior formulation discussed in section 2.1. The Meyer over-complete dictionary in our work is neither a tight frame nor orthogonal; therefore, we use the analysis prior formulation to solve our SMV CS problem. We used the NESTA algorithm to solve this problem [6].

5.4 Experiments and Results

5.4.1 Datasets

The algorithms were tested on a wide range of EEG signals. Experiments on the CS compression and recovery of EEG signals were conducted on 3 datasets. The first dataset was from the BCI Competition IV dataset [30]. This dataset was recorded from healthy subjects or generated artificially and contained the data of 7 patients. The recording was made using 59 EEG channels per subject at an initial sampling rate of 1000 Hz. The other databases were from Physionet [29]. The second dataset consisted of seizure data of 23 paediatric subjects.
The recordings contained 23 EEG channels at a sampling rate of 256 Hz. The third dataset is a collection of 108 polysomnographic recordings from the monitoring of different sleep disorders; each recording contained between 5 and 13 EEG channels sampled at 256 Hz [58]. The datasets were down-sampled to 128 Hz so as to provide a realistic sampling frequency in the context of WBAN and to ensure uniformity between the datasets. Non-overlapping windows of length N = 128 were used in our experiments.

Figure 5.3: Absolute values of Meyer wavelet coefficients and Gabor coefficients, sorted in descending order

5.4.2 Dictionary Sparsity

Usually the Gabor dictionary is used as a sparsifying basis for EEG signals [54], [55], [25]. In this section we show that the Meyer over-complete dictionary is also a good sparse basis for EEG signals. The Gabor dictionary is generated using the same method mentioned in [55]. Figure 5.3 shows the absolute values of the Gabor and Meyer wavelet coefficients sorted in descending order. As shown in the figure, both dictionaries have fast-decaying coefficients, which is a good indicator of sparsity.

5.4.3 Compression/Recovery Results

It was argued in [45] that wavelets are coherent with the Dirac function, and therefore it is not possible to use wavelets to solve the under-sampling compressed sensing problem. However, this is not true in the case of our Meyer wavelet dictionary. We used the Dirac function as a sensing matrix to simulate the random sampling circuit in Figure 5.1. In section 4.3.3, the A matrix is a binary matrix with two non-zeros in each column; this does not simulate the Dirac sampling basis. In order to simulate a random Dirac function in time, each row in A should have only one non-zero, whose column is selected at random but assigned in ascending order from row to row. The compressed data is recovered using the NESTA algorithm and the generated Meyer wavelet over-complete dictionary.
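The sensing matrix just described — one non-zero per row, at random but ascending column positions — can be sketched as follows (an illustrative NumPy fragment; the NESTA recovery step and the Meyer dictionary are not reproduced here):

```python
import numpy as np

def dirac_sensing_matrix(M, N, rng):
    """Sensing matrix simulating the random sampling circuit: each row has
    a single 1 at a random column, with the columns in ascending order
    from row to row (samples are acquired in time order)."""
    cols = np.sort(rng.choice(N, size=M, replace=False))
    A = np.zeros((M, N))
    A[np.arange(M), cols] = 1.0
    return A, cols

rng = np.random.default_rng(2)
N, M = 128, 13                       # roughly a 90% compression rate (10:1)
A, cols = dirac_sensing_matrix(M, N, rng)

x = rng.standard_normal(N)           # stand-in for one EEG window
y = A @ x                            # the M randomly timed ADC samples

assert np.allclose(y, x[cols])       # measuring = keeping M of the N samples
```

Applying this A to a window is exactly equivalent to keeping M of its N samples in time order, which is what the random-generator-controlled ADC does in hardware.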
The original data are compressed at different compression rates. The compression rate is defined as CR% = (1 − M/N) × 100, while the compression ratio is N:M. Thus a compression ratio of N/M = 2:1 corresponds to CR% = 50%. In our experiments we compress the EEG datasets at different rates, including 70% (3.3:1), 80% (5:1), and 90% (10:1).

For the experiments, windows of 1 second each were randomly selected from the 3 datasets: a total of 200 windows from each channel and for each subject. We measured the reconstruction quality using the NMSE measure. During the recording of the EEG datasets, a DC voltage bias was present in the electrodes. This bias could cause misleading results when calculating the NMSE; to avoid it, the mean of each signal is subtracted from the signal. For a fair evaluation we use the NMSE proposed in [25]: NMSE(X, X̂) = ‖X − X̂‖ / ‖X − µ_X‖. As mentioned above, only a limited number of works have been proposed to solve the CS under-sampling problem for EEG signals. In these experiments we compare our results with the state of the art technique proposed in [43].

Table 5.1: Mean and standard deviation of NMSE at different compression rates using 3 different datasets

Recovery Method                              Data Set   90%                80%                70%
BSBL-BO [1] (state of the art EEG CS)        BCI        0.5511 +/- 0.00    0.3182 +/- 0.00    0.2178 +/- 0.00
                                             Seizure    0.6715 +/- 0.00    0.4721 +/- 0.00    0.3192 +/- 0.00
                                             Sleep      0.4751 +/- 0.00    0.2251 +/- 0.00    0.1344 +/- 0.00
BCS [13] (state of the art under-sampling)   BCI        0.6589 +/- 0.023   0.4528 +/- 0.011   0.3254 +/- 0.012
                                             Seizure    0.7412 +/- 0.012   0.4625 +/- 0.017   0.4159 +/- 0.001
                                             Sleep      0.6478 +/- 0.001   0.4281 +/- 0.005   0.3510 +/- 0.045
Proposed Method                              BCI        0.2486 +/- 0.0590  0.1108 +/- 0.0419  0.0508 +/- 0.0249
                                             Seizure    0.2064 +/- 0.0876  0.1292 +/- 0.0780  0.0968 +/- 0.0726
                                             Sleep      0.1757 +/- 0.0684  0.1072 +/- 0.0692  0.0421 +/- 0.0355

Table 5.1 shows the mean and standard deviation of the NMSE of the recovery quality.
As shown in the table, the blind compressed sensing (BCS) technique has the highest errors, especially at the high compression rate (90%). The BSBL-BO method is used to solve the conventional CS Single Measurement Vector problem for EEG signals [1]. Our proposed technique performs much better than this method as well.

5.4.4 Power Consumption Estimation

The power consumption of the proposed method cannot be estimated using the same technique discussed in section 3.2.3, because the hardware is different: the micro-controller is no longer needed, and a random generator is proposed as an alternative. Therefore, we used the same power estimation technique that was proposed in [45], [43].

For a regular CS technique the total power is given as:

P_tot = P_sense + P_Processor + P_Transmitter     (5.2)

The sensing power P_sense has two parts for every channel, amplification and analog-to-digital conversion, referred to as P_amp and P_ADC respectively. P_Processor is the power consumed by the micro-controller to perform the linear multiplication y = Ax. P_Transmitter is the power consumed by the transmitter of the sensor. Since the micro-controller is not used in the present framework, we adopt the method used for measuring the power consumption in [45], [43]. The total power consumed at the sensing node comprises two power sinks: P_tot = P_sense + P_Transmitter. It was shown that P_tot varies linearly with the amount of data transmitted; therefore P_sense = (CM/N)(P_amp + P_ADC), where C is the number of channels. The P_ADC value is obtained from [60] for 12 bits per sample (R) and an ADC sampling rate of 500 Hz; with these specifications, P_ADC is estimated to be 0.2 mW.
The specifications of the power amplifier are the same as those used in [43], which are obtained from [33]; thus P_amp = 0.9 mW.

As shown in [43], the communication power is expressed as P_Transmitter = (CM/N) J f_s R, where J is the transmission energy per bit, f_s is the sampling frequency of the ADC, and R is the number of bits per sample (resolution). The energy required to transmit one bit is estimated to be 5 nJ, the sampling frequency is 500 samples per second, and each sample is assumed to be 12 bits. The energy consumption is computed at different compression rates and shown in Table 5.2.

Table 5.2: Power consumption in µW at different compression rates

Compression Rate (%)   Power Consumption (µW)
50% (2:1)              357
60% (2.5:1)            286
70% (3.3:1)            214
80% (5:1)              143
90% (10:1)             71

5.4.5 Seizure Detection After Signal Recovery Results

In this section the seizure detection technique previously discussed in sections 3.2.4 and 3.3.4 is used to evaluate the detection performance. The signals are compressed at a high compression rate, 90% (10:1), as described in section 5.4.3. The signals are then recovered using the proposed technique and also using the BCS-LR technique proposed in [43]. Seizure detection is then applied to both sets of recovered signals, and the detection accuracies are compared below.

Figure 5.4 shows the event sensitivity and specificity performance at different NMSE values of the recovered signals, using both techniques. The figure shows that the proposed method is far superior to the state of the art technique at high compression rates.

At a 90% compression rate (10:1), long battery life is achieved while the seizure detection sensitivity has decreased by only 1.4% compared to the no compression case (i.e., NMSE = 0). For the state of the art method, on the other hand, the sensitivity has decreased by 11.5%. Also, at a 90% compression rate
(10:1), the seizure detection specificity has decreased by 0.79% compared to the no compression case, whereas for the state of the art method the specificity has decreased by 8.7%.

Figure 5.4: Seizure detection event specificity and event sensitivity at different recovery NMSE

5.5 Conclusion

A new solution is proposed for the under-sampling problem of CS compressed EEG signals. The results of the technique are compared with those of the conventional compressed sensing technique using the BSBL-BO algorithm, and also with the BCS technique proposed in [43]. Our proposed method outperforms both of these EEG CS methods; its recovery yields more accurate results even at high compression rates. It is hence concluded that our proposed method solves the under-sampling problem for EEG WBAN applications, achieving maximal power savings and high recovery accuracy at high compression rates.

Chapter 6

Conclusions and Future Directions

6.1 Conclusions

This thesis proposes two novel energy efficient compression techniques for the wireless transmission of EEG signals, using Compressive Sensing (CS). The aim is to elongate the battery life at the EEG sensor node. Two CS frameworks were studied. The first samples the signals fully and multiplies the signal with a sparse binary matrix to obtain an aggregation of random samples. The second framework under-samples the analog signal directly, i.e., the samples are obtained randomly on the Dirac basis.

The first proposed CS technique takes advantage of the inherent structure present in EEG signals to efficiently decompress these signals. The under-sampling framework takes advantage of the infinite support of the Meyer wavelet function to construct an over-complete dictionary of several scales and shifts. This over-complete dictionary is sufficient to represent the time-frequency decompositions of EEG signals.
We used the analysis prior formulation CS recovery technique together with the over-complete dictionary to recover the data.

In Chapter 3, we presented a simple CS-based framework. We discussed the two state of the art compression techniques, BSBL-BO and STSBL-BO, that had been proposed to solve the CS framework for EEG signals. We showed that neither technique achieves high recovery accuracy at high compression rates, even though more power savings, and thus longer battery life, are achieved at high compression rates. We showed that at a high compression rate such as 10:1, the seizure detection accuracy deteriorates compared to the no compression case. This means that even the current state of the art techniques may not be reliable enough for use in WBAN applications. Therefore, it is essential to develop new techniques that can maintain high quality signal recovery at high compression rates.

In Chapter 4, we presented a modification to the BSBL-BO recovery technique. The first modification involves restructuring the signal representation to give the EEG signals a better block sparse representation that can be exploited during the recovery of the compressed signals. This block structure is exploitable by BSBL-BO and leads to higher accuracy at high compression rates.

In Chapter 4 we also discussed the previous studies showing that EEG signals have both linear and non-linear temporal and spatial dependencies. The modifications we proposed to the signal representation enable BSBL-BO to exploit only the linear dependencies in the EEG signals, because BSBL-BO exploits only the correlation in the signals, and correlation is a linear dependency measure. In addition to enabling BSBL-BO to exploit the correlation in the signals, we introduced an additional hyper-parameter in BSBL-BO to enable it to also exploit the Phase Locking Values. The Phase Locking Value is a non-linear dependency measure.
The seizure detection accuracies and the recovery accuracies of the proposed techniques are shown to be superior to those of the state of the art techniques.

In Chapter 5, we presented a recovery technique for the CS framework that captures the under-samples directly from the complete analog EEG signals. We built an over-complete dictionary using Meyer wavelet functions; each column of the dictionary matrix contains multiple levels of scales and translations of the Meyer wavelet functions. We showed that this dictionary sparsifies the EEG signals and compared it with the Gabor dictionary. We showed that the analysis prior formulation, solved with the NESTA algorithm, was sufficient to reconstruct the compressed signals. The seizure detection accuracies and the recovery accuracies of the proposed technique are shown to be superior to those of the state of the art techniques.

6.2 Future Directions

It is important to verify the reliability of these techniques in other applications such as Brain Computer Interfaces, sleeping pattern recognition, and seizure source localization. If high accuracy recovery is achieved compared to other techniques, we will look into better real-time implementations of these techniques to improve scalability and speed. Parallel computing technologies such as Hadoop and Apache Spark are good candidates for implementing these techniques on the server side to obtain fast processing, and hence to achieve real-time scalable processing.

We demonstrated the block sparsity of the highly correlated signals using the DCT coefficients of vec(X^T). We suggest taking advantage of this structure and applying Expectation Maximization to obtain the means and covariances of a multi-variate Gaussian distribution. The means and covariances are effectively the compression of the signals, and the signals are recovered by solving for the maximum likelihood of this representation.

Bibliography

[1] Quasar Sensing a world of potential. EEG: DSI 10/20. Accessed: 09/11/2015.
[2] Casson A.J.
Rodriguez-Villegas E. Abdulghani, A.M. Quantifying the performance of compressive sensing on scalp EEG signals. Applied Sciences in Biomedical and Communication Technologies (ISABEL), 2010 3rd International Symposium on, pages 1–5, 2010.
[3] Adhemar B. Learning to swim in a sea of wavelets. Bull. Belg. Math. Soc., 2:146, 1995.
[4] Blanco-Velasco M. Cruz Roldan F. Bazan-Prieto C., Cardenas-Barrera J. Electroencephalographic compression based on modulated filter banks and wavelet transform. In Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 7067–7070, 2011.
[5] Krauledat M. Müller K.-R. Curio G. Blankertz B., Dornhege G. The non-invasive Berlin brain-computer interface: Fast acquisition of effective performance in untrained subjects. NeuroImage, 2007.
[6] Becker S. Bobin J. NESTA: A fast and accurate first-order method for sparse recovery. Technical report, California Institute of Technology, 2009.
[7] Terry J.R. Breakspear M. Detection and description of non-linear interdependence in normal multichannel human EEG data. Clinical Neurophysiology, 113:735–753, 2002.
[8] Valens C. A really friendly guide to wavelets. Technical report, University of New Mexico, 1999.
[9] Wakin M. B. Candes E. J. An introduction to compressive sampling. IEEE Signal Processing Magazine, 25(2):21–30, 2008.
[10] Rodríguez Valdivia E. Cardenas-Barrera J. L., Lorenzo-Ginori J. V. A wavelet-packets based algorithm for EEG signal compression. Informatics for Health and Social Care, 29(1):15–27, 2004.
[11] Huo X. Chen J. Theoretical results on sparse representations of multiple-measurement vectors. IEEE Trans. Signal Process, 54(12):4634–4643, 2006.
[12] Ward R. Chiang J. Energy-efficient data reduction techniques for wireless seizure detection systems. Sensors, 14:2036–2051, 2014.
[13] Pedersen M.G. Pedersen C.B. Olsen J. Sidenius P. Christensen J., Vestergaard M. Incidence and prevalence of epilepsy in Denmark. Epilepsy Res, 76:60–65, 2007.
[14] Kjersti E. Kreutz-Delgado K.
Cotter S.F., Rao B.D. Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Transactions on Signal Processing, 2005.
[15] Katznelson R. D. Neuroelectric measures of mind. In: PL Nunez (Au), Neocortical Dynamics and Human EEG Rhythms. Oxford University Press, New York, 1995.
[16] Meyer Y. Daubechies I., Grossmann A. Painless nonorthogonal expansions. J. Math. Phys., 27(5):1271–1283, 1986.
[17] Labeau F. Dehkordi V.R., Daou H. A channel differential EZW coding scheme for EEG data compression. IEEE Transactions on Information Technology in Biomedicine, 15(6):831–838, 2011.
[18] Digi-Key. Battery life calculator. Accessed: 16/11/2015.
[19] Wakin M. B. Baron D. Baraniuk R. G. Duarte M. F., Sarvotham S. Joint sparsity models for distributed compressed sensing. Signal Processing with Adaptive Sparse Structured Representations, in online proceedings of Workshop, 2005.
[20] Tipping M. E. Sparse bayesian learning and the relevance vector machine. Journal of Machine Learning Research 1, pages 211–244, 2001.
[21] Mishali M. Eldar Y. C. Robust recovery of signals from a structured union of subspaces. IEEE Trans. Inf. Theory, 55:5302–5316, 2009.
[22] Bolcskei H. Eldar Y.C., Kuppinger P. Block-sparse signals: Uncertainty relations and efficient recovery. IEEE Transactions, Signal Processing, 58(6), 2010.
[23] Mishali M. Eldar Y.C. Block sparsity and sampling over a union of subspaces. 16th International Conference on Digital Signal Processing, pages 1–8, 2009.
[24] Emotiv. Epoc specifications. Accessed: 09/11/2015.
[25] Ward R.K. Fauvel S. An energy efficient compressed sensing framework for the compression of electroencephalogram signals. Sensors, 14(1):1474–1496, 2014.
[26] Ozgur Y. Felix J. H., Michael P. F. Fighting the curse of dimensionality: Compressive sensing in exploration seismology. IEEE Signal Processing Magazine, 29:88–100, 2012.
[27] Mallat S. G. A Wavelet Tour of Signal Processing. Academic Press, 2008.
[28] Cutillo B. A. Gevins A. S.
Neuroelectric measures of mind. In: PL Nunez (Au), Neocortical Dynamics and Human EEG Rhythms. Oxford University Press, New York, 1995.
[29] Glass L. et al. Goldberger A. L., Amaral L. A. N. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation. Technical report, Stanley, 2000.
[30] Glass L. Hausdorff J. M. Ivanov P. Mark R. G. Mietus J. E. Moody G. B. Peng C. K. Stanley H. E. Goldberger A. L., Amaral L. Components of a new research resource for complex physiologic signals. PhysioBank, PhysioToolkit, and PhysioNet.
[31] Yarman B. S. Gurkan H., Guz U. EEG signal compression based on classified signature and envelope vector sets. International Journal of Circuit Theory and Applications, 37(2):351–363, 2009.
[32] Felix J. H. Randomized sampling and sparsity: getting more information from fewer samples. Geophysics, 75:173–187, 2010.
[33] Charles C. Harrison R. A low-power low-noise CMOS amplifier for neural recording applications. IEEE J. Solid State Circuits, 38(6):958–965, 2003.
[34] Glavin M. Jones E. Higgins G., McGinley B. Low power compression of EEG signals using JPEG2000. 4th International Conference on Pervasive Computing Technologies for Healthcare, pages 1–4, 2010.
[35] Glavin M. Jones E. Higgins G., McGinley B. Lossy compression of EEG signals using SPIHT. Electronics Letters, 47(18):1017–1018, 2011.
[36] Yan-ling W. Pei-hua L. Hong-xin Z., Can-feng C. Decomposition and compression for ECG and EEG signals with sequence index coding method based on matching pursuit. The Journal of China Universities of Posts and Telecommunications, 19:92–95, 2012.
[37] Imec. Holst centre and panasonic present wireless low-power active-electrode EEG headset. Accessed: 09/11/2015.
[38] Gabor J. Seizure detection using a self-organizing neural network: validation and comparison with other detection strategies. Electroencephalogr Clin Neurophysiol, 107:27–32, 1998.
[39] Jacques M. Francisco J. V.
Jean-Philippe L., Eugenio R. Measuring phase synchrony in brain signals. Human Brain Mapping, 8:194–208, 1999.
[40] Chun-Lin L. A tutorial of the wavelet transform. Technical report, National Taiwan University, 2010.
[41] Donoho D. L. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006.
[42] Polastre J. Szewczyk R. Whitehouse K. Woo A. Gay D. Hill J. Welsh M. Brewer E. et al. Levis P., Madden S. TinyOS: An operating system for sensor networks. In Ambient Intelligence, Springer: New York, NY, USA, pages 115–148, 2005.
[43] Ward R. Majumdar A. Energy efficient EEG sensing and transmission for wireless body area networks: A blind compressed sensing approach. Biomed. Signal Process. Control, 20:19, 2015.
[44] Ward R. Majumdar A., Gogna A. Low-rank matrix recovery approach for energy efficient EEG acquisition for wireless body area network. Sensors: Special Issue on State of the Art Sensor Technologies in Canada, 14(9):15729–15748, 2014.
[45] Ward R. Majumdar A., Shukla A. Row-sparse blind compressed sensing for reconstructing multi-channel EEG signals. Biomed. Signal Process. Control, 18(4):174–178, 2015.
[46] Atienza D. Vandergheynst P. Mamaghanian H., Khaled N. Compressed sensing for real-time energy-efficient ECG compression on wireless body sensor nodes. IEEE Transactions on Biomedical Engineering, 58(9):2456–2466, 2011.
[47] De Vos M. Van Huffel S. Mijovic B., Matic V. Independent component analysis as a pre-processing step for data compression of neonatal EEG. Engineering in Medicine and Biology Society, EMBC, pages 7316–7319, 2011.
[48] Advanced Brain Monitoring. B-alert x24 EEG system. Accessed: 09/11/2015.
[49] Muse. The brainwave sensing headband tech spec sheet. Accessed: 09/11/2015.
[50] neuroelectrics. Enobio 32. Accessed: 09/11/2015.
[51] NeuroSky. Mindwave headsets. Accessed: 09/11/2015.
[52] Sreenan C. O'Reilly P. Sammon D. O'Donovan T., O'Donoghue J. and O'Connor K. A context aware wireless body area network (BAN).
In proceedings of the Pervasive Health Conference, 47, 2009.
[53] Bhattacharya J. Pereda E., Quiroga R. Q. Nonlinear multivariate analysis of neurophysiological signals, progress in neurobiology. Clinical Neurophysiology, 77, 2005.
[54] Aviyente S. Compressed sensing framework for EEG compression. IEEE Statistical Signal Processing 14th Workshop 2007, pages 181–184, 2007.
[55] Fauvel S. Energy-efficient compressed sensing frameworks for the compression of electroencephalogram signals. Master's thesis, University of British Columbia, Vancouver, Canada, 2013.
[56] Morsdorf J. Bernhard J. von der Grun T. Schmidt R., Norgall T. Body area network BAN–a key infrastructure element for patient-centered medical applications. Biomed Tech, 47, 2002.
[57] Ghosh J. Strehl A. Cluster ensembles - a knowledge reuse framework for combining multiple partitions. The Journal of Machine Learning Research, 3:583–617, 2003.
[58] Sherieri A. Chervin R. Chokroverty S. Guilleminault C. Hirshkowitz M. Mahowald M. Moldofsky H. Rosa A. et al. Terzano M.G., Parrino L. Atlas, rules, and recording techniques for the scoring of cyclic alternating pattern (CAP) in human sleep. Sleep Med., 2001.
[59] Palsberg J. A. Titzer B.L., Lee D.K. Scalable sensor network simulation with precise timing. In Proceedings of Fourth International Symposium on Information Processing in Sensor Networks (IPSN 2005), Los Angeles, CA, USA, pages 477–482, 2005.
[60] Chandrakasan A. Verma N. An ultra low energy 12-bit rate-resolution scalable SAR ADC for wireless sensor nodes. IEEE J. Solid State Circuits, 42(6):1196–1205, 2007.
[61] De-Oliveira H.M. Vermehren V. Close expressions for Meyer wavelet and scale function. Technical report, Cornell University Library, 2015.
[62] Nakamura E. Grant M. Sovero E. Ching D. Yoo J. Romberg J. Emami-Neyestanak A. Candes E. Wakin M., Becker S. A nonuniform sampler for wideband spectrally-sparse environments. IEEE J. Emerg. Sel. Top. Circuits Syst., 2:516–529, 2012.
[63] Tanaka T. Rao K.R.
Wongsawat Y., Oraintara S. Lossless multi-channel EEG compression. IEEE International Symposium on Circuits and Systems, pages 161–1617, 2006.
[64] Merken P. Penders J. Leonov V. Puers R. Gyselinckx B. van Hoof C. Yazicioglu R.F., Torfs T. Ultra-low-power biopotential interfaces and their applications in wearable and implantable systems. Microelectron. J., 40:1313–1321, 2009.
[65] Rao A.S. Naraghi S. Flynn M.P. Gilbert A.C. Yenduri P.K., Rocca A.Z. Low-power compressive sampling time-based analog-to-digital converter. IEEE J. Emerg. Sel. Top. Circuits Syst., 2:502–515, 2012.
[66] Bhaskar D. R. Zhang Z. Sparse signal recovery with temporally correlated source vectors using sparse bayesian learning. IEEE Journal of Selected Topics in Signal Processing, Special Issue on Adaptive Sparse Representation of Data and Applications in Signal and Image Processing, 2011.
[67] Makeig S. Pi Z. Rao B. Zhang Z., Jung T. Spatiotemporal sparse bayesian learning with applications to compressed sensing of multichannel physiological signals. IEEE Trans. on Neural Systems and Rehabilitation Engineering, 22(6):1186–1197, 2014.
[68] Makeig S. Rao B. Zhang Z., Jung T. Compressed sensing of EEG for wireless telemonitoring with low energy consumption and inexpensive hardware. IEEE Transactions on Biomedical Engineering, 60(1):221–224, 2013.
[69] Makeig S. Rao B. D. Zhang Z., Jung T. P. Compressed sensing for energy-efficient wireless tele-monitoring of noninvasive fetal ECG via block sparse bayesian learning. IEEE Trans. on Biomedical Engineering, 60(2):300–309, 2013.
[70] Rao B. D. Zhang Z. Sparse signal recovery in the presence of correlated multiple measurement vectors. IEEE International Conference on Acoustics, Speech and Signal Processing, 2010.
[71] Rao B. D. Zhang Z. Extension of SBL algorithms for the recovery of block sparse signals with intra-block correlation. IEEE Transactions on Signal Processing, 61(8):2009–2015, 2013.
[72] Rao B. D. Zhilin Z.
Iterative re-weighted algorithms for sparse signal recovery with temporally correlated source vectors. ICASSP, 2011.

