UBC Theses and Dissertations
Reliable and efficient transmission of compressive-sensed electroencephalogram signals. Rmeily, Patrick (2014)

Full Text

Reliable and Efficient Transmission of Compressive-Sensed Electroencephalogram Signals

by

Patrick Rmeily

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in The Faculty of Graduate and Postdoctoral Studies (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2014

© Patrick Rmeily 2014

Abstract

As technologies around us emerge at a rapid rate, wireless body sensor networks (WBSNs) are increasingly being deployed to provide comfort and safety to patients. WBSNs can monitor a patient's health and transmit the collected data to a remote location where it can be assessed.

Such data is collected and transmitted using low-battery devices such as specialized sensors or even smartphones. To prolong battery life, the energy spent on acquiring, processing and transmitting the data should be minimized. This thesis addresses the case of electroencephalogram (EEG) signals. It studies the energy spent in the sensor node, mainly in the processing stage, i.e. after the data is acquired and before it is transmitted wirelessly to a receiver. To minimize this energy, the number of bits to be processed and transmitted must be minimized. Compressive sampling (CS) is ideal for this purpose, as it requires a minimal number of computations to compress a signal.

For transmitting the signals acquired by CS, we studied their quantization followed by two different schemes. Scheme 1 applies lossless Huffman coding for further compression, which allows perfect reconstruction; this is followed by a Reed-Solomon (RS) code to protect the data from errors during transmission. Scheme 2 does not apply any further compression: it only quantizes the data and applies the RS code to it.

Both schemes were then enhanced by adding an interleaver and de-interleaver, which improved the results.
The data was then sent in packets over a transmission channel, which was simulated using a two-state Markov model.

Under ideal channel conditions, Scheme 1 with Huffman compression decreased the total number of bits sent by 5.45%. The best scheme, however, was Scheme 2 followed by an interleaver. It achieved the best signal reconstruction results under normal or noisy channel conditions.

Preface

This thesis presents the research conducted by Patrick Rmeily, in collaboration with Professor Rabab K. Ward. I hereby declare that I am the first author of this thesis. The chapters in this thesis are based on work that I have conducted by myself and on projects that I have submitted as assignments as part of my past coursework. Section 2.1.3 is based on the work done by my colleague at UBC, Mr. Hesham Mahrous.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Acronyms
Acknowledgments
1 Introduction
  1.1 Telemetry for Medicinal Purposes
  1.2 Conditions for Telemetry
  1.3 Electroencephalogram Signals
  1.4 Aim and Motivation of the Thesis
  1.5 Thesis Structure
2 Literature Review and Background Theory
  2.1 Compressed Sensing
    2.1.1 Encoder
    2.1.2 Decoder
    2.1.3 Block-Sparse Bayesian Learning
  2.2 Source Coding
    2.2.1 Overview of Information Theory
    2.2.2 Huffman Code
  2.3 Transmission Channel
    2.3.1 Channel Models
    2.3.2 Two-State Markov Model
    2.3.3 Packet-Switched Networks
  2.4 Channel Coding
    2.4.1 Block vs Convolutional Codes
    2.4.2 Reed-Solomon Code
    2.4.3 Interleaving
3 Frameworks
  3.1 General Framework of the CS-EEG Encoding
  3.2 Huffman Code Followed by RS Code
  3.3 Huffman Code Followed by RS Code and Interleaving
  3.4 No Source Coding Followed by RS Code
  3.5 No Source Coding Followed by RS Code and Interleaving
  3.6 Performance Measures
4 Results
  4.1 Parameters Used in the Framework
  4.2 Huffman Code Followed by RS Code
  4.3 Huffman Code Followed by RS Code and Interleaving
  4.4 No Source Coding Followed by RS Code
  4.5 No Source Coding Followed by RS Code and Interleaving
  4.6 General Comparisons
5 Summary and Conclusions
  5.1 Main Results
  5.2 Future Work
Appendices
  A
Bibliography

List of Tables

4.1 Results for Different Values of qbits
4.2 Values for α and β
4.3 Results for Random Patterns Using RS(255,223)
4.4 Results for Random Patterns Using RS(255,193)
4.5 Results for Random Patterns Using RS(255,153)
4.6 Results for Fixed Patterns Using RS(255,223)
4.7 Results for Fixed Patterns Using RS(255,193)
4.8 Results for Fixed Patterns Using RS(255,153)
4.9 Results for Random Patterns Using RS(255,223) and Interleaving
4.10 Results for Random Patterns Using RS(255,193) and Interleaving
4.11 Results for Random Patterns Using RS(255,153) and Interleaving
4.12 Results for Fixed Patterns Using RS(255,223) and Interleaving
4.13 Results for Fixed Patterns Using RS(255,193) and Interleaving
4.14 Results for Fixed Patterns Using RS(255,153) and Interleaving
4.15 Results for Random Patterns Using RS(255,223) with no Source Coding
4.16 Results for Random Patterns Using RS(255,193) with no Source Coding
4.17 Results for Random Patterns Using RS(255,153) with no Source Coding
4.18 Results for Fixed Patterns Using RS(255,223) with no Source Coding
4.19 Results for Fixed Patterns Using RS(255,193/153) with no Source Coding
4.20 Results for Random Patterns Using RS(255,223) with Interleaving, no Source Coding
4.21 Results for Random Patterns Using RS(255,193) with Interleaving, no Source Coding
4.22 Results for Random Patterns Using RS(255,153) with Interleaving, no Source Coding
4.23 Results for Fixed Patterns Using RS(255,223/193/153) with Interleaving, no Source Coding
4.24 Encoder Processing Times of the Approaches
4.25 Encoder Processing Times Comparison with CS Algorithm using RS(255,223)
4.26 Number of bits sent per epoch
A.1 Results for Random Patterns Using RS(255,223)
A.2 Results for Random Patterns Using RS(255,193)
A.3 Results for Random Patterns Using RS(255,153)
A.4 Results for Random Patterns Using RS(255,223) and Interleaving
A.5 Results for Random Patterns Using RS(255,193) and Interleaving
A.6 Results for Random Patterns Using RS(255,153) and Interleaving
A.7 Results for Random Patterns Using RS(255,223) with no Source Coding
A.8 Results for Random Patterns Using RS(255,193) with no Source Coding
A.9 Results for Random Patterns Using RS(255,153) with no Source Coding
A.10 Results for Random Patterns Using RS(255,223) with Interleaving, no Source Coding
A.11 Results for Random Patterns Using RS(255,193) with Interleaving, no Source Coding
A.12 Results for Random Patterns Using RS(255,153) with Interleaving, no Source Coding
A.13 Results for Fixed Patterns Using RS(255,223) with and without Interleaving
A.14 Results for Fixed Patterns Using RS(255,193) with and without Interleaving
A.15 Results for Fixed Patterns Using RS(255,153)
A.16 Results for Fixed Patterns Using RS(255,153) and Interleaving
A.17 Results for Fixed Patterns Using RS(255,223) with no Source Coding
A.18 Results for Fixed Patterns Using RS(255,193/153) with no Source Coding, with and without Interleaving
A.19 Results for Fixed Patterns Using RS(255,223) with Interleaving, no Source Coding

List of Figures

1.1 Sample of an EEG output
1.2 Block diagram for the processing of EEGs using WBSN
2.1 Block diagram of the adjusted algorithm
2.2 Plot of the entropy w.r.t. the probability of a random variable
2.3 Example of how a Huffman algorithm works
2.4 AWGN channel model
2.5 Binary symmetric channel model
2.6 BSC capacity
2.7 Two-state Markov model
2.8 Convolutional encoder [10]
2.9 Viterbi decoder [10]
2.10 Step-by-step Viterbi decoding [10]
2.11 Reed-Solomon encoder [10]
2.12 Interleaver [18]
2.13 Deinterleaved output [18]
3.1 Block diagram of the encoder
3.2 Block diagram of the first approach
3.3 Block diagram of the first approach with interleaving
3.4 Block diagram of the second approach
3.5 Block diagram of the second approach with interleaving
4.1 Huffman followed by RS code (255,223) in good channel conditions
4.2 Huffman followed by RS code (255,223) in average channel conditions
4.3 Huffman followed by RS code (255,193) in bad channel conditions
4.4 Huffman followed by RS code (255,223) with interleaving in average channel conditions
4.5 Huffman followed by RS code (255,193) with interleaving in bad channel conditions
4.6 No source coding followed by RS code (255,193) in bad channel conditions

List of Acronyms

AWGN  Additive White Gaussian Noise
BCI  Brain Computer Interface
BSBL  Block-Sparse Bayesian Learning
BSBL-BO  Block-Sparse Bayesian Learning - Bounded Optimization
BSC  Binary Symmetric Channel
BW  Bandwidth
CR  Compression Ratio
CS  Compressed Sensing
DCT  Discrete Cosine Transform
DM  Delta Modulation
DMC  Dynamic Markov Compression
DWT  Discrete Wavelet Transform
EEG  Electroencephalogram
FEC  Forward Error Correcting
GSM  Global System for Mobile communications
IID  Independent and Identically Distributed
LDPC  Low-Density Parity-Check
NMSE  Normalized Mean Square Error
PCM  Pulse-Code Modulation
RIP  Restricted Isometry Property
RS  Reed-Solomon
RV  Random Variable
SNR  Signal-to-Noise Ratio
SSIM  Structural SIMilarity
SMV  Single Measurement Vector
TDM  Time Division Multiplexing
WBSN  Wireless Body Sensor Network
WT  Wavelet Transform

Acknowledgments

There are many people I would like to thank and acknowledge for their support and help, but first and foremost I would like to thank my supervisor, Professor Rabab Ward, for her unconditional support and encouragement. I cannot express the gratitude and appreciation I have for her professionalism, body of knowledge, achievements, and most importantly her kindness. I will never forget that kindness, and I can say that it is what made all the difference for me in having her as my supervisor.

I would like to thank my lab mates, specifically Simon, Hesham and Hiba, for all the laughs we had together and all the stories we shared. There have been lots of ups and downs in the last couple of years, and having you around helped make the downs easier to go through!

Last but definitely not least, I would like to thank my family and closest friends for their continuous belief in me, even when things seemed to always go against what I was hoping for at the time. But as life teaches us so often, you need to accept the things you cannot change, and with time the reason for such events becomes clearer.
So thank you, life, for all those lessons, and thank you for sending me all those beautiful people to enrich my journey!

Chapter 1
Introduction

1.1 Telemetry for Medicinal Purposes

One of the basic human rights is access to a properly functioning health care system that ensures the safety and health of human beings. Most countries around the world spend huge amounts of money on health research, especially as many advanced countries have witnessed increases in their elderly populations [4].

The current age demography shows that more people suffer from chronic diseases, which come naturally with age. However, the prevalence of chronic diseases is also increasing drastically in younger people, due to unhealthy eating habits and lifestyles [16]. This creates a bigger financial burden for the health care system, estimated to be in the billions of dollars. Chronic diseases are not something that younger people would usually suspect they have, which means that constant medical check-ups and supervision could be beneficial. However, that is practically impossible, as it would require each patient to have one caretaker specifically assigned to him or her; even if that were possible, the costs alone would be far too high. Therefore, solutions that are inexpensive, or at least cost-effective, need to be found.

Wireless body sensor networks (WBSNs) are one solution that is gaining a lot of ground in the health care industry [27], [3]. They allow patients to continuously monitor themselves from the comfort of their homes, while remote caregivers can be at clinics, for example. Using sensors attached to the body, WBSNs let patients check vital signs such as heart rate, blood glucose levels and brain activity. Considering that WBSNs are cost-effective, efficient, and can be produced at large scale, the solution the health care industry has been looking for is now available.
The fact that such devices could save lives is reason enough for the industry to invest in and carry out a lot of research in this area, to further improve the services they can provide.

1.2 Conditions for Telemetry

As exciting as WBSNs sound, designing such devices is not as easy as it seems. There are conditions and restrictions on parameters that are essential in the design of WBSNs, which are discussed below.

In many applications, telemedicine uses data channels that have low bandwidth (BW). The BW of a channel dictates how many bits per second can be sent through it, and since low-BW channels have low bit rates, we are already faced with the first restriction: not being able to send large chunks of data all at once.

To directly monitor a person's vital signs, the results have to reach the remote center in real time or close to it. This means that the less data we send, the faster the signal can be reconstructed and viewed at the receiver end. To achieve this, one must make sure that the transmitted data consists only of useful, non-redundant information. This places another condition on the processing stage of the WBSN encoders.

Another restriction, basically the most costly one, is the processing power at the WBSN nodes. WBSNs usually run on batteries that do not last long. Think of a cell phone battery, for example: if you open applications that require too much processing power, the battery drains quickly. Hence, for WBSNs to be convenient and practical to use, the algorithms they execute should be designed to run extremely fast, so that the battery is not drained and the patient does not have to change batteries often (since WBSNs are battery operated).

This restriction leads to another costly one: the energy needed to transmit the data.
As mentioned earlier, smaller amounts of transmitted data allow the signal to be viewed at the receiver end closer to real time. That is not the only benefit of decreasing the amount of data to be sent: saving energy is just as important a benefit, if not more so, because the less data sent, the less energy needed, and hence the longer the battery's lifespan is preserved [27].

1.3 Electroencephalogram Signals

Even though the framework in this thesis can be applied to many signals, the focus is on electroencephalogram (EEG) signals. These signals convey the electrical brain activity and are acquired by non-invasive sensors placed on the patient's head; they are then transmitted wirelessly.

Figure 1.1: Sample of an EEG output

The information contained in the EEG signals obtained from multiple electrode sensors can help diagnose epilepsy, sleep disorders, strokes, Alzheimer's disease, tumors and comas, among other illnesses. EEG monitoring needs on average 30 minutes of recorded signals for specialists to be able to detect a disorder; for some disorders, the monitoring must be done over a longer period of time [13], [14]. This makes the use of WBSNs at home much more favorable than spending time at the hospital. Figure 1.1 shows a sample of a recorded EEG output.

Such WBSNs, specifically for EEG signals, have a number of electrodes placed on the patient's head as specified in the international 10-20 system. The number of electrodes used can vary depending on the purpose. Each electrode produces an output signal, and that sensor is known as an EEG channel. EEG electrodes provide high temporal resolution. However, this is not the case for spatial resolution (i.e. the resolution of the EEG signal over the scalp), which can be a limitation.
To increase the spatial resolution, the number of sensors used in the EEG measurements has to be increased.

Before transmission, the EEG data has to be processed as shown in Fig. 1.2. All the channels are collected, and the resulting output forms one stream of data. This data is then processed in two parts: the first removes any redundancy that may be present, so that only the relevant data is transmitted, and the second adds protection bits to guard the data against the channel noise that can greatly distort it. Finally, the data is transmitted using Bluetooth.

Figure 1.2: Block diagram for the processing of EEGs using WBSN

Since this processing is done within the WBSN [8], its complexity should be minimal so as not to drain the system battery. That is not the case at the receiver end, where the computations needed to retrieve and reconstruct the original data can be more complex, since there are no restrictions on energy consumption there [7].

1.4 Aim and Motivation of the Thesis

The aim of this thesis is to minimize the energy spent at the sensor node. This energy is spent on acquiring the data, processing it for transmission, and then transmitting it wirelessly. To minimize the transmission energy, as few bits as possible should be transmitted. The acquired data should therefore be compressed before transmission, so that it is represented by as few bits as possible while remaining reconstructible from those bits. The compression should also require as little energy as possible, for use in applications such as WBSNs that transmit EEG data [1]. Compressed sampling is ideal for this purpose. With present-day technology, however, it is not yet possible to sample the analog signal directly using CS; therefore we re-sample the acquired signal using CS.
We then quantize the resulting CS data using the minimum number of levels that still allows good-quality reconstruction. To transmit the signals, we aim to come up with a joint source and channel coding combination that can be used for EEG WBSN purposes. As already discussed, the power consumed by the algorithms that process the data at the encoder side (the WBSN) should be decreased as much as possible. We also need the signals to be reconstructed and displayed in real time, over a channel that has low bandwidth and is noisy. The goal is to find a framework that addresses all these demands.

Processed signals need to be quantized, and this process already introduces an error, known as the quantization error. Depending on the kind of signals being sent, the number of bits required for quantization can be lowered. In other words, some signals require even the smallest variations in amplitude to be reconstructed clearly, because such variations are pivotal to the diagnosis of a patient. In such cases, a higher number of bits can be used for quantization, which gives more quantization levels and allows such variations to be captured. Other signals can tolerate a larger quantization error, and hence the number of quantizing bits can be smaller.

When applying source coding, a lossless compression algorithm is desirable for the simple reason that it allows the transmitted data to be reconstructed perfectly. Lossless algorithms, however, suffer from the high number of computations required to compress the information. The Huffman code is usually the algorithm of choice, as it is found in numerous applications covering several fields. In comparison, lossy algorithms are simpler in terms of their computational requirements.
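To illustrate the idea behind Huffman coding (chapter 2 treats it formally), here is a toy, textbook construction in Python. It is an illustration only, not the implementation used in the thesis:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix code from symbol frequencies (textbook Huffman construction)."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, unique tie-breaker, {symbol: partial codeword}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}   # prepend a bit on merge
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

code = huffman_code("abracadabra")
assert len(code["a"]) < len(code["d"])   # frequent symbols get shorter codewords
```

Symbols that occur more often receive shorter codewords, which is what yields the compression, and because the code is prefix-free the bitstream can be decoded unambiguously.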
For channel coding purposes, there are two main families of algorithms: block codes and convolutional codes. Both have very powerful existing algorithms. However, in real-time applications block codes are more prevalent because they can be computed faster.

The work in this thesis is motivated by the use of compressed sensing (CS), a recently developed technique that samples and compresses at the same time. The sampling is done at a rate far below the Nyquist rate, something that went unexplored for many decades. Because CS samples the data by taking linear projections of the raw data [7], it requires little energy at the transmitter side. The computationally complex part of CS is at the receiver end; for our application, however, this does not constitute a problem, as there is no limitation on the power at the receiver end. These two properties make CS an ideal algorithm to apply in WBSN applications.

In summary, this thesis aims to find a good combination of compressing the data, quantizing it and adding protection to it, while taking into consideration the computational effort and energy spent at the sensor node, the noise in the channel, and the reconstruction quality. The thesis studies the results of each scheme applied and proposes improvements for future work.

1.5 Thesis Structure

The rest of the thesis is divided into four chapters. Chapter 2 gives a theoretical review of the concepts used in this work. Section 2.1 introduces compressed sensing and presents the theory behind the encoding and decoding stages of this algorithm. Section 2.2 introduces source coding and information theory, going into the details of one of the most famous source coding algorithms, the Huffman code. Section 2.3 deals with the transmission channel, its properties, and how to model the Bluetooth channel used for sending the data. Section 2.4 explains the need for channel coding in any transmission scheme. It compares two families of codes, block and convolutional codes.
It also discusses interleaving and its benefits. Chapter 3 then presents the different schemes used for the transmission of the data, and chapter 4 analyzes the results obtained from the experiments. Finally, the results and observations are summarized in chapter 5.

Throughout the thesis, bold lowercase letters denote vectors, bold uppercase letters denote matrices, and scalar quantities are written in normal letters.

Chapter 2
Literature Review and Background Theory

2.1 Compressed Sensing

Signals in life are usually of an analog, continuous nature. To discretize them, we apply a technique called sampling which, as the name implies, takes samples from the signal; these samples can then be analyzed or manipulated for whatever purpose is intended. Once the samples have been processed, they are transformed back into an analog continuous signal by interpolation.

The samples, though, cannot be taken arbitrarily. There is a whole area of study behind this, called the Nyquist-Shannon sampling theory, or in short the sampling theory [26]. It states that for a signal x(t) we must take at least two samples per cycle of the highest frequency component of that signal [21]. In mathematical terms, if the highest frequency of the signal x(t) is fM, then

2 fM ≤ fS    (2.1)

where fS is the sampling frequency. If this condition is not satisfied, aliasing occurs and the signal cannot be properly reconstructed from the samples taken. In cases where the bandwidth is very large, the constraints imposed on the sampling architecture become hard to cope with, and the traditional sampling theorem proves too demanding. In other cases, where the signals are sparse in nature, applying Shannon's sampling theorem produces a large amount of redundancy in the samples, which is costly to transmit wirelessly. This is the case with telemonitoring, where the bandwidth is small and one needs to make the most of it; using that theorem therefore limits the sensor node's lifetime.

This theorem has lately been challenged by the advent of a new technique called compressed sensing, which is discussed in this chapter.

For the WBSNs discussed in chapter 1, increasing the battery's lifetime and achieving high compression ratios (CRs) is important, as is the cost of the device. The cheaper the device, the more willing patients will be to purchase it. This also means that the hardware cost should be low, which in turn means that the encoding algorithm should have low complexity, leaving the complexity to the decoder at the remote center or laptop [27].

As far as the choice of transforms for signal reconstruction goes, the wavelet transform (WT) was usually the choice for excellent signal reconstruction at the receiver. However, WT compression fails to satisfy the above constraints, which has led to the introduction of CS, a compression technique that depends on the sparsity of the signals at hand. In comparison to the WT, CS reduces energy consumption while maintaining a competitive data compression ratio and largely reducing the device's cost, as was shown in [16].

Basically, CS allows us to take far fewer samples from a signal than conventional techniques actually require, and still manage to properly reconstruct that signal. To do so, two properties must be satisfied: sparsity and incoherence [5].

Sparsity refers to the idea that, for a signal of finite length, the information rate contained in that signal is much less than its length suggests, making it compressible. This is what allows CS to act not only as a sampling technique but also as a compression algorithm, since it drastically decreases the amount of data taken from the signal.

Incoherence, on the other hand, refers to the duality between the time and frequency domain representations of the signal: in one domain the representation is spread out, and in the other it is sparse [5].

Taking advantage of these two properties allows for a sampling and compression algorithm that is practical and energy efficient for applications where the encoding energy is quite limited. Both sampling and compression are replaced by a randomized sub-sampling technique that linearly encodes the samples, without the need for high resolution for the reconstruction of the signal [15]. Two matrices are essential for CS: the linear measurement (sensing) matrix and the dictionary matrix.

2.1.1 Encoder

CS starts by collecting M samples of a signal x of dimension N×1, where M ≪ N, using simple analog measurement waveforms, thus sampling and compressing at the same time, which helps remove a large part of the digital architecture. The M samples are collected using the simple measurement vectors φi, where 1 ≤ i ≤ M and each φi is of length N. Each of these vectors has k non-zero elements. The transposes of these vectors form the rows of the sensing matrix Φ, which compresses the signal x as follows:

y = Φx.    (2.2)

The output signal y has dimension M×1, where M ≪ N [16].

2.1.2 Decoder

The original signal x is recovered from the compressed data y using the sensing matrix Φ and assuming that x is actually a sparse signal.
If x is not sparse (as is the situation with EEG signals) but is sparse in another domain, that is, there exists a matrix D such that x = Dz, where z is sparse and D is a dictionary (transform) matrix of dimension N×N, then equation 2.2 can be rewritten as

y = ΦDz. (2.3)

Thus, in this case, the CS algorithms first have to recover z using y and ΦD, and then recover the original signal x from the relation x = Dz.

When CS is used in applications such as telemonitoring, the signal x is first compressed by the sensors according to equation 2.2. The compressed data y is then transmitted to a server that may reside in a clinic or another facility. The original data x is recovered from y according to equation 2.3, where the matrix Φ is known and D is determined from the nature of x.

To see the advantages of this method, one only has to go deeper into the mathematics involved in CS. The simple linear sampling strategy applied by CS yields results that are only marginally off the optimal adaptive strategy (which is too complex) [23].

To guarantee the robust and efficient recovery of any S-sparse vector x, the sensing matrix Φ must obey the key restricted isometry property (RIP)

(1 − δS)‖x‖₂ ≤ ‖ΦDz‖₂ ≤ (1 + δS)‖x‖₂, (2.4)

where δS is the isometry constant of the matrix Φ, which must not be too close to one (δS < 1). The RIP ensures that the energy of the signal is bounded from below and above. The smaller δS is, the more energy is captured and the more stable the inversion of ΦD is [5].

The key factor that bounds the restricted isometry constant δS is the mutual coherence amongst the columns of ΦD, as follows [11]:

δS ≤ (k − 1)µ, (2.5)

where µ = max_{1≤i≠j≤N} ⟨φi, dj⟩ and k is the number of non-zero entries in x.

A good choice for the measurement matrix Φ is a random matrix with independent and identically distributed (iid) entries [9].

The signal x is recovered at the decoder side via a convex optimization problem.
One major advantage of decoding by optimization is that CS decoders are more robust to noise and quantization errors, which helps further enhance compression and reduces the demands on the digital back-end and the on-board memory.

If the RIP in equation 2.4 holds, then a reconstruction that is faithful to the original signal can be accomplished by solving the convex optimization problem

z̄ = min_z ‖y − Φz‖₂² + λg(z), (2.6)

where λ is a regularization term and g(z) is the penalty term, which is a function of z [15]. The regularization and penalty terms are needed to account for the noise factor introduced by the l2 norm. After solving for z̄, we obtain the reconstructed signal as x̄ = Dz̄.

2.1.3 Block-Sparse Bayesian Learning

The problem with EEG signals is that they are not sparse by nature, neither in the time domain nor in a transform domain. Since CS theory is developed for signals that are sparse or have sparse representation coefficients in some transform domain, the existing CS algorithms cannot achieve good signal recovery for EEG signals. Therefore, the aim in [27] was to find an algorithm capable of recovering the EEG signal from its compressed sensing data using equation 2.3.

The block-sparse Bayesian learning (BSBL) algorithm was developed to deal with this. BSBL uses a general dictionary to reconstruct a non-sparse signal instead of finding an optimal dictionary for a specific type of data.

Figure 2.1: Block diagram of the adjusted algorithm

The BSBL algorithm was proven in [27] to be effective for the compressed sensing and reconstruction of non-sparse signals that also have no distinct block structure. It uses the discrete cosine transform (DCT) and discrete wavelet transform (DWT) as dictionaries.

BSBL addresses the single-channel signal case, also called the single measurement vector (SMV) case. The algorithm exploits the intra-correlations within the structure of an SMV EEG channel. Let us denote the number of EEG channels as P.
For the case when P = 1 (the SMV case), the data is encoded using the algorithm explained in section 2.1.1, with a few processing steps before compressing the data. For the multi-channel case, when P > 1, BSBL reconstructs each column of the data one by one.

It is in the way in which the original data is reconstructed that BSBL differs from traditional CS algorithms. CS exploits the sparsity of the signals when reconstructing them; BSBL exploits either the temporal correlation in the signal or its block-sparsity [27].

Assume a block-sparse signal of the form x = [x_1, ..., x_{h_1}, ..., x_{h_{g−1}+1}, ..., x_{h_g}]^T, where [x_1, ..., x_{h_1}] = x_1^T up to [x_{h_{g−1}+1}, ..., x_{h_g}] = x_g^T, and the ith block x_i has dimension d_i × 1, i = 1, ..., g.

BSBL models each block x_i ∈ R^{d_i×1} as a parametrized multivariate Gaussian distribution of the form

p(x_i; γ_i, B_i) ∼ N(0, γ_i B_i), (2.7)

where i = 1, ..., g. Here γ_i is a non-negative parameter that controls the block sparsity of x; if γ_i = 0, then the ith block of x is all zero. B_i is a positive definite matrix of dimension d_i × d_i that captures the correlation structure of the ith block.

Assuming that the blocks are mutually uncorrelated, the prior of x according to 2.7 is p(x) ∼ N(0, Σ0), where Σ0 is a block diagonal matrix whose ith principal block is γ_i B_i. The noise vector is assumed to satisfy a multivariate Gaussian distribution p(v) ∼ N(0, λI), where λ is a positive scalar and I is the identity matrix.

Now the estimate of x can be obtained by maximum a posteriori (MAP) estimation, assuming that the parameter λ and all the pairs (γ_i, B_i), i = 1, ..., g, have been estimated using Type-II maximum likelihood estimation [28].

The reconstruction of the data is done using a bound-optimization BSBL algorithm. This algorithm has the ability to exploit the intra-block correlation through the estimation of the matrices B_i.
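For reference, in this Gaussian setting the posterior of x given y = Φx + v is itself Gaussian, so the MAP estimate coincides with the posterior mean. As a sketch of the generic closed form (this is the standard Gaussian-prior result, not the specific iterative update rules of [28]):

```latex
\hat{\mathbf{x}} = \Sigma_0 \Phi^{T} \left( \lambda \mathbf{I} + \Phi \Sigma_0 \Phi^{T} \right)^{-1} \mathbf{y},
\qquad
\Sigma_0 = \operatorname{diag}\{\gamma_1 \mathbf{B}_1, \ldots, \gamma_g \mathbf{B}_g\}.
```

The learned hyperparameters enter only through Σ0, which is why estimating γ_i and B_i well is what makes the reconstruction accurate.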
Even though the user needs to determine the block partition, the algorithm still works well for arbitrary block partitions.

The work done in [15] improves the BSBL algorithm presented in [29] by investigating both the intra-correlation and inter-correlation of multivariate EEG channels and processing all channels simultaneously. This approach processes the data epoch by epoch and then takes the output of every epoch to form a stream of data, i.e. the data is divided into equal blocks of size N, where N is chosen to be 256 samples per epoch. Assuming there are P channels, the stream is of size P × N. This step is known as epoching, and it is the first block in figure 2.1.

The second block subtracts the channel mean from every channel, hence normalizing each channel. This step leads to better results from the algorithm because it removes the bias introduced into the data by the acquisition amplifiers. The means are added back after the reconstruction of the data.

The third block in figure 2.1 exploits the intra-correlation and inter-correlation of the P channels. A random channel is first selected, and its cross-correlation with the other channels is computed. The three channels with the highest correlation are placed next to it along the rows of the matrix X. Next, the sorted channels are excluded and the previous steps of this block are repeated until all channels are sorted. Once the blocks are reordered into P rows, they are vectorized to form one output, x ∈ R^{PN×1}, using the equation

VEC(X) = [a_{1,1}, a_{2,1}, ..., a_{L,1}, a_{1,2}, a_{2,2}, ..., a_{L,2}, ..., a_{1,P}, a_{2,P}, ..., a_{L,P}]^T, (2.8)

where a is the cell matrix of the data matrix X of dimension P × N.

The last block deals with the compression of the data; it compresses all the channel outputs at once instead of one at a time.
It was shown in [11] that compression that can lead to perfect reconstruction depends on the degree of incoherence between the matrices Φ and D: the higher that incoherence, the larger the achievable compression ratio. The main condition for this to happen is that the entries of Φ must be iid.

2.2 Source Coding

When one wishes to transmit data from one end to another, it is very important to ensure that the transmitted signal carries only the necessary information and nothing more, especially when there are constraints on the data rate at which we transmit. Therefore, compressing and deleting unwanted data is an essential step before sending the signal. That is what source coding does. To define source coding in one sentence: it is an algorithm used to get rid of all redundant and irrelevant bits in the data. Before going into a deeper and more mathematical explanation of what that means, it is necessary to define what is meant by redundant and irrelevant bits.

Redundancy means something that is repetitive, which is why source coding helps us get rid of it. This means that when the representation of a symbol is not efficient enough, one can decrease the number of bits used to represent that symbol. This, of course, also depends on the probability of occurrence of the symbol, as will be demonstrated later. The second term, irrelevancy, can be directly interpreted as something of no importance to the correct reception and understanding of the data. This irrelevant part can therefore simply be discarded (by the use of filters, for example). However, of these two terms, the one most important to us is redundancy, and most source coding algorithms work by taking advantage of it [10].

2.2.1 Overview of Information Theory

In this section we consider a more in-depth explanation of the importance of source coding. It is a means of achieving data compression, which reduces the number of bits that need to be transmitted.
As mentioned in the introduction of this chapter, the probability of occurrence of a symbol is of great importance, because it is what can be targeted in order to reduce the redundancy. Therefore, one must know the statistics of the symbols being transmitted and the probability of occurrence of each symbol. This is the basic building block of all information theory.

Let xi and yj be the outcomes observed for the two random variables X and Y. From their probabilities, we can get the first important parameter in information theory, which is the information of an event. There are two types of information: the first is the mutual information between xi and yj, and the second is the self-information. Mathematically, these are expressed as

I(xi; yj) = log2 [P(xi|yj) / P(xi)] (2.9)

I(xi) = log2 [1 / P(xi)] = − log2 P(xi), (2.10)

where equations 2.9 and 2.10 are the mutual information and self-information respectively.

The self-information can be the same as the mutual information in the case where the occurrence of the event Y = yj uniquely determines that of the event X = xi. In this case, the conditional probability P(xi|yj) is unity, and hence we get the self-information. Another interesting scenario is when the two random variables X and Y are statistically independent. Here, the conditional probability is equal to P(xi), and hence the mutual information is 0. Also worth noting from equation 2.10 is that the higher the probability of an event, the less information is conveyed, and the lower the probability, the more information is conveyed.

The second important parameter we need to know is the entropy, or average self-information, which gives an indication of how well source coding performs. The entropy is defined as

H(X) = Σ_{i=1}^{n} P(xi) I(xi). (2.11)

If we substitute equation 2.10 into equation 2.11, we get the following:

H(X) = − Σ_{i=1}^{n} P(xi) log2 P(xi), (2.12)

where the unit is bits/symbol in both equations 2.11 and 2.12.

Figure 2.2 shows the entropy plotted against the probability of an event of a random variable. It can be seen that the entropy is maximum when the probability is equal to one half; in other words, the entropy is maximum when all the symbols are transmitted with equal probability.

Figure 2.2: Plot of the entropy w.r.t. the probability of a random variable

In order to see how much redundancy is present in a certain source coding scheme, one simply has to calculate the maximum entropy and subtract from it the entropy of the scheme used, as defined in equation 2.13. The smaller the difference, the less redundancy is available in the symbols being transmitted. In such cases, the complexity of the source coding algorithms comes into play: if the system has limitations on its power or computational ability, and the redundancy is minimal, the following question arises. Is it worth using such an algorithm for minimal improvements while consuming more resources? The answer to this question will be clearly stated in chapter 4.

Redundancy = Hmax − H(X) (2.13)

Another parameter of importance in information theory is the average code-word length L, defined as

L = Σ_{i=1}^{n} L(xi) P(xi). (2.14)

This parameter allows us to see the difference in code-word length when a source coding algorithm is used compared to when the data is sent as is. Usually, when source coding is used, L decreases, which is to be expected, since the task of the algorithm is to decrease the redundancy.

2.2.2 Huffman Code

Perhaps the best way to show the efficiency and advantages of source coding is simply to give an example of a source coding algorithm. The algorithm of choice here is the Huffman code.
The basic idea behind Huffman coding is to assign the symbol with the highest probability of occurrence the shortest code-word, and the least probable symbol the longest code-word, all the while making sure that the code-words are unique and cannot be confused with other symbols upon reception. This is important, or else the code would be invalid.

The algorithm will be explained step by step. It has 4 stages, presented below:

1- The symbols are sorted according to their probability in descending order.
2- Add the two lowest probabilities and assign a 1 to the lower of the two symbols and a 0 to the higher one.
3- Repeat steps 1 and 2 until only two values remain.
4- Then move from right to left, column-wise and symbol-wise, to get the representation of each symbol.

The following numerical example helps show the advantages of the Huffman code. Let us consider four symbols A, B, C and D with probabilities 0.6, 0.25, 0.1 and 0.05 respectively. Figure 2.3 shows the graphical implementation of the algorithm presented above.

Figure 2.3: Example of how a Huffman algorithm works

After applying the 4 steps of the algorithm, symbol A, which had the highest probability, ended up being assigned only one bit, whereas the symbols C and D, which had the lowest probabilities, were assigned three bits each. Had we chosen not to apply source coding, since we have 2² = 4 symbols, the logical thing to do would be to give each symbol a two-bit representation, say 00, 01, 10, 11 for A, B, C and D respectively.
Let us take any transmitted sequence abiding by the above probabilities; that is, we will transmit 40 symbols (24 A, 10 B, 4 C and 2 D):

AAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBCCCCDD.

Now let us replace each symbol by its bit representation, first in the case where no source coding is used, and then in the case where Huffman encoding is used:

Without Huffman encoding:
00000000000000000000000000000000000000000000000001010101010101010101101010101111

and after Huffman encoding:
00000000000000000000000010101010101010101010110110110110111111.

In the former scenario, for 40 transmitted symbols, we get a total of 40×2 = 80 bits, whereas in the latter scenario we get a total of 24×1 + 10×2 + 4×3 + 2×3 = 62 bits, hence a decrease of 22.5% in the number of bits transmitted.

From the values above, it is then easy to calculate the average code-word length L as follows:

L = 2×0.6 + 2×0.25 + 2×0.1 + 2×0.05 = 2 bits/symbol.
LHuffman = 1×0.6 + 2×0.25 + 3×0.1 + 3×0.05 = 1.55 bits/symbol.

The above result reflects the improvement that source coding can introduce, since the average length of one code-word decreased from 2 bits to 1.55 bits, a reduction of 22.5%.

The Huffman code is an algorithm applied for the compression of digital data. It is one of many schemes belonging to the lossless data compression algorithms, as opposed to the lossy data compression algorithms. The difference between the two types lies in both the amount of compression achieved and the quality of the reconstructed signal. For lossless algorithms, the compression ratio achieved is less than when a lossy algorithm is used. On the other hand, they are able to achieve perfect reconstruction at the receiver end, something that cannot be done by the lossy algorithms.
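The merge procedure of section 2.2.2 can be sketched in a few lines of Python (the function is our own minimal implementation; heap-based tie-breaking may differ from figure 2.3, so individual code-words can vary, but the code-word lengths, and hence the average length, match the example above):

```python
import heapq
import math

def huffman_code(probs):
    """Build a Huffman code from a {symbol: probability} map by
    repeatedly merging the two least probable entries."""
    # Each heap entry: (probability, tie-breaker, {symbol: partial code}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        p1, _, lo = heapq.heappop(heap)   # lowest probability  -> prefix 1
        p2, _, hi = heapq.heappop(heap)   # second lowest       -> prefix 0
        merged = {s: "1" + c for s, c in lo.items()}
        merged.update({s: "0" + c for s, c in hi.items()})
        heapq.heappush(heap, (p1 + p2, tie, merged))
        tie += 1
    return heap[0][2]

probs = {"A": 0.6, "B": 0.25, "C": 0.1, "D": 0.05}
code = huffman_code(probs)
avg = sum(len(code[s]) * p for s, p in probs.items())
entropy = -sum(p * math.log2(p) for p in probs.values())
print(code)     # code-word lengths 1, 2, 3, 3 for A, B, C, D
print(avg)      # about 1.55 bits/symbol
print(entropy)  # about 1.49 bits/symbol
```

The computed average length (about 1.55 bits/symbol) sits just above the source entropy of about 1.49 bits/symbol, the lower bound implied by equation 2.12, which is exactly the behavior expected of a good lossless code.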
Some signals cannot afford to lose any accuracy (remember that analog signals already lose some resolution due to the quantization error introduced by digitizing the signal), hence the reason lossless algorithms are preferred. However, these codes require more computational processing than their lossy counterparts.

The above algorithm is a digital source coding technique. There also exist several important analog source coding techniques, the most popular being pulse-code modulation (PCM).

The concept of PCM is very simple. It converts an analog signal into a digital one by first sampling the signal, then quantizing it, and finally encoding each quantized value with a certain number of binary bits.

Another important analog source coding technique is differential pulse-code modulation (DPCM). The concept of DPCM is based on the correlation between successive samples taken at the Nyquist rate or faster. In such cases, the correlation is high, that is, the average change in the amplitude of successive samples is relatively small, and DPCM makes use of this correlation [22]. Also worth mentioning is the delta modulation (DM) scheme, which is basically a simplified form of DPCM. However, we will not delve into those two schemes, as the intention of mentioning them is simply to bring them to the attention of the reader.

2.3 Transmission Channel

The transmission process consists of three main building blocks: the transmitter, the receiver and the channel. The transmitter side deals with the encoding of the data, whereas the receiver decodes it. This section explains the part in the middle: the channel and its effects on the transmission process. To help in understanding how a channel works, a few well-known channel models will be discussed.

2.3.1 Channel Models

A channel is basically a physical medium through which a signal is transmitted.
During transmission in such a medium, errors are added to the transmitted data, hence corrupting it. A channel can be any physical medium through which data travels on its way from the transmitter to the receiver. It can be the atmosphere in the case of wireless transmission, or wire lines and optical fiber cables in the case of telephone channels, among other possible mediums [22].

An ideal channel is one in which the transmitted signal arrives exactly as is at the receiver end, where it is reconstructed. However, in real life that is never the case. In the simplest of cases, the channel adds, in one way or another, noise that is usually considered white, and hence the first channel we will discuss is the additive white Gaussian noise (AWGN) channel.

This is a type of channel where the received signal is simply the sum of the original signal and added noise, as defined in equation 2.15:

y(t) = x(t) + n(t), (2.15)

where y(t) is the received signal, x(t) the sent signal and n(t) the added noise. This equation can easily be transformed into the model shown in figure 2.4.

Figure 2.4: AWGN channel model

The added white noise has a constant power spectral density, which means it has the same value over all frequencies.

This channel model does not take into consideration the phenomena of fading, interference, dispersion, nonlinearity or frequency selectivity. However, it gives a mathematical model that is useful in understanding the behavior of a system before such phenomena are taken into consideration [2].

All of the aforementioned phenomena occur frequently, depending on the channel and the environment, among other variables. The most common is fading. Fading normally occurs due to multi-path propagation, which is created by the presence of reflectors lying in the path between the transmitter and the receiver. This causes interference with the original signal and leads to attenuation, phase shift and delay in the original signal.
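Returning to equation 2.15, the AWGN channel is straightforward to simulate; a sketch (the function name and the SNR-in-dB parametrization of the noise power are our own choices):

```python
import numpy as np

def awgn(x, snr_db, rng):
    """Return x plus white Gaussian noise, scaled so that the
    signal-to-noise ratio equals snr_db (equation 2.15)."""
    signal_power = np.mean(x ** 2)
    noise_power = signal_power / 10 ** (snr_db / 10)
    return x + rng.normal(0.0, np.sqrt(noise_power), size=x.shape)

x = np.sin(2 * np.pi * np.arange(1000) / 100)   # a clean test tone
y = awgn(x, snr_db=10, rng=np.random.default_rng(0))  # received y = x + n
```

Because the noise is independent of the signal, the same function can be reused to test any of the coding schemes discussed later under controlled noise levels.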
Frequency selectivity is a kind of fading as well; it is caused when the signal partially cancels itself when multi-path propagation exists. Dispersion occurs when the signal hits an object, such as the branches of a tree, and gets dispersed or deflected in several directions, which can lead to more paths due to more reflections of the signal.

Some other common examples of channel models will now be given. We start with the simplest of all, the binary symmetric channel (BSC). This channel only transmits zeros and ones, that is, binary symbols, as the name indicates [22].

To model this channel, we assume discrete-time binary input and output sequences, characterized by the set X = {0, 1} for the input and Y = {0, 1} for the output. The errors are assumed to be statistically independent, and their average probability is p. The transition probabilities are given as

P(Y = 0|X = 1) = P(Y = 1|X = 0) = p, (2.16)

and

P(Y = 0|X = 0) = P(Y = 1|X = 1) = 1 − p. (2.17)

Therefore, the channel can be modeled as the discrete-time channel shown in figure 2.5.

Figure 2.5: Binary symmetric channel model

The BSC is actually a special case of a more general discrete-time channel called the discrete memoryless channel. In this case, we have q-ary input symbols going into the channel and Q-ary output symbols coming out of the detector, with Q ≥ 2q. If we consider that the channel and the modulation are memoryless, then the channel can be described by a set of conditional probabilities

P(Y = yi|X = xj) ≡ P(yi|xj), (2.18)

where i = 0, 1, ..., Q − 1 and j = 0, 1, ..., q − 1.

Another channel is the discrete-input, continuous-output channel, where the input alphabet takes one of q possible values, X = {x0, x1, ..., xq−1}, and the output of the detector is unquantized, which means that there are infinitely many possible values for the output Y.
An example of such a channel is the AWGN channel explained at the beginning of this section, which is modeled as

Y = X + N, (2.19)

where N is a zero-mean Gaussian random variable with variance σ² and X = xk, k = 0, 1, ..., q − 1.

A main constraint on any channel model is the capacity of the channel, which will now be briefly explained. We start by considering a discrete memoryless channel with an input alphabet X = {x0, x1, ..., xq−1}, an output alphabet Y = {y0, y1, ..., yQ−1} and the set of transition probabilities defined in equation 2.18. Assume that the symbol xj is transmitted and the symbol yi is received. The mutual information provided about the event X = xj by the occurrence of the event Y = yi is log[P(yi|xj)/P(yi)], where [22]

P(yi) = Σ_{k=0}^{q−1} P(xk) P(yi|xk). (2.20)

The average mutual information is

I(X; Y) = Σ_{j=0}^{q−1} Σ_{i=0}^{Q−1} P(xj) P(yi|xj) log [P(yi|xj) / P(yi)]. (2.21)

The channel capacity is then calculated by computing the maximum of I(X; Y) over the set of input symbol probabilities P(xj), which depends on the characteristics of the discrete memoryless channel through the conditional probabilities P(yi|xj). Therefore, the channel capacity C is

C = max_{P(xk)} I(X; Y) = max_{P(xk)} Σ_{j=0}^{q−1} Σ_{i=0}^{Q−1} P(xj) P(yi|xj) log [P(yi|xj) / P(yi)], k = 0, 1, ..., q − 1. (2.22)

The maximization is done under two constraints:
1- P(xj) ≥ 0.
2- Σ_{j=0}^{q−1} P(xj) = 1.

The channel capacity C is measured in bits per input symbol into the channel. Figure 2.6 shows the capacity of a binary symmetric channel plotted versus the probability of error p.

Figure 2.6: BSC capacity

Next, we consider the discrete-time AWGN memoryless channel. Again, the capacity is the maximum average mutual information between X and Y:

C = max_{P(xk)} Σ_{i=0}^{q−1} ∫_{−∞}^{∞} p(y|xi) P(xi) log [p(y|xi) / p(y)] dy, k = 0, 1, ..., q − 1, (2.23)

where

p(y) = Σ_{k=0}^{q−1} p(y|xk) P(xk).
(2.24)

2.3.2 Two-State Markov Model

A Markov chain is a stochastic process that describes the transitions between the different allowable states in a certain situation [6]. It is a memoryless random process because the next state in the chain depends only on the current state and not on any earlier state.

Three elements characterize a Markov chain:
1- The transition diagram (shown in figure 2.7 for a two-state chain).
2- The probability transition matrix P.
3- The steady-state vector π [6].

The two-state Markov model is shown in figure 2.7. The model has two states: state 1, the good state, which means that the packet has been received, and state 2, the bad state, which means that the packet has been lost. The variable PBB is the self-loop probability for state 2, PAA the self-loop probability for state 1, PAB the probability of transition from state 1 to state 2, and finally PBA the probability of transition from state 2 to state 1 [6].

Figure 2.7: 2-state Markov model

For the sake of simplicity, let us relabel the probabilities as follows: PBB is q, PBA is p, PAB is P and PAA is Q.

For chains with more than 2 states, the additional states are added with arrows going to and coming from every other state, including one self-loop arrow signifying that the next state remains the same as the current state.

The probability transition matrix has as its elements the transition probabilities between each pair of states. For an n×n matrix, which means there are n states in the chain, P is expressed as

P =
| P11 P12 ... P1n |
| P21 P22 ... P2n |
| ...             |
| Pn1 Pn2 ... Pnn |  (2.25)

where pij is the probability that the chain moves to state j given that its current state is i [6].

The elements of P must satisfy the two following conditions:
1- pij ≥ 0, i, j = 1, 2, ..., n.
2- Σ_j pij = 1, i = 1, 2, ..., n.

The first property simply means that each element must be greater than or equal to zero, which is quite logical since the elements are probabilities. The second property implies that the total sum of the elements of a row must be equal to one.

For the model used in this thesis, the two-state Markov model, the probability transition matrix is

P =
| PAA PAB |
| PBA PBB |  (2.26)

The steady-state vector π represents the long-run fraction of time spent in every state of the chain [6]. We can get this vector from the probability transition matrix P as shown in equation 2.27:

P^m = 1π, (2.27)

where 1 is a column vector of ones and m is a large power. The vector π must satisfy Σ_i πi = 1, where πi is the steady-state probability of the ith state; that is, the sum of the elements of π must equal 1.

There are two parameters of great importance for such a model: the average packet-loss burst length, β2state, and the probability of packet loss, α2state, defined as follows:

β2state = 1 / (1 − q) (2.28)

and

α2state = P / (2 − Q − q). (2.29)

From equations 2.28 and 2.29 it is easy to find the self-loop probabilities q and Q:

q = 1 − 1/β2state (2.30)

and

Q = 1 − α2state / ((1 − α2state) β2state). (2.31)

Markovian models are quite efficient at describing the characteristics of a channel mathematically, giving us a better approach to simulating a real channel [6].

2.3.3 Packet-Switched Networks

In this thesis, packet-switched networks are used, since we are transmitting packets of data.
However, it is important to explain what happens during transmission in both packet-switched and circuit-switched networks, and to compare the two schemes.

A circuit-switched network establishes a fixed-bandwidth channel between nodes before the users are able to communicate or connect. It is analogous to connecting the nodes physically with an electrical circuit [20]. Normally, circuit switching is used only for connecting voice circuits; however, it can be used for other forms of digital data, even if that rarely happens. In circuit-switched networks, the data is transferred non-stop and without any overhead bits. During a communication between two users, the circuit cannot be used by other users until it is dropped and a new connection is set up. Such channels are labeled busy. On the other hand, channels available for new calls to be set up are called idle [20].

Establishing a connection through the network and monitoring its progress and termination require the use of a separate control channel, similar to the links in telephone exchanges, where the CCS7 packet-switched signaling protocol is used to communicate and control the call setup and information, while time-division multiplexing (TDM) is used to transmit the actual circuit data. Examples of circuit-switched networks include high-speed circuit-switched data services in cellular systems, such as X.21, and the public switched telephone network.

Packet switching, on the other hand, is a communications method where packets are sent over different routes between nodes, on data links that are shared with other traffic. In each network node, packets are either queued or buffered, resulting in some variable delay called the queuing or buffering delay, respectively.
Once a packet is routed along a specific path, it is highly possible that the path will change for the next packet. This means that, in some cases, packets sent from the same source to the same destination end up being routed differently, hence the need to know the original order of the packets [20].

Packet switching is used for several advantageous reasons. It optimizes the use of the channel capacity available in digital communication networks such as computer networks. It also minimizes the time that circuit switching wastes establishing a connection for transmission, and it increases the robustness of communication. The Internet and local area networks (LANs) are the most well-known applications that make use of packet switching [20].

To conclude this section, a small comparison is made between circuit and packet switching. The main points for circuit switching are:
1- Long call setup times
2- The setup times can be negligible compared to the data length
3- Inefficient channel utilization for bursty traffic.

As for packet switching, its main points are:
1- Breaks the message into packets
2- Can interleave packets from other nodes, and therefore does not block the system
3- Error checking on each packet
4- Efficiently handles asymmetric traffic.

2.4 Channel Coding

This section covers the topic of channel coding. Channel coding is the most important building block of any transmission scheme. It is used to protect the transmitted data from errors or erasures introduced by the channel through which it is transmitted, so that the data can be correctly decoded at the receiver. Source coding decreases the number of bits being sent by removing redundant bits, whereas channel coding adds bits to the transmitted data to protect it.
There are two types of channel codes: block codes and convolutional codes.

2.4.1 Block vs Convolutional Codes

Let us start by defining the two types of codes, and then give an example of the most powerful decoding technique for convolutional codes, the Viterbi algorithm, as well as an example of a simple block code, the repetition code. Linear convolutional codes produce an output that depends on the previous input bits and the current input bit, by performing an operation such as a binary XOR on those bits. Block codes, on the other hand, simply take a block of k bits and convert it into a block of n bits. Both types of codes have a rate R = k/n, where k is the number of input bits and n is the number of output bits after encoding. Linear block codes have two advantages over linear convolutional codes [10]:
1- The processing delay caused by their implementation is smaller.
2- The computational and processing complexity of the algorithms is lower.

The Viterbi algorithm is based on the maximum likelihood decoding technique. Figure 2.8 shows an example of a convolutional encoder that takes k bits as input and outputs n bits, where n > k [10].

Figure 2.8: Convolutional encoder [10]

The encoder shown in figure 2.8 has parameters k = 1 and n = 2, and hence a rate of 1/2. The first output bit is computed by applying the XOR operation to x(n), x(n−1) and x(n−2), whereas the second output bit is computed by applying an XOR to x(n) and x(n−2). One important thing to note is that this convolutional encoder is not systematic, which means that the output bits do not contain any of the input bits; this can easily be noticed by observing the output. For systematic codes, the n output bits consist of the k input bits followed by the n − k bits added for protection, which is obviously not the case here.

To obtain the trellis diagram, which is needed for the decoding of the received sequence, one applies the Viterbi algorithm.
Here is a summarized step-by-step implementation of the algorithm [10]:

A- Initialization
Label the left-most state of the trellis (i.e., the all-zero state at level 0) as 0.

B- Computation step j + 1
Let j = 0, 1, 2, ... and suppose that at the previous step j we have done two things: all survivor paths have been identified, and the survivor path and its metric for each state of the trellis have been stored. Then at level j + 1, compute the metric for all the paths entering each state of the trellis by adding the metric of the incoming branches to the metric of the connecting survivor path from level j. For each state, identify the path with the lowest metric as the survivor of step j + 1, thereby updating the computation.

C- Final step
Continue the computation until the algorithm finishes its forward search through the trellis and reaches the termination node (i.e., the all-zero state), at which time it makes a decision on the maximum likelihood path. The sequence of symbols associated with that path is then released to the destination as the decoded version of the received sequence.

One important note is that if the received sequence is very long, the Viterbi algorithm will require a large amount of storage, and in such cases a compromise is normally made. That compromise is to truncate the trellis as follows: we choose a decoding window of length l. The algorithm operates on a corresponding frame of the received sequence and always stops after l steps. A decision is then made with regard to the "best" path, and the symbol associated with the first branch on that path is released to the user, whereas the one associated with the last branch on the path is dropped. The next step is to move the decoding window forward by one time interval and make a decision on the next code frame, and so on.
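The steps above can be sketched as a hard-decision Viterbi decoder for the rate-1/2 encoder of figure 2.8 (a simplified illustration, not the thesis's implementation: it keeps full survivor paths instead of a truncated decoding window, and it picks the lowest-metric end state rather than forcing termination in the all-zero state):

```python
def viterbi_decode(received, n_bits):
    """Hard-decision Viterbi decoder for the rate-1/2 encoder of figure 2.8.
    A trellis state is (x(n-1), x(n-2)) packed into a 2-bit integer."""
    def branch(state, b):
        # output pair and next state for input bit b taken from `state`
        x1, x2 = (state >> 1) & 1, state & 1
        return (b ^ x1 ^ x2, b ^ x2), (b << 1) | x1

    metric = {0: 0}          # initialization: start in the all-zero state
    path = {0: []}
    for i in range(n_bits):  # one trellis level per information bit
        r1, r2 = received[2 * i], received[2 * i + 1]
        new_metric, new_path = {}, {}
        for s, m in metric.items():
            for b in (0, 1):
                (o1, o2), ns = branch(s, b)
                d = m + (o1 != r1) + (o2 != r2)   # Hamming branch metric
                if ns not in new_metric or d < new_metric[ns]:
                    new_metric[ns], new_path[ns] = d, path[s] + [b]
        metric, path = new_metric, new_path       # keep only the survivors
    best = min(metric, key=metric.get)            # lowest-metric survivor
    return path[best]
```

With a clean received sequence the decoder simply inverts the encoder; with a few scattered bit errors it still recovers the transmitted bits, as in the two-error example discussed next.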
It is true that in such a case the decoding decisions made are not exactly maximum likelihood, but they can be made almost as good provided that the decoding window l is long enough [10]. It has been shown that satisfactory results can be obtained as long as the decoding window length l is about five times the constraint length K of the convolutional code, where K is equal to the number of shift elements in the encoder plus 1. In the case of figure 2.8, K = 3. After applying the algorithm, we get the trellis diagram shown in figure 2.9.

Figure 2.9: Viterbi decoder [10]

A quick example will illustrate how a trellis diagram is built. Assume that the encoder in figure 2.8 produces an all-zero output, and that after transmission the received sequence is 0100010000... . Therefore, two bits are received in error (the two bits that are 1). Figure 2.10 gives a step-by-step illustration of how the correct sequence is decoded.

Figure 2.10: Step by step Viterbi decoding [10]

Next, we discuss block codes, starting with the repetition code, which is one of the simplest and easiest block codes to implement. The concept is to take every codeword and repeat it based on the rate of the code. For example, a k = 2, n = 6 code has a rate of 1/3, which means that every original codeword is repeated three times. Let the original symbols be designated as ki, i = 0, 1, 2, 3 and the codewords as nj, j = 0, 1, 2, 3. An example of a rate-1/3 repetition code is

k0 = [0 0], n0 = [0 0 0 0 0 0],
k1 = [0 1], n1 = [0 1 0 1 0 1],
k2 = [1 1], n2 = [1 1 1 1 1 1],
k3 = [1 0], n3 = [1 0 1 0 1 0].

The decoding is done by comparing the received codeword to the dictionary containing all possible codewords. The comparison is done in terms of the number of differing bits, and the codeword with the smallest total difference is chosen as the corrected received codeword [10].

2.4.2 Reed-Solomon Code

Now we move on to a more complicated block code, the Reed-Solomon code.
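Before turning to the RS code, the repetition encoder and minimum-distance decoder described above can be sketched in a few lines (illustrative only; the function names are our own):

```python
from itertools import product

def rep_encode(bits, reps=3):
    """Repeat the whole codeword `reps` times (rate 1/reps),
    as in the k = 2, n = 6 example above."""
    return bits * reps

def rep_decode(received, k, reps=3):
    """Compare against every possible k-bit codeword and pick the one
    whose repetition differs from the received word in the fewest bits."""
    best, best_dist = None, None
    for cand in product([0, 1], repeat=k):
        word = list(cand) * reps
        dist = sum(a != b for a, b in zip(word, received))  # bit differences
        if best_dist is None or dist < best_dist:
            best, best_dist = list(cand), dist
    return best
```

A single corrupted bit in a rate-1/3 repetition codeword is still decoded correctly, since the nearest dictionary entry wins.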
The RS code is a linear systematic block code based on finite field theory. It is an (n, k) code, where k is the number of data symbols and n is the total number of transmitted symbols, containing both the data symbols and the forward error correcting (FEC) symbols, as shown in figure 2.11. The parameter n is defined as

n = 2^m − 1,  (2.32)

where 2^m is the size of the Galois field (GF) and m is the number of bits in every symbol. Basically, a GF is a finite field on which the operations of commutative addition, subtraction, multiplication and division (except by zero) are defined [19].

Figure 2.11: Reed-Solomon encoder [10]

The RS code has the ability to correct t errors, as long as the following inequality is respected:

t ≤ (n − k)/2.  (2.33)

Next, let us delve into the mathematics behind the encoding of the RS code [25], which is defined by its generator polynomial. For a code capable of correcting up to t errors, the generator polynomial is given as

g(X) = (X + a^m0)(X + a^(m0+1))(X + a^(m0+2))...(X + a^(m0+2t−1)) = g0 + g1 X + g2 X^2 + ... + g_(2t−1) X^(2t−1) + X^(2t),  (2.34)

where a, an m-bit binary symbol, is the primitive element of the finite field GF(2^m), and m0 is a pre-set number that is usually 0 or 1. To encode a message sequence, the message polynomial is first constructed as

u(X) = u0 + u1 X + u2 X^2 + ... + u_(k−1) X^(k−1).  (2.35)

The parity polynomial, which has the parity sequence as its coefficients, is calculated as the remainder of the division of X^(2t) u(X) by g(X). The code is constructed as the data sequence followed by the parity sequence. Hence, the final code polynomial is

t(X) = X^(2t) u(X) + v(X),  (2.36)

where

v(X) = v0 + v1 X + v2 X^2 + ... + v_(2t−1) X^(2t−1).  (2.37)

The decoding of the data takes place as follows. We assume that the transmitted code vector is

t(X) = t0 + t1 X + t2 X^2 + ... + t_(n−1) X^(n−1)  (2.38)

and that the received vector is

r(X) = r0 + r1 X + r2 X^2 + ... + r_(n−1) X^(n−1).
(2.39)

The first step is to calculate the 2t syndrome components as follows:

S0 = r(a^0) = r0 + r1 + r2 + ... + r_(n−1)
S1 = r(a) = r0 + r1 a + r2 a^2 + ... + r_(n−1) a^(n−1)
S2 = r(a^2) = r0 + r1 a^2 + r2 (a^2)^2 + ... + r_(n−1) (a^2)^(n−1)

up to

S_(2t−1) = r(a^(2t−1)) = r0 + r1 a^(2t−1) + r2 (a^(2t−1))^2 + ... + r_(n−1) (a^(2t−1))^(n−1).

The syndrome polynomial is then calculated as

S(X) = S0 + S1 X + S2 X^2 + ... + S_(2t−1) X^(2t−1).  (2.40)

The second step in the decoding process of an RS code is to find the error location polynomial L(X) and the error evaluation polynomial W(X). These are defined as

L(X) = 1 + L1 X + L2 X^2 + ... + Le X^e  (2.41)

and

W(X) = W0 + W1 X + W2 X^2 + ... + W_(e−1) X^(e−1)  (2.42)

respectively, where e is the number of errors. These two polynomials are related to the syndrome polynomial through the fundamental equation

L(X) S(X) = W(X) mod X^(2t).  (2.43)

Both L(X) and W(X) are solved for using the iterative Berlekamp-Massey algorithm.

The last step in decoding an RS code is to find the error locations and the error values. One can find the error locations by using Chien's search algorithm: X is substituted with a^n in L(X) for all possible n in the code to find the roots of L(X). The inverse of a root of the error location polynomial gives the corresponding error position. Once the error location is known, the error value is calculated with the help of Forney's error evaluation algorithm. Once the error value is found, it is added to the corrupted symbol to correct the error [24].

An interesting property of the RS code is its ability to correct not only errors but also erasures, as long as the following inequality is respected:

2t + S ≤ n − k,  (2.44)

where S is the number of erasures that can be corrected. In such a case, the error locator polynomial is modified so that it also includes an erasure locator polynomial.

There are many applications of the RS code, most notably in data storage applications such as the CD and DVD, and in data transmission.
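The error-correction capability of equation 2.33 is easy to evaluate for the (n, k) pairs used later in this thesis (a trivial sketch; `rs_correctable` is our own helper, not a real RS implementation):

```python
def rs_correctable(n, k):
    """Maximum number of correctable symbol errors t, from 2t <= n - k
    (equation 2.33)."""
    return (n - k) // 2

# The three RS codes considered later in this work
for n, k in [(255, 223), (255, 193), (255, 153)]:
    print(f"RS({n},{k}) corrects up to {rs_correctable(n, k)} symbol errors")
```

The three values (16, 31 and 51 symbol errors) are exactly the correction capabilities quoted for the RS(255,223), RS(255,193) and RS(255,153) codes in chapter 4.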
It is also used in mail encoding and, most notably, in satellite communications. The first major use of RS codes for satellite transmission was when Voyager had to transmit digital pictures back to Earth; it used the RS code in conjunction with the Viterbi algorithm, and since then this practice has become a fixture in deep-space and satellite transmission [24].

2.4.3 Interleaving

An interleaver is a process that rearranges the order of transmitted symbols before sending them; a deinterleaver returns them to their original positions at the receiver's end. The main reason for using an interleaver (and at the receiver a deinterleaver) is to be able to deal with channels that cause errors or erasures in bursts. The concept is very simple: the sequence to be transmitted is written into a matrix horizontally and then read out vertically, as shown in figures 2.12 and 2.13 respectively. This spreads out the errors caused by bursts, which in most scenarios improves the decoding process.

Figure 2.12: Interleaver [18]

For transmission schemes that use block-coded channel encoders, the error detection and correction capability is limited, as seen in equation 2.33. In the case of a very poor channel, the block decoders would fail to correct all the errors introduced by the channel, but by spreading the errors over a larger set of symbols, the interleaving/deinterleaving process gives the decoders a better chance of properly decoding the received data.

Figure 2.13: Deinterleaved output [18]

The interleaver in figure 2.12 is a square interleaver having a depth of four and is called a 4-by-4 interleaver. The depth of the interleaver is the number of symbols in each block of data.
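The write-rows/read-columns operation of figures 2.12 and 2.13 can be sketched as follows (illustrative helper functions of our own, assuming the symbol count is a multiple of the depth):

```python
def interleave(symbols, depth):
    """Write `symbols` row by row into a matrix with `depth` columns,
    then read it out column by column."""
    rows = len(symbols) // depth
    return [symbols[r * depth + c] for c in range(depth) for r in range(rows)]

def deinterleave(symbols, depth):
    """Inverse operation: read the column-major stream back row by row."""
    rows = len(symbols) // depth
    return [symbols[c * rows + r] for r in range(rows) for c in range(depth)]
```

A burst of consecutive channel errors in the interleaved stream lands, after deinterleaving, on symbols that are `depth` positions apart, which is exactly what gives the block decoder a better chance.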
Of course, the interleaver can have any dimensions the designer wishes, but it is always advisable to use one where the distance between two consecutive output symbols is large enough, so that when an error burst occurs, the errors are scattered as far away from each other as possible.

Chapter 3

Frameworks

This chapter discusses the different schemes applied and the parameters used throughout the work.

3.1 General Framework of the CS-EEG Encoding

Figure 3.1: Block diagram of the encoder

After collecting the data and applying compressed sensing (CS) to it using the improved block-sparse Bayesian learning (BSBL) algorithm, the encoder stage was designed as shown in figure 3.1.

The first step in designing our transmission system was to choose the number of quantization bits, from which we know how many quantization levels will be used. The equation relating the two quantities is

Qlvl = 2^qbits,  (3.1)

where Qlvl is the number of available quantization levels and qbits is the number of bits required to represent the quantized values. To decide on the number of quantization bits, we need to make sure that no valuable information is lost during quantization, because quantization introduces an error, called the quantization error, which should be kept to a minimum. Keeping that in mind, we quantized the signals using different numbers of quantization bits and sent the data through an ideal channel. We did not apply any source or channel coding to this data, in order to isolate the effect of this parameter on the reconstructed signals. We then reconstructed the quantized data and compared it to the original signals.

There are two types of quantization:
1- Uniform quantization: the quantization levels have the same interval distance between each other. Each interval is also represented by the same number of quantization bits.
This type of quantization is used when the signal values are uniformly distributed over the whole range.
2- Non-uniform quantization: used when the signal information is distributed heavily around a certain range. More levels are assigned in those areas, whereas the other areas are assigned fewer levels.

Since the compressive-sensed (CS) EEG signals contain equally important information at all levels, we chose uniform quantization for our work.

In the first two sections, 3.2 and 3.3, we used the lossless Huffman code for source coding. The reasoning behind our choice is that Huffman coding is capable of perfectly reconstructing a signal under ideal channel conditions. Since the EEG data has values that occur more often than others, applying the Huffman code to compress it is a reasonable choice, because Huffman coding relies on the probability of occurrence of symbols to compress the data. Huffman coding is used in many applications, such as the Joint Photographic Experts Group (JPEG) and Portable Network Graphics (PNG) image formats. It is also used in the PKZIP archive file format (which includes the ZIP archiving format).

The Reed-Solomon (RS) encoder was used in all schemes. The RS code is a block code, and block codes are known for their fast implementation compared to convolutional codes, which is highly desirable for wireless body sensor networks (WBSNs). The RS code is also capable of detecting and correcting errors, and that capability can be increased by decreasing the data length of each codeword (recall that for block codes such as the RS code, k data symbols are encoded into n symbols, where n > k).

In figure 3.1, the data was sent as packets. Each packet is 12 symbols long, where each symbol is a double variable in Matlab.

The channel was simulated using a two-state Markov model [12].
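As an illustration, a two-state Markov packet-error model of this kind can be simulated as follows. This is a sketch, not the thesis's Matlab code: the function name is our own, the output uses 1 for a correctly received packet and 2 for a corrupted one, and the matrix shown is the poor-channel transition matrix reported later in chapter 4.

```python
import random

def markov_error_sequence(P, length, seed=0):
    """Simulate packet outcomes from a 2-state Markov (Gilbert) model.
    State 0 = good, state 1 = bad; P[i][j] is the probability of moving
    from state i to state j. Emits 1 (packet received) or 2 (packet lost)."""
    rng = random.Random(seed)
    state, seq = 0, []
    for _ in range(length):
        seq.append(1 if state == 0 else 2)
        state = 0 if rng.random() < P[state][0] else 1
    return seq

# Poor-channel transition matrix (values from chapter 4)
P_poor = [[0.9012, 0.0988],
          [0.6998, 0.3002]]
```

Because the bad state tends to persist for a few steps, the losses come in short bursts rather than independently, which is exactly the behavior the interleaver is meant to counteract.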
The data was transmitted through the Markov model under three noisy channel conditions:
1- Good channel conditions, meaning that the number of errors introduced by the channel is on average quite low.
2- Average channel conditions, meaning that the number of errors increases; this is where the error correcting capabilities of the channel encoder and decoder start to be tested.
3- Poor channel conditions, meaning that the number of errors introduced by the channel is large.

The channel error sequences were generated in two different ways using the Markov model matrices for each channel condition:
1) Random pattern: in this approach, we generate the error sequence at the same time as the data is being transmitted. This means that the data will not always be sent under the same error sequence; every transmission is done under a different one. For fair results, the same data is transmitted 10 times over 10 different error sequences, and the results obtained for the 10 cases are then averaged. This way of sending the data helps determine whether an algorithm that outperforms all others once will in general outperform them on a regular basis.
2) Fixed pattern: this approach generates one fixed error sequence for each of the three channel conditions (good, average, poor). These 3 sequences are used for all the data, to see how the algorithms compare under the exact same channel conditions.

We expect that the algorithm that outperforms the other algorithms using the first method will also outperform them using the second method. The error sequence consists of two values: 1 and 2. The value 1 implies that the packet has been correctly received; the value 2 implies that the packet has been incorrectly received. In the latter case, the whole packet is in error, meaning that all the symbols in the packet have had their values changed.

An interleaver and deinterleaver were used after the RS encoder and before the RS decoder.
The reasoning behind the use of the interleaver is simple. The errors can occur in large bursts, especially in poor channel conditions. In such a case, the RS decoder might not be able to correct all the errors introduced by the channel, which would degrade the performance. The interleaver spreads out the errors, allowing the RS decoder to correct the erroneous data.

The following sections give the block diagrams of the schemes applied to study the individual and combined effects of source coding (compression), forward error correction and interleaving on the performance of the encoder side, in terms of processing times and total number of symbols sent. They also evaluate the performance at the decoder side in terms of data reconstruction.

3.2 Huffman Code Followed by RS Code

Figure 3.2: Block diagram of the first approach

The first scheme that we considered was the Huffman code followed by a Reed-Solomon code. At the receiver side, the decoder of each algorithm is applied in the reverse order from the encoder side, i.e. the RS decoder first and then the Huffman decoder. Finally, the signal is reconstructed. The block diagram for this scheme is shown in figure 3.2.

3.3 Huffman Code Followed by RS Code and Interleaving

Figure 3.3: Block diagram of the first approach with interleaving

Figure 3.3 shows the block diagram of the approach used in this section. Basically, it is the approach used in section 3.2 with interleaving and deinterleaving added before and after the channel respectively. We expect the results to improve with the addition of interleaving.

3.4 No Source Coding Followed by RS Code

The CS algorithm used achieved a compression ratio of 90%. Since the addition of Huffman coding proved to under-perform in poor channel conditions, as will be shown in the next chapter, in this framework we use no further compression than that provided by CS. Therefore, we remove the Huffman code.
Instead, the quantized data is sent directly into the RS encoder, as shown in figure 3.4.

Figure 3.4: Block diagram of the second approach

3.5 No Source Coding Followed by RS Code and Interleaving

Figure 3.5: Block diagram of the second approach with interleaving

This last framework is a slight modification of the previous one, in that interleaving and deinterleaving are added after and before the RS encoder and decoder respectively, as shown in figure 3.5. Again, we expect an improvement over the previous scheme due to the presence of interleaving.

3.6 Performance Measures

The measures used to test the performance of the work are:
1- Normalized mean square error (NMSE): this parameter measures the error, or difference, between the original signal and the reconstructed signal. It is expressed as ||x − x̂||_2^2 / ||x||_2^2, where x is the original signal and x̂ is the reconstructed signal. The lower and closer to zero the NMSE, the better the performance of the scheme.
2- Signal-to-noise ratio (SNR): a way to measure how much the noise or errors have distorted the signal. It is defined as SNR = 10 log(P_signal / P_noise). For our work, the signal is the CS output signal before quantization, and the noise is the error remaining in the recovered signal after reconstruction. The higher the value of this parameter, the better. Its unit is the decibel (dB).
3- Structural similarity (SSIM): a parameter that measures the similarity between the original and reconstructed signals. Its value lies between 0 and 1; the closer it is to 1, the closer the reconstructed signal is to the original one [30].

Chapter 4

Results

4.1 Parameters Used in the Framework

This chapter discusses the results obtained from the different schemes applied. We start by presenting the parameters used for the improved compressed sensing (CS) algorithm. The signals that we are encoding are electroencephalogram signals, collected from 23 channels. The data was compressed up to a 90% compression ratio (CR) with the help of CS.
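The NMSE and SNR measures defined in section 3.6 can be computed as follows (a sketch with our own function names; note that with this definition the SNR in dB is simply −10 log10 of the NMSE):

```python
import numpy as np

def nmse(x, x_hat):
    """Normalized mean square error: ||x - x_hat||_2^2 / ||x||_2^2."""
    x = np.asarray(x, float)
    x_hat = np.asarray(x_hat, float)
    return float(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

def snr_db(x, x_hat):
    """SNR = 10 log10(P_signal / P_noise), taking the reconstruction
    error x - x_hat as the noise; equivalently -10 log10(NMSE)."""
    return float(-10 * np.log10(nmse(x, x_hat)))
```

This reciprocal relationship is visible in the result tables of this chapter: rows with a smaller NMSE consistently show a larger SNR.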
The dictionary used is the inverse discrete cosine transform (DCT^−1) dictionary. The reconstruction algorithm used is the BSBL-BO algorithm explained in [15].

Table 4.1: Results for Different Values of qbits

qbits   NMSE              SNR (in dB)   SSIM
8       0.03526 * 10^−3   38.6601       0.9756
7       0.06316 * 10^−3   32.8572       0.9611
6       0.25824 * 10^−3   26.7608       0.9224
5       1 * 10^−3         20.7383       0.841

Table 4.1 gives the NMSE, SNR and SSIM for different values of the number of quantization bits qbits. We started with qbits = 8 bits and the results were excellent. We then decreased the number of quantization bits to 7 bits to see how much the parameters would be affected. Logically, we expected the values to be slightly worse, which was the case, but even then the results were still excellent. We then resent the data with qbits = 6 bits, and the results were still very good. We tried to decrease qbits to only 5 bits, but as can be seen in table 4.1, the results were not satisfactory. Usually, with SSIM = 0.841, the reconstructed signal still strongly resembles the original signal, but we cannot afford such a loss in signal similarity because we need to account for the losses that the channel errors will introduce to the data, which would further lower the SSIM. Hence, the lowest value of qbits at which we can still achieve good reconstruction is 6 bits, meaning that there are 2^6 = 64 quantization levels.

The RS codes used had a codeword length of n = 255 symbols. Three different data lengths k were used: 223, 193 and 153. By applying equation 2.33, we can calculate how many errors each code is capable of correcting: the RS(255,223) code can correct up to 16 errors, the RS(255,193) code up to 31 errors, and the RS(255,153) code up to 51 errors.
It becomes immediately clear that the RS(255,153) code would yield better results than the other two codes in severe channel conditions because of its ability to correct more errors, but the trade-off is that more bits are transmitted.

The parameters for the 2-state Markov model were calculated from the values of the probability of packet loss, α, and the average packet burst length, β. The values of α and β for each of the 3 noisy channels were obtained from previous statistics on global system for mobile (GSM) communications (GSM is a standard used by mobile phones in digital cellular networks). The values of α and β are given in table 4.2.

Table 4.2: Values for α and β

Channel Condition   α          β
Good                0.001167   1.076
Average             0.02825    1.378
Poor                0.12372    1.429

From the values of α and β, we calculated the entries pij of the probability transition matrix using equations 2.30 and 2.31. These equations allow us to calculate the values of Q and q, which are the self-loop probabilities of the good and bad states of the Markov model respectively. From Q and q we calculate P = 1 − Q and p = 1 − q. The transition probability matrices for the 3 channels are then:

Pgood    = | 0.9989  0.0011 |
           | 0.9294  0.0706 |

Paverage = | 0.9789  0.0211 |
           | 0.7257  0.2743 |

Ppoor    = | 0.9012  0.0988 |
           | 0.6998  0.3002 |

The interleaver used in sections 3.3 and 3.5 is an (Lepoch/12)-by-12 interleaver, where Lepoch is the length of the encoded epoch (i.e. after source and channel encoding). This means that the interleaver depth is 12, which in turn means that the data is rearranged so that every 12th symbol in the epoch is placed one after the other.
Therefore, if we have the 24 symbols

ABCDEFGHIJKLMNOPQRSTUVWX,

this interleaver rearranges them as

AMBNCODPEQFRGSHTIUJVKWLX.

4.2 Huffman Code Followed by RS Code

Tables 4.3, 4.4 and 4.5 show the results obtained when the data was transmitted through the randomly generated error sequences using the three different RS codes, the (255,223), (255,193) and (255,153) codes respectively. Tables 4.6, 4.7 and 4.8 show the results of the three RS codes using the pre-generated error sequences for each of the three channel conditions (good, average, poor).

As expected, the case when RS(255,153) was used achieved the best results. However, for poor channel conditions, the SSIM of the reconstructed signal was low, which means that this scheme is not suited to bad channel conditions.

Table 4.3: Results for Random Patterns Using RS(255,223)

Channel   NMSE              SNR (in dB)   SSIM
Good      0.25824 * 10^−3   26.7608       0.9224
Average   29.6 * 10^−3      21.4714       0.8127
Poor      FAILED            FAILED        FAILED

Overall, the only combination in this section that fared well was the Huffman code followed by the RS(255,153) code, but even that pairing gave results that are not satisfactory under poor channel conditions. Figures 4.1, 4.2 and 4.3 show the reconstructed signal in green superimposed on the original signal in blue, for the cases when RS(255,223) was used in good and average conditions, and when RS(255,193) was used in bad conditions, respectively.

Notice that for the RS(255,223) code, the scheme completely fails under poor channel conditions when using the randomly generated error sequences, and under all channel conditions when using the fixed generated error sequences. The RS(255,193) code fails as well under poor channel conditions when using the fixed generated error sequence. The reason for these failures is that the number of errors introduced by the channel in a certain RS codeword is larger than what the RS decoder can correct.
In the case of the RS(255,223) and RS(255,193) codes, the number of errors that can be corrected is 16 and 31 respectively. Uncorrected errors change the symbol stream and hence introduce a codeword that is not found in the dictionary of the Huffman code. In such cases, the Huffman decoder can no longer decode the codewords and the whole scheme fails.

In the cases where the scheme did not fail but the results achieved were less than satisfactory, there is another reason. If the number of errors introduced is larger than what the RS decoder can correct, it can happen that the erroneous symbols still form codewords in the Huffman dictionary, but not the codewords that were actually transmitted. The Huffman decoder then still functions, however a propagated error occurs starting from the first wrongly decoded codeword. To make this clearer, assume that the transmitted sequence was

AAABCDADBBD.

If an error occurs at the 4th symbol, i.e. symbol B, there is no certainty that the error will not affect the rest of the sequence. The Huffman decoder would still be able to decode the sequence, because even though the symbol sequence is erroneous, the errors produce symbols that are still in the Huffman dictionary. The decoded sequence could therefore be

AAACABDDCBA,

which is obviously not the transmitted sequence. This degrades the reconstructed signal, and this is why some numbers, such as those in table 4.4 for average and poor channel conditions, are not better.

Table 4.4: Results for Random Patterns Using RS(255,193)

Channel   NMSE              SNR (in dB)   SSIM
Good      0.25824 * 10^−3   26.7608       0.9224
Average   19.8 * 10^−3      23.9993       0.852
Poor      211.8 * 10^−3     5.2918        0.4068

Figure 4.1 shows an almost perfect reconstruction of the signal using the RS(255,223) code. The small differences between the original and reconstructed signals are attributed to the quantization error introduced at the encoder.
Figure 4.2 shows that the algorithm performed close to the optimal case (which is the result obtained in table 4.3 for the good channel case). Figure 4.3 shows that some samples are not correctly reconstructed in the output signal; it also shows a small shift in the reconstructed samples in some parts of the signal, which explains why the SSIM was only 0.4068.

Table 4.5: Results for Random Patterns Using RS(255,153)

Channel   NMSE              SNR (in dB)   SSIM
Good      0.25824 * 10^−3   26.7608       0.9224
Average   2.3 * 10^−3       25.4091       0.9263
Poor      142.4 * 10^−3     6.9801        0.5438

Table 4.6: Results for Fixed Patterns Using RS(255,223)

Channel   NMSE     SNR (in dB)   SSIM
Good      FAILED   FAILED        FAILED

Table 4.7: Results for Fixed Patterns Using RS(255,193)

Channel   NMSE              SNR (in dB)   SSIM
Good      0.25824 * 10^−3   26.7608       0.9224
Average   0.25824 * 10^−3   26.7608       0.9224
Poor      FAILED            FAILED        FAILED

Table 4.8: Results for Fixed Patterns Using RS(255,153)

Channel   NMSE              SNR (in dB)   SSIM
Good      0.25824 * 10^−3   26.7608       0.9224
Average   0.25824 * 10^−3   26.7608       0.9224
Poor      4.7 * 10^−3       25.4211       0.8796

Figure 4.1: Huffman followed by RS code (255,223) in good channel conditions

Figure 4.2: Huffman followed by RS code (255,223) in average channel conditions

Figure 4.3: Huffman followed by RS code (255,193) in bad channel conditions

4.3 Huffman Code Followed by RS Code and Interleaving

Tables 4.9, 4.10 and 4.11 again show the results obtained when the data was sent through randomly generated channels using the three different RS codes, and tables 4.12, 4.13 and 4.14 show the results when a fixed error stream is used.

The results show that once again the RS(255,223) code fails completely under bad channel conditions for randomly generated error sequences, as well as for all channel conditions when using the pre-generated fixed error sequences. So does the RS(255,193) code under poor channel conditions in the pre-generated error sequence scenario. Apart from that, however, the algorithm fared better compared to the previous section.
This is attributed to the interleaver, which spreads out the errors to other parts of the data stream. Sometimes the errors happen in large bursts, which would render the channel decoders helpless and unable to correctly decode the data. Spreading those errors out makes it easier, and more likely, for the decoders to retrieve the correct data.

Table 4.9: Results for Random Patterns Using RS(255,223) and Interleaving

Channel   NMSE              SNR (in dB)   SSIM
Good      0.25824 * 10^−3   26.7608       0.9224
Average   19.18 * 10^−3     23.8635       0.8475
Poor      FAILED            FAILED        FAILED

Table 4.10: Results for Random Patterns Using RS(255,193) and Interleaving

Channel   NMSE              SNR (in dB)   SSIM
Good      0.25824 * 10^−3   26.7608       0.9224
Average   0.25824 * 10^−3   26.7608       0.9224
Poor      184.1 * 10^−3     6.1303        0.4435

Table 4.11: Results for Random Patterns Using RS(255,153) and Interleaving

Channel   NMSE              SNR (in dB)   SSIM
Good      0.25824 * 10^−3   26.7608       0.9224
Average   0.25824 * 10^−3   26.7608       0.9224
Poor      71.3 * 10^−3      19.506        0.7494

Table 4.12: Results for Fixed Patterns Using RS(255,223) and Interleaving

Channel   NMSE     SNR (in dB)   SSIM
Good      FAILED   FAILED        FAILED

Table 4.13: Results for Fixed Patterns Using RS(255,193) and Interleaving

Channel   NMSE              SNR (in dB)   SSIM
Good      0.25824 * 10^−3   26.7608       0.9224
Average   0.25824 * 10^−3   26.7608       0.9224
Poor      FAILED            FAILED        FAILED

Figure 4.4 shows the improvement that interleaving achieves by superimposing the green reconstructed signal over the blue original one, for the RS(255,223) code under average channel conditions using the random error streams.

Figure 4.4: Huffman followed by RS code (255,223) with interleaving in average channel conditions

Figure 4.5 also shows the improvement in the case of the RS(255,193) code under poor channel conditions, when compared to the exact same scenario without interleaving (in section 4.2).
These improvements though, as the tables also show, are not good enough for this scheme to be adopted for the transmission of CS-EEG signals.

Table 4.14: Results for Fixed Patterns Using RS(255,153) and Interleaving

Channel   NMSE              SNR (in dB)   SSIM
Good      0.25824 * 10^−3   26.7608       0.9224
Average   0.25824 * 10^−3   26.7608       0.9224
Poor      0.25824 * 10^−3   26.7608       0.9224

Figure 4.5: Huffman followed by RS code (255,193) with interleaving in bad channel conditions

4.4 No Source Coding Followed by RS Code

The fact that the conventional approach of using a source coder followed by a channel coder failed in its performance made us consider dropping any further source coding, since CS already acts as a source coder by compressing 90% of the data. This is what this section implements: CS followed directly by the RS encoder.

Table 4.15: Results for Random Patterns Using RS(255,223) with no Source Coding

Channel   NMSE              SNR (in dB)   SSIM
Good      0.25824 * 10^−3   26.7608       0.9224
Average   10.1 * 10^−3      20.9918       0.8946
Poor      74.1 * 10^−3      5.5298        0.5881

The next 5 tables (tables 4.15 - 4.19) show the results obtained from this simulation. The first thing we notice is that the algorithm never fails, regardless of the channel conditions. This is because there is no pre-determined dictionary used anywhere that allows only a specific set of decodable codewords. The fact that it does not stop working under any circumstance already favors it over the two schemes presented in sections 3.2 and 3.3.

Table 4.16: Results for Random Patterns Using RS(255,193) with no Source Coding

Channel   NMSE              SNR (in dB)   SSIM
Good      0.25824 * 10^−3   26.7608       0.9224
Average   0.25824 * 10^−3   26.7608       0.9224
Poor      69.3 * 10^−3      9.4839        0.7423

The next noticeable improvement in the results is that this algorithm outperforms the previous two in every aspect, under every condition, with the exception of the total number of bits sent per epoch (these numbers will be compared in section 4.6).
Figure 4.6 further proves the point that it outperforms the previous two scenarios, for the RS(255,193) code using the randomly generated error sequences under poor conditions.

Table 4.17: Results for Random Patterns Using RS(255,153) with no Source Coding

Channel   NMSE              SNR (in dB)   SSIM
Good      0.25824 * 10^−3   26.7608       0.9224
Average   0.25824 * 10^−3   26.7608       0.9224
Poor      0.25824 * 10^−3   26.7608       0.9224

Table 4.18: Results for Fixed Patterns Using RS(255,223) with no Source Coding

Channel   NMSE              SNR (in dB)   SSIM
Good      0.25824 * 10^−3   26.7608       0.9224
Average   0.25824 * 10^−3   26.7608       0.9224
Poor      33.1 * 10^−3      5.6462        0.4801

Table 4.19: Results for Fixed Patterns Using RS(255,193/153) with no Source Coding

Channel   NMSE              SNR (in dB)   SSIM
Good      0.25824 * 10^−3   26.7608       0.9224
Average   0.25824 * 10^−3   26.7608       0.9224
Poor      0.25824 * 10^−3   26.7608       0.9224

In fact, for the RS(255,153) code, the algorithm performs ideally under all channel conditions, whether using the pre-generated or the randomly generated error sequences. The RS(255,193) code also performs very well, except when using the randomly generated error sequences in the poor channel scenario. Therefore, up to this point, the combination of no source coding with the RS(255,153) code and no interleaving is the scheme of choice.

Figure 4.6: No source coding followed by RS code (255,193) in bad channel conditions

4.5 No Source Coding Followed by RS Code and Interleaving

Table 4.20: Results for Random Patterns Using RS(255,223) with Interleaving, no Source Coding

Channel   NMSE              SNR (in dB)   SSIM
Good      0.25824 * 10^−3   26.7608       0.9224
Average   0.25824 * 10^−3   26.7608       0.9224
Poor      41.6 * 10^−3      10.6303       0.7357

Tables 4.20 - 4.23 show the results obtained in this section. All parameters improved under all channel conditions. The RS(255,193) code performance improved immensely compared to the last section (NMSE: 0.0181 vs 0.0693, SNR: 21.2868 vs 9.4839, SSIM: 0.8795 vs 0.7423), but is still not good enough to recommend it as the code of choice for the transmission of CS-EEG signals.
The scheme of choice, after all the results were obtained, is the one that uses no source coding followed by the RS(255,153) code and interleaving.

Table 4.21: Results for Random Patterns Using RS(255,193) with Interleaving, no Source Coding

Channel   NMSE              SNR (dB)   SSIM
Good      0.25824 × 10^-3   26.7608    0.9224
Average   0.25824 × 10^-3   26.7608    0.9224
Poor      18.1 × 10^-3      21.2868    0.8795

Table 4.22: Results for Random Patterns Using RS(255,153) with Interleaving, no Source Coding

Channel   NMSE              SNR (dB)   SSIM
Good      0.25824 × 10^-3   26.7608    0.9224
Average   0.25824 × 10^-3   26.7608    0.9224
Poor      0.25824 × 10^-3   26.7608    0.9224

Table 4.23: Results for Fixed Patterns Using RS(255,223/193/153) with Interleaving, no Source Coding

Channel   NMSE              SNR (dB)   SSIM
Good      0.25824 × 10^-3   26.7608    0.9224
Average   0.25824 × 10^-3   26.7608    0.9224
Poor      0.25824 × 10^-3   26.7608    0.9224

4.6 General Comparisons

Table 4.24: Encoder Processing Times of the Approaches

Codeword Data Length   Time (With Huffman)   Time (Without Huffman)
RS(255,223)            1.75 sec              1.65 sec
RS(255,193)            2.44 sec              2.34 sec
RS(255,153)            4.4 sec               4.3 sec

Table 4.24 shows the average processing time of each algorithm at the encoder side (per epoch) under the three different RS codes used. The table only accounts for the time needed to process the quantization, source coding, channel coding and packetization. In all three cases, the no source coding schemes ran slightly faster than their counterparts using Huffman. The Huffman code accounts for 0.1 seconds of processing time, quantization for 0.03 seconds, and interleaving for 0.001 seconds, which means the bulk of the processing time was due to the RS code. As the data length of the RS code decreases, its processing time increases, because more RS codewords are generated and hence more codewords need to be processed. There is no escaping the use of a channel encoder when transmitting in non-ideal channel conditions.
Therefore, the processing times of the RS code have to be tolerated.

Table 4.25 compares the encoder processing time with and without Huffman, as well as the time of the encoder part of the CS algorithm, for two different values of the number of quantization bits, qbits.

Table 4.25: Encoder Processing Times Comparison with CS Algorithm using RS(255,223)

qbits    CS Encoding time            Approach w. Huffman   Approach w/o Huffman
6 bits   4 sec once, 0.0016 sec/ep   1.75 sec/ep           1.65 sec/ep
7 bits   4 sec once, 0.0016 sec/ep   2.1 sec/ep            1.96 sec/ep

The first thing we notice is that the processing time of the CS algorithm is the same in both cases, because the CS encoder runs before quantization is applied to the data. The second thing we observe is that the CS encoder entry has two values. The first, 4 seconds, is a one-time cost for the whole block of data (i.e. for all the epochs); the second, 0.0016 seconds, is the same for every epoch. This very small per-epoch time was expected, since one of the main advantages of the CS algorithm is that its encoder-side computations are quick, leaving most of the work to the receiver end.

The third column of table 4.25 gives the encoder processing time of the Huffman and RS(255,223) combination. The processing time of the RS code is once again evident: the processing time increased by 0.35 seconds per epoch when the number of quantization bits was increased. The same kind of increase occurs in the fourth column of the table, which gives the encoder processing times for the no source coding and RS(255,223) scheme.
As can be seen, the increase in processing time there is 0.31 seconds per epoch.

Table 4.26 shows the difference in the number of bits sent per epoch with and without the use of the Huffman code.

Table 4.26: Number of bits sent per epoch

Codeword Data Length   With Huffman   Without Huffman
RS(255,223)            2170           2295
RS(255,193)            2434           2550
RS(255,153)            2938           3060

The difference is approximately 125 bits in the three cases of the RS code used. That difference takes into account the symbols required to transmit the dictionary of the Huffman code with every epoch. It is possible to use just one dictionary for all the epochs, but this would not be the optimal compression scheme; it is better to send each epoch with its own dictionary of codewords, even though this leads to added symbols being transmitted. This table shows the only area in which the scheme with Huffman coding actually outperformed the scheme without it: the addition of the Huffman code achieved a 5.45 % compression ratio compared to the other scheme. However, given that the better reconstruction results occurred when Huffman is not used, Huffman is not a very suitable algorithm for the transmission of CS-EEG signals in WBSNs, a sentiment echoed in [17].

The total data available is two hours of EEG data from a patient suffering a seizure. The total processing time of this data (encoding and decoding) is over 34 hours, as the CS decoding algorithm takes about 16 seconds per epoch to reconstruct the 23-channel EEG signals. We used 5 minutes of this data for our purposes; its total processing time was approximately 1 hour and 30 minutes. The data was transmitted through the different channel conditions a total of 132 times.

In conclusion, after all the results were presented and analyzed, the algorithm of choice is the one presented in section 3.5, i.e. the one where no source coding is applied, followed by the RS(255,153) code and an interleaver.
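The bit counts in Table 4.26 make the Huffman saving easy to check; the quoted 5.45 % corresponds to the RS(255,223) case, and the saving shrinks for the stronger codes:

```python
# Per-epoch bit counts from Table 4.26: (with Huffman, without Huffman).
bits_per_epoch = {
    "RS(255,223)": (2170, 2295),
    "RS(255,193)": (2434, 2550),
    "RS(255,153)": (2938, 3060),
}

for code, (with_huff, without_huff) in bits_per_epoch.items():
    saving = 100 * (without_huff - with_huff) / without_huff
    print(f"{code}: Huffman saves {saving:.2f} % of the transmitted bits")
# RS(255,223): 5.45 %, RS(255,193): 4.55 %, RS(255,153): 3.99 %
```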
It is true that using the RS(255,153) code means sending more encoded symbols than the RS(255,223) code, since it encodes every 153 data symbols into 255 symbols compared to 223 data symbols into 255 symbols, but the data arrives correctly and is properly reconstructed, which in the end is the most important condition. There is always a small and sometimes inexpensive trade-off to achieve this condition, and this is one of them.

Other schemes were considered at first. Concatenated Huffman coding was an idea under consideration, but once the results of the conventional combination of algorithms came out, the idea was dropped. Another possibility was to study the correlation of the EEG signals and exploit it, but since the CS algorithm used already did that very well, there was no correlation left to exploit, so this idea was let go as well. A third approach was to apply low-density parity-check (LDPC) codes, but for lack of time, this approach was not implemented.

Chapter 5

Summary and Conclusions

5.1 Main Results

This thesis is concerned with elongating the battery life at the EEG sensor node. To save on the energy consumed by wireless transmission, the acquired EEG is first compressed using the block-sparse Bayesian learning (BSBL) compressed sensing (CS) approach and quantized. We then studied the performance of two methods for transmitting the CS-quantized EEG signals.

The first method, presented in section 2 of chapter 3, proposed the use of a Huffman code followed by a Reed-Solomon (RS) encoder to protect the transmitted data. The codeword length of the RS code was varied to find a suitable length capable of correcting enough errors caused by the channel.
The results were acceptable under very good and average channel conditions but deteriorated quickly when the channel conditions worsened. Another crucial point we observed is the computation time needed at the encoder for this whole scheme to be processed, which proves to be costly in terms of energy.

In section 3 of chapter 3 we add interleaving right after the packetization of the data. The results yielded a better performance in all cases when compared to the scheme without interleaving. The only advantage of incorporating Huffman coding was the compression gain (5.45 %) it achieved on data that was already compressed using CS; the total compression achieved, including CS and Huffman, is thus 90.545 %. This gain, however, is not enough to justify the use of Huffman, since its performance under noisy or non-ideal channel conditions is unsatisfactory.

The second method (presented in section 4 of the same chapter) proposed transmitting the CS-compressed data without any source or further lossless coding; it simply applies the RS code before transmitting the data. The results showed a noticeable improvement in performance when compared to the first proposed method, as evidenced by the results under poor channel conditions, where the reconstruction of the received data was closer to the actual transmitted data.

The final method applied interleaving to the above scheme (no further compression followed by an RS encoder). Its results were again superior to the case where no interleaving was applied. In fact, this scheme, in the case where the RS(255,153) code is used, outperformed all the others and is the scheme of choice for the transmission of compressed-sensed EEG data.

Before starting the experiments, we expected that the Huffman code would not perform well in very noisy channels, but we did not expect the results to be as underwhelming as they ended up being; they were in fact worse than our original expectation.
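The lossless property of Huffman coding discussed above (perfect reconstruction when no channel errors occur) can be illustrated with a minimal heapq-based sketch; this is a generic textbook implementation, not the thesis's actual coder:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix code (symbol -> bitstring) from symbol frequencies."""
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol case
        return {s: "0" for s in heap[0][2]}
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)      # two least-frequent subtrees
        w2, i2, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, [w1 + w2, i2, merged])
    return heap[0][2]

def encode(symbols, code):
    return "".join(code[s] for s in symbols)

def decode(bits, code):
    inverse, out, cur = {b: s for s, b in code.items()}, [], ""
    for bit in bits:
        cur += bit
        if cur in inverse:                   # prefix property: unique match
            out.append(inverse[cur])
            cur = ""
    return out

data = [3, 1, 3, 3, 0, 1, 3, 2]              # e.g. quantized CS measurements
code = huffman_code(data)
assert decode(encode(data, code), code) == data   # lossless roundtrip
```

In an error-free channel the roundtrip is exact, which is why Huffman is optimal from a bit-count perspective; a single flipped bit, however, can desynchronize the decoder for the rest of the stream, consistent with the poor noisy-channel behavior reported above.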
From the point of view of decreasing the number of bits transmitted, the Huffman code would be optimal under ideal conditions because it can perfectly reconstruct the compressed data. As mentioned above, it decreased the total number of bits transmitted per epoch by 5.45 %. However, its poor performance under noisy channel conditions hinders its application for our purposes. The processing times of the encoder were given for every scenario, showing that the schemes of sections 3.4 and 3.5 (using BSBL CS compression, 6-bit quantization, and (255,223/193/153) Reed-Solomon coding followed by interleaving) were processed slightly faster than the schemes of sections 3.2 and 3.3 (using BSBL CS compression, 6-bit quantization, Huffman coding, and (255,223/193/153) Reed-Solomon coding followed by interleaving).

The bulk of the processing time came from the RS encoder: the Huffman code accounted for only 0.1 seconds of processing time, and quantization and interleaving together for only 0.031 seconds. The processing times of the RS encoder increased as the data length in each codeword decreased.

Throughout the thesis, we had to design the schemes to suit the improved BSBL algorithm of [15]. To the best of our knowledge, this CS-based algorithm is close to optimal, even though we cannot prove it mathematically. The results obtained using this algorithm were impressive, and they formed the upper limit that the work in this thesis was trying to reach. We came extremely close, falling only 0.9 % short of the upper-limit value of the structural similarity index and 3.19 % off the upper-limit value of the normalized mean square error.

Considering that the encoder stage of the wireless body sensor network (WBSN) must have little processing complexity due to the constraint on energy, we improved that stage by removing the whole source coding block, since the 90 % compression ratio already achieved using the algorithm of [15] is quite high.
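The 0.9 % and 3.19 % figures can be reproduced from the best-scheme results of Appendix A (NMSE 0.055, SSIM 0.8446) and the upper limit of [15] (NMSE 0.0533, SSIM 0.8523), expressed as a fraction of the upper-limit value:

```python
# Gap between achieved results and the no-quantization upper limit of [15],
# as a percentage of the upper-limit value.
ssim_gap = 100 * (0.8523 - 0.8446) / 0.8523
nmse_gap = 100 * (0.0550 - 0.0533) / 0.0533
print(f"SSIM gap: {ssim_gap:.2f} %")   # about 0.90 %
print(f"NMSE gap: {nmse_gap:.2f} %")   # about 3.19 %
```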
Usually, lossy algorithms are faster in terms of running time and achieve higher compression rates than lossless algorithms such as the Huffman code. However, they do not reconstruct the signal perfectly at the receiver end, and they would still add more processing time compared to cases where source coding is not present at all.

5.2 Future Work

The channel coding algorithm used is very efficient, both in terms of speed and in its error correction and detection capabilities. Lately, low-density parity-check (LDPC) codes have been applied in many applications with bandwidth constraints, most notably the satellite transmission of digital television. Implementing them in the world of WBSNs could prove useful and is hence one way that could possibly improve the results obtained in this work.

The actual energy required for transmitting the bits at the sensor node should be studied in detail and in conjunction with building a hardware prototype for this problem.

Appendix A

The whole structure of the thesis started with the data being sampled with compressed sensing applied to it, then quantized, passed through a source coding algorithm followed by a channel encoder, transmitted, and then channel- and source-decoded for every epoch, before all epochs were reconstructed using the algorithm of [15].

The results given in chapter 4 were for all the schemes starting before the quantization of the data and ending at source decoding. They did not include the blocks before and after, which is essentially the work done in [15]. This appendix gives the results of the whole framework including that of [15], i.e.
from before the data was sampled until after it was reconstructed using the improved BSBL-BO algorithm.

According to [15], the best results achieved by the algorithm for a 90 % compression ratio using a discrete cosine transform dictionary (without any source coding, channel coding or interleaving, and in the ideal channel case, i.e. with no errors introduced by the channel) were as follows:

NMSE = 0.0533
SNR = 12.7347
SSIM = 0.8523

These results are the upper limit that this thesis attempted to approach as closely as possible; were it not for the quantization error introduced at the encoder, it would have been possible to reach these numbers. The tables below show the results achieved for each scheme applied.

Table A.1: Results for Random Patterns Using RS(255,223)

Channel   NMSE     SNR (dB)   SSIM
Good      0.055    12.5925    0.8446
Average   0.1482   8.1202     0.7596
Poor      FAILED   FAILED     FAILED

Table A.2: Results for Random Patterns Using RS(255,193)

Channel   NMSE     SNR (dB)   SSIM
Good      0.055    12.5925    0.8446
Average   0.0924   10.1137    0.8213
Poor      0.6986   1.4722     0.3401

An important note is that the results above are slightly different from the values found in [15]. This is because less of the total data was processed, since the processing time of the total data available was over 34 hours.
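For reference, assuming the standard definitions NMSE = ‖x̂ − x‖² / ‖x‖² and SNR = 10 log10(‖x‖² / ‖x̂ − x‖²), the two metrics reported in these tables are tied by SNR = −10 log10(NMSE); under these definitions the upper-limit pair above (NMSE 0.0533, SNR 12.7347 dB) is consistent to within rounding. A minimal sketch:

```python
import math

def nmse(x, x_hat):
    """Normalized mean square error: ||x_hat - x||^2 / ||x||^2."""
    err = sum((a - b) ** 2 for a, b in zip(x, x_hat))
    return err / sum(a ** 2 for a in x)

def snr_db(x, x_hat):
    """Reconstruction SNR in dB; equals -10*log10(NMSE) by definition."""
    return -10 * math.log10(nmse(x, x_hat))

# Toy reconstruction example (values are illustrative, not EEG data).
x = [1.0, -2.0, 3.0, 0.5]
x_hat = [1.1, -1.9, 2.9, 0.6]
print(round(nmse(x, x_hat), 5), "NMSE,", round(snr_db(x, x_hat), 2), "dB")
```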
After running the algorithm once and then running it again on a smaller amount of data, it was clear that the values would remain extremely close, while the processing time would drop to only about an hour and a half.

Table A.3: Results for Random Patterns Using RS(255,153)

Channel   NMSE     SNR (dB)   SSIM
Good      0.055    12.5925    0.8446
Average   0.0597   11.8943    0.8412
Poor      0.4847   1.9502     0.3912

Table A.4: Results for Random Patterns Using RS(255,223) and Interleaving

Channel   NMSE     SNR (dB)   SSIM
Good      0.055    12.5925    0.8446
Average   0.0915   10.1271    0.8067
Poor      FAILED   FAILED     FAILED

Table A.5: Results for Random Patterns Using RS(255,193) and Interleaving

Channel   NMSE     SNR (dB)   SSIM
Good      0.055    12.5925    0.8446
Average   0.055    12.5925    0.8446
Poor      0.5736   2.2068     0.3082

Table A.6: Results for Random Patterns Using RS(255,153) and Interleaving

Channel   NMSE     SNR (dB)   SSIM
Good      0.055    12.5925    0.8446
Average   0.055    12.5925    0.8446
Poor      0.2586   6.0035     0.6995

Table A.7: Results for Random Patterns Using RS(255,223) with no Source Coding

Channel   NMSE     SNR (dB)   SSIM
Good      0.055    12.5925    0.8446
Average   0.0893   10.0151    0.8249
Poor      0.3568   4.1059     0.5152

Table A.8: Results for Random Patterns Using RS(255,193) with no Source Coding

Channel   NMSE     SNR (dB)   SSIM
Good      0.055    12.5925    0.8446
Average   0.055    12.5925    0.8446
Poor      0.2998   5.0786     0.6026

Table A.9: Results for Random Patterns Using RS(255,153) with no Source Coding

Channel   NMSE     SNR (dB)   SSIM
Good      0.055    12.5925    0.8446
Average   0.055    12.5925    0.8446
Poor      0.055    12.5925    0.8446

Table A.10: Results for Random Patterns Using RS(255,223) with Interleaving, no Source Coding

Channel   NMSE     SNR (dB)   SSIM
Good      0.055    12.5925    0.8446
Average   0.055    12.5925    0.8446
Poor      0.2133   6.0178     0.6142

Table A.11: Results for Random Patterns Using RS(255,193) with Interleaving, no Source Coding

Channel   NMSE     SNR (dB)   SSIM
Good      0.055    12.5925    0.8446
Average   0.055    12.5925    0.8446
Poor      0.1029   10.0081    0.8124

Table A.12: Results for Random Patterns Using RS(255,153) with Interleaving, no Source Coding

Channel   NMSE     SNR (dB)   SSIM
Good      0.055    12.5925    0.8446
Average   0.055    12.5925    0.8446
Poor      0.055    12.5925    0.8446

Table A.13: Results for Fixed Patterns Using RS(255,223) with and without Interleaving

Channel   NMSE     SNR (dB)   SSIM
Good      FAILED   FAILED     FAILED

Table A.14: Results for Fixed Patterns Using RS(255,193) with and without Interleaving

Channel   NMSE     SNR (dB)   SSIM
Good      0.055    12.5925    0.8446
Average   0.055    12.5925    0.8446
Poor      FAILED   FAILED     FAILED

Table A.15: Results for Fixed Patterns Using RS(255,153)

Channel   NMSE     SNR (dB)   SSIM
Good      0.055    12.5925    0.8446
Average   0.055    12.5925    0.8446
Poor      0.0681   10.876     0.8446

Table A.16: Results for Fixed Patterns Using RS(255,153) and Interleaving

Channel   NMSE     SNR (dB)   SSIM
Good      0.055    12.5925    0.8446
Average   0.055    12.5925    0.8446
Poor      0.055    12.5925    0.8446

Table A.17: Results for Fixed Patterns Using RS(255,223) with no Source Coding

Channel   NMSE     SNR (dB)   SSIM
Good      0.055    12.5925    0.8446
Average   0.055    12.5925    0.8446
Poor      0.343    4.6471     0.4801

Table A.18: Results for Fixed Patterns Using RS(255,193/153) with no Source Coding, with and without Interleaving

Channel   NMSE     SNR (dB)   SSIM
Good      0.055    12.5925    0.8446
Average   0.055    12.5925    0.8446
Poor      0.055    12.5925    0.8446

Table A.19: Results for Fixed Patterns Using RS(255,223) with Interleaving, no Source Coding

Channel   NMSE     SNR (dB)   SSIM
Good      0.055    12.5925    0.8446
Average   0.055    12.5925    0.8446
Poor      0.055    12.5925    0.8446

Bibliography

[1] A.M. Abdulghani, A.J. Casson, and E. Rodriguez-Villegas. Quantifying the performance of compressive sensing on scalp EEG signals. Applied Sciences in Biomedical and Communication Technologies (ISABEL), 2010 3rd International Symposium on, pages 1-5, 2010.

[2] C.O. Arinze, V.E. Idigo, C.O. Ohaneme, and V.N. Agu. Simulation and evaluation of high density cognitive hotspot network based on IEEE 802.11 minipop access point series. International Journal of Computers and Distributed Systems, 4(2):1-10, 2014.

[3] S. Aviyente. Compressed sensing framework for EEG compression. IEEE/SP 14th Workshop on Statistical Signal Processing, pages 181-184, 2007.

[4] World Bank. Population ages 65 and above (% of total). Accessed: 06/06/2014.

[5] E.J. Candes and M.B. Wakin. An introduction to compressive sampling. IEEE Signal Processing Magazine, 25(2):21-30, 2008.

[6] Diana Alejandra Sanchez-Salas, Jose Luis Cuevas-Ruiz, and Miguel Gonzales-Mendoza. MATLAB - A Fundamental Tool for Scientific Computing and Engineering Applications - Volume 2. InTech, Morn Hill, 2012.

[7] S. Fauvel. Energy-efficient compressed sensing frameworks for the compression of electroencephalogram signals, 2013.

[8] S. Fauvel, A. Agarwal, and R. Ward. Compressed sensing and energy-aware independent component analysis for compression of EEG signals. Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 973-976, 2013.

[9] S. Fauvel and R.K. Ward. An energy efficient compressed sensing framework for the compression of electroencephalogram signals. Sensors, 14(1):1474-1496, 2014.

[10] Simon Haykin. Communication Systems. Wiley, Hoboken, New Jersey, 2001.

[11] F.J. Herrmann. Randomized sampling and sparsity: getting more information from fewer samples. Geophysics, 75:173-187, 2010.

[12] H.H. Hung and L.J. Chen. An analytical study of wireless error models for bluetooth networks. Advanced Information Networking and Applications, pages 1317-1322, 2008.

[13] M. Khazi, A. Kumar, and V. M-J. Analysis of EEG using 10:20 electrode system. International Journal of Innovative Research in Science, Engineering and Technology, 1(2):185-191, 2012.

[14] D. Lechuga, O. Escandón, M. Corona, C. Montoya, R. G., A. Anguiano, C. García, and J. Moctezuma. Electroencephalographic abnormalities in patients with idiopathic insomnia. Neuroscience and Medicine, 2(3):178-184, 2011.

[15] H. Mahrous, R. Ward, and Z. Zhang. Multi-channel compression of EEG signals via compressed sensing. In the process of being published.

[16] H. Mamaghanian, N. Khaled, D. Atienza, and P. Vandergheynst. Compressed sensing for real-time energy-efficient ECG compression on wireless body sensor nodes. IEEE Transactions on Biomedical Engineering, 58(9):2456-2466, 2011.

[17] R. Mc Sweeney, C. Spagnol, E. Popovici, and L. Giancardi. The impact of source and channel coding in the communications efficiency of wireless body area networks. International Journal on Advances in Software, 2(4):337-348, 2009.

[18] B.P. Milner and A.B. James. Analysis and compensation of packet loss in distributed speech recognition using interleaving. Proceedings of INTERSPEECH, pages 1-4, 2003.

[19] M. Moudgill, A. Iancu, and D. Iancu. Galois field instructions in the sandblaster 2.0 architecture. International Journal of Digital Multimedia Broadcasting, pages 1-5, 2009.

[20] University of Virginia. Switching technology. Accessed: 14/08/2014.

[21] Charles L. Phillips and John M. Parr. Signals, Systems, and Transforms. Prentice Hall, Upper Saddle River, New Jersey, 2008.

[22] John Proakis. Digital Communications. McGraw-Hill, New York, 2000.

[23] P. Rmeily and H. Mahrous. Bayesian learning algorithm for compressive sensing of non-sparse EEG signals. Class project.

[24] N. Sireesha and V. Prasanth. Iterative multivariate interpolation for low complexity Reed-Solomon codes. International Journal of Modern Engineering Research, 2(5):3769-3772, 2012.

[25] Highland Communications Technologies. Introduction to Reed-Solomon codes. Accessed: 04/08/2014.

[26] M. Unser. Sampling—50 Years after Shannon. Proceedings of the IEEE, 88(4):569-587, 2000.

[27] Z. Zhang, T. Jung, S. Makeig, and B. Rao. Compressed sensing of EEG for wireless telemonitoring with low energy consumption and inexpensive hardware. IEEE Transactions on Biomedical Engineering, 60(1):221-224, 2013.

[28] Z. Zhang, T.P. Jung, S. Makeig, and B.D. Rao. Compressed sensing for energy-efficient wireless telemonitoring of non-invasive fetal ECG via block sparse Bayesian learning. IEEE Transactions on Biomedical Engineering, pages 1-4, 2014.

[29] Z. Zhang and B.D. Rao. Extension of SBL algorithms for the recovery of block sparse signals with intra-block correlation. IEEE Transactions on Signal Processing, 61(8):2009-2015, 2013.

[30] W. Zhou, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612, 2004.