ESTIMATION OF THE BRAIN'S HEMODYNAMIC RESPONSE FROM FMRI IMAGES USING THE EIGENVECTOR BASED ALGORITHM FOR MULTICHANNEL DECONVOLUTION

by

MARWA GADALA

BASc, The University of British Columbia, 2004

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA

May 2007

© Marwa Gadala, 2007

ABSTRACT

The ability to determine the brain's hemodynamic response without relying on an input function would be an extremely valuable asset in a large number of medical applications, and functional Magnetic Resonance Imaging (fMRI) is today one of the leading methods for developing a better understanding of the human brain. Assuming the linear time-invariant model for the observed fMRI response ([1], [2], [4]), this work provides an estimate of the hemodynamic brain response both on a regional and on an individual voxel level, as well as an estimate of the input signal that excited the brain's response. The solution to this problem is achieved using the Eigenvector-Based Algorithm for Multichannel Blind Deconvolution (EVAM) ([5], [6]) combined with Independent Component Analysis (ICA) [23]. The resulting estimate of the input signal produced by the proposed method could provide valuable insight into the actual signal that triggered the brain during the experiment, rather than the ideal signal that should have triggered it based on experimental observations. Also, contrary to previous works, no prior assumptions regarding the shape or order of the brain's response are made. When compared with the non-blind identification algorithms traditionally used in the literature, the results show a significant improvement, as the shape of the estimated hemodynamic brain response conforms with current medical understanding.

Furthermore, the estimated hemodynamic brain response is then used as a basis for determining active and inactive voxels. Two clustering methods, K-Means Clustering and Correlation-Based Clustering, are compared. Correlation-Based Clustering is found to be superior and is thus used to spatially map the active and inactive voxels. Spatial maps of important brain regions yield promising results, though spatial sparseness is not characteristic of the images. Finally, a preliminary comparison between a healthy subject and a subject afflicted with Parkinson's disease yields promising differences, especially in the left primary motor cortex, where very little activation was observed. Interestingly, symptoms of Parkinson's disease are thought to be a result of decreased stimulation of the motor cortex. Although no major medical conclusions can be drawn, due to the risk of incorrectly attributing intersubject variability to differences caused by Parkinson's disease, this preliminary comparison shows promising results that encourage future research in this area.
TABLE OF CONTENTS

Abstract
Table of Contents
List of Figures
Acknowledgements
Dedication

CHAPTER 1: LITERATURE REVIEW
1.1 The Linear Time-Invariant fMRI Response Model
1.2 Experimental Design and Optimization
1.3 Preprocessing of fMRI Data
1.4 Algorithms Used to Estimate the Hemodynamic Brain Response
1.4.1 Least Squares Method
1.4.2 Maximum Likelihood Estimator
1.4.3 Blind Deconvolution Methods
1.5 Problem Statement

CHAPTER 2: PROPOSED METHODS
2.1 Data Preprocessing
2.1.1 Principal Component Analysis
2.1.2 Applying ICA
2.1.3 Selecting Task-Related Components
2.1.4 Projecting the Original Data onto the Task-Related Components
2.1.5 Defining Regions of Interest
2.1.6 Implementation of Preprocessing Procedure
2.2 The Eigenvector-Based Algorithm for Multichannel Blind Deconvolution
2.2.1 Introduction to the Algorithm
2.2.2 Unknown System Model
2.2.3 The Adaptive System Model
2.2.4 Relation between Unknown and Adaptive Channel Roots
2.2.5 The Elimination of Extraneous Roots Using A Priori Information
2.2.6 Algorithm Implementation
2.2.7 Algorithm Testing

CHAPTER 3: RESULTS AND DISCUSSION
3.1 Description of Data Set
3.2 Outputs of the Proposed Algorithm
3.3 Comparison of EVAM Results with Least Squares Estimates
3.4 Clustering of Active and Inactive Voxels
3.4.1 K-Means Clustering
3.4.2 Correlation Clustering
3.4.3 Spatial Mapping of Active and Inactive Voxels
3.5 Comparison between Healthy and Parkinson's Diseased Patient

CHAPTER 4: CONCLUSION
Future Considerations
References

LIST OF FIGURES

Figure-1.1: Model of the Input Signal (x(t))
Figure-2.1: Flowchart Describing Implementation of the Preprocessing Stage
Figure-2.2: Block Diagram of the Unknown System Model
Figure-2.3: Block Diagram of the Adaptive System Model
Figure-2.4: Effect of Choosing Incorrect Filter Order
Figure-2.5: Graphical Representation of the Data Simulation Procedure
Figure-2.6: Comparison between the Actual and the Estimated Brain Response and Input Signal in Channel Two of the Simulated Data
Figure-2.7: Comparison between the Actual and the Estimated Measured Response in Channel Two of the Simulated Data
Figure-2.8: Comparison between the Actual and the Estimated System Responses in Channel Two of the Simulated Data with SNR = -15 dB
Figure-2.9: Comparison between the Actual and the Estimated System Responses in Channel Two of the Simulated Data with SNR = +15 dB
Figure-3.1: Comparison between the Modeled Brain Excitation Signal and the Estimated Brain Excitation Signal
Figure-3.2: Actual and Convolved fMRI Response
Figure-3.3: Comparison between the Estimated Brain Response Using EVAM and Using the Least Squares Method
Figure-3.4: Illustration of Active and Inactive Voxels
Figure-3.5: Difficulty of Visually Identifying Active and Inactive Voxels Based on the Measured fMRI Time Response
Figure-3.6: Mean Response of the Two Clusters in Region One where the K-Means Clustering Method was Effective
Figure-3.7: Mean Response of the Two Clusters in Region Five where the K-Means Clustering Method was Ineffective
Figure-3.8: Mean Response of the Two Clusters in Region Eight for the K-Means and Correlation Clustering Methods
Figure-3.9: Mean Response of the Two Clusters in Region Five for the K-Means and Correlation Clustering Methods
Figure-3.10: All Active and Inactive Voxels in the Comprehensive Brain Slices 16 and 20
Figure-3.11: Active and Inactive Voxels in the Right Cerebral Hemisphere
Figure-3.12: Active and Inactive Voxels in the Left Supplementary Motor Area
Figure-3.13: Active and Inactive Voxels in the Left Primary Motor Cortex
Figure-3.14: Preliminary Comparison between Parkinson's Diseased Patient Responses (Left Column) and Healthy Subject Responses (Right Column)

ACKNOWLEDGEMENTS

This work would not have been possible without the guidance and encouragement of my supervisors, Dr. Rabab Ward and Dr. Jane Wang. Thank you for sharing your knowledge, time, and experience with me.

DEDICATION

I dedicate this work to my dear parents, Mohamed Gadala and Omnia Gheta, to my loving siblings, Mariam and Ibrahim, and to my supportive husband, Waleed Tantawy. Thank you all for standing by my side and encouraging me along my journey in search of knowledge. May Allah bless you all.

CHAPTER 1: LITERATURE REVIEW

Functional Magnetic Resonance Imaging (fMRI) is a noninvasive medical imaging technique that can be used to study brain anatomy and physiology. During an fMRI experiment, a subject lying in a cylindrical tube housing a powerful electromagnet is asked to perform a certain cognitive task or to observe a certain stimulus [7]. Individual slices of the brain with a spatial resolution of a few millimeters are then typically acquired at a rapid sampling rate of approximately 0.5 to 1 Hz [17]. The difference between oxygenated hemoglobin, which is diamagnetic, and deoxygenated hemoglobin, which is paramagnetic, alters the T2*-weighted MRI signal and is thus exploited by fMRI to detect neural activity in the brain ([24], [30]). This signal difference is known as the blood oxygen level-dependent (BOLD) hemodynamic response and is related to neuronal activity in response to environmental stimuli. The exact nature of the relationship between neural activity and the BOLD signal is known as neurovascular coupling and is a subject of ongoing research; however, studies show that hemodynamic changes are proportional to, and therefore constitute a linear measure of, neural activity [31].

Today, fMRI is one of the leading methods for developing a better understanding of how the brain functions and coordinates different regions to perform sensory, motor, and language-related tasks [13]. More importantly, fMRI is helping scientists understand the pathology of neural diseases and assess the effects and potential risks of medication, surgery, and other forms of treatment. Its most significant advantage is that it is noninvasive and does not require injections of radioactive isotopes, thus eliminating the risk of radiation inherent in other popular imaging techniques such as Computerized Tomography (CT) [15]. fMRI experiments can also be designed to have a very short scan time, on the order of two minutes per run. Most importantly, compared with other common scanning methods, fMRI has unmatched spatial resolution, generally 1.5 mm and as fine as 1 mm [15]. However, its temporal resolution is inherently limited unless it is combined with high temporal resolution techniques such as EEG [15].

1.1 THE LINEAR TIME-INVARIANT FMRI RESPONSE MODEL

The observed fMRI response can be modeled by a linear time-invariant model ([1], [2], [4]).
According to this model, the response to an arbitrary sequence of stimuli is equal to the summation of the responses to each individual stimulus. This can be represented mathematically as:

y(t) = x(t) * h(t) + n(t)    (1.1)

where y(t) is the measured fMRI signal of a particular voxel (the three-dimensional equivalent of a pixel), x(t) is a model of the input signal that stimulated the brain, h(t) is the brain's unknown hemodynamic response (HDR) to each individual stimulus, n(t) represents additive noise, and the symbol "*" is the linear convolution operator [3].

However, the fMRI signal is not sampled continuously in time. It is a discrete-time signal determined by the repetition rate, T_R. If the hemodynamic response is assumed to have a finite duration T_HDR and can be represented adequately by a piecewise constant function with a discretization interval of ΔT_HDR, then the continuous-time representation in equation (1.1) can be converted into the following discrete-time matrix model [3]:

y = Xh + n    (1.2)

where y is an N_tp x 1 vector representing the measured time course of a single voxel, with N_tp the number of time samples; h is an N_HDR x 1 discrete-time vector representation of the continuous-time hemodynamic response h(t), with N_HDR = T_HDR / ΔT_HDR; and X is an N_tp x N_HDR stimulus convolution matrix. Essentially, X is a matrix operator representation of the time-discretized convolution with the continuous-time signal representing the event sequence that stimulated the brain, x(t). The element in the n-th row and m-th column of the stimulus convolution matrix, X, is the value of x(t) at the corresponding time-discretized lag; its exact definition is given in [3] (equation (1.3)).

It should be noted that it is desirable to choose the discretization interval, ΔT_HDR, to be shorter than the fMRI sampling interval, T_R, so that the estimated hemodynamic brain response can be represented with a finer temporal resolution [3].

In the end, the final goal of the analysis is to obtain an accurate estimate of the hemodynamic brain response h(t), based on the measured fMRI signals y(t) and a model of the input signal x(t). This problem can be divided into three sub-problems. Firstly, it is necessary to determine the optimal experimental design and a correspondingly accurate model of x(t). Secondly, the observed fMRI signal must be carefully denoised to ensure accurate estimation of the brain's hemodynamic response. Once x(t) and y(t) are optimized and corrected, the analysis becomes analogous to a channel parameterization problem in communication theory, in which one of many algorithms can be used to estimate the hemodynamic brain response, h(t).
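For illustration, the following MATLAB sketch builds a stimulus convolution matrix from a discretized stimulus and simulates a noisy voxel time course according to equation (1.2). It is a minimal example only: the repetition time, event onsets, gamma-like response shape, and all variable names are hypothetical and are not taken from the data set used in this thesis.

    % Minimal sketch of the discrete-time model y = X*h + n (equations (1.1)-(1.2)).
    % All numbers below (TR, event times, HDR shape) are illustrative assumptions.
    TR      = 2;                          % fMRI sampling interval (seconds)
    dtHDR   = 1;                          % HDR discretization interval (finer than TR)
    T_total = 150;                        % experiment length (seconds)
    t_fine  = 0:dtHDR:T_total-dtHDR;      % fine time grid

    % Stimulus modeled as scaled, time-shifted delta functions at assumed event onsets
    onsets  = [10 25 42 60 81 99 120];    % hypothetical event times (seconds)
    x_fine  = zeros(size(t_fine));
    x_fine(round(onsets/dtHDR)+1) = 1;    % unit-strength events

    % Hypothetical hemodynamic response sampled on the fine grid (gamma-like bump)
    N_HDR = 20;                           % HDR length in samples of dtHDR
    tau   = (0:N_HDR-1)*dtHDR;
    h     = (tau.^5).*exp(-tau);          % assumed shape, not the thesis model
    h     = h(:)/max(h);

    % Stimulus convolution matrix: row n holds the stimulus history seen by sample n
    Ntp = floor(T_total/TR);              % number of fMRI time samples
    X   = zeros(Ntp, N_HDR);
    for n = 1:Ntp
        for m = 1:N_HDR
            idx = round((n*TR - (m-1)*dtHDR)/dtHDR) + 1;   % time-discretized lag
            if idx >= 1 && idx <= numel(x_fine)
                X(n,m) = x_fine(idx);
            end
        end
    end

    y = X*h + 0.05*randn(Ntp,1);          % simulated noisy voxel time course, eq. (1.2)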
Furthermore, event-related experiments are versatile in their design [4]. Most importantly, the random presentation  4  of stimuli reduces potential confounds such as habituation, learning, and anticipation [4]. However, there are several disadvantages to event-related designs. One obvious problem is the loss of signal-to-noise ratio due to the transient nature of the response and the time needed between trials. Although event-related designs avoid artifacts due to predictability, they are, in theory, statistically less powerful than block designs [7]. Another major problem with event-related designs is in determining the exact timing of the events in order to correlate them with their corresponding brain response. Unfortunately, researchers today mostly model the events with little consideration to timing errors [7]. While event-related experiments are optimal for estimating the hemodynamic brain response resulting from individual events, the traditional blockrelated approach, which is also found in other imaging techniques such as Positron Emission Topography (PET), is optimal for detecting brain activity ([11], [28], [29]).  A major area of research today involves the design of event-related experiments in order to optimize the accuracy of the estimated hemodynamic brain response. The stimulus onset asynchronies (SOAs), also known as the inter-stimulus intervals are the units of time between subsequent events [2]. It was once argued based on empirical evidence that the SOAs must be at least 15 seconds and that the accuracy of the hemodynamic brain estimate is based on the mean of SOAs used in the experimental design [38]. Today, it is believed that in general, the estimator efficiency depends not on the mean, but the distribution of SOAs ([2], [3]). Research also shows that smaller SOAs that are exponentially distributed are the most efficient ([2], [3]). More specifically, long mean SOAs of greater than twenty seconds result in similar estimator efficiencies regardless of  5  whether the SOAs are constant or variable [2], However, for shorter mean SOAs, the estimator efficiency of constant SOAs falls off dramatically while that of variable SOAs increases monotonically [2]. Despite these results, there is a practical limit on the minimal SOA that can be used due to the nonlinearities resulting from habituation that become prominent at very short SOAs [2]. Furthermore, very short SOAs of less than one second are not advisable as the predicted additive effects upon the Hemodynamic response of two closely occurring stimuli break down [4].  Determining the total imaging time, the exact minimum interstimulus interval, as well as the optimal sequence and timing of events for optimal experimental design is still an area of ongoing research. One of the most promising recent advancements includes a quantitative objective criterion to help measure the efficiency of the biased maximum likelihood estimate in which variations of the total imaging time, and minimum interstimulus intervals can be varied for optimal experimental design [3]. This advancement is based on a criterion known as estimator efficiency, E, which is a measure of the expected accuracy of an estimator. Mathematically, estimator efficiency is defined as  E = (H-H) ' 2  (1.4)  where H is the estimated value of H using the given estimator, and ||.. .|| represents expected value [3]. Maximizing estimator efficiency is critical in obtaining more accurate estimates of the hemodynamic, brain response. 
The significance of this criterion is further amplified by the fact that increasing estimator efficiency is equivalent to  increasing the magnetic field of the fMRI scanner. Estimator efficiency depends on a number of factors including the nature of fMRI noise, the scan duration, the number and sequence of events as well as the number of sampling points in the estimate of the hemodynamic response [3]. To optimize the estimator efficiency researchers cannot easily change the nature of fMRI noise, but can manipulate experimental parameters to achieve an optimal design.  A common method for choosing an optimal event-related sequence is to generate a large number of random sequences and to then use the one that yields maximum estimator efficiency from the large, but finite number of possible sequences. Unfortunately, obtaining this sequence can take a prohibitive amount of computational resources. Thus, researchers have recently extended the use of m-sequences (maximum length shift register sequences) from encryption and error-correcting to fMRI in order to define a class of sequences which provide estimator efficiencies superior to those of random sequences ([11], [27]). Moreover, this bonus is achieved at a low computational cost [11].  The discussion so far is regarding the design of the fMRI experiment in terms of the total scanning time, the interstimulus intervals, and the overall optimal sequence and timing of events. However, after optimally designing and conducting the experiment there is still the question of how to mathematically model the input signal, x(t). Accurate modelling of the input signal is a critical step in accurately estimating the brain's hemodynamic response. During the fMRI scan, the patient is stimulated or asked to perform certain  7  tasks at given time intervals in order to observe his/her reactions. The time at which these stimulations occur, and the strength of each stimulation are known and can be used to model the input function, x(t), as a sum of scaled and time-shifted delta functions centred at the onset of each event ([2], [3],[15]). The result is an estimate that appears as the one in Figure-1.1, and which corresponds to the data set used in this thesis. However, it is important to note that this model of the input signal is only an estimate that is based on the expected timing and strength of the subject's reactions based on the external environment and the design of the experiment. This expectation, although similar, is most certainly different than the actual signal that excites the human brain.  -0.1 | -0.2 J n/oL_  0  '——  1  100  50  150  Time (seconds)  Figure-1.1: An Example of the Designed Stimuli Input Signal (x(t))  8  1.3 PREPROCESSING OF F M R I D A T A  A single functional magnetic resonance imaging signal is comprised of many signals resulting from various ongoing brain activities such as heart beat, breathing, and head motion. There are also less well understood sources including localized motion of brain tissue and low frequency drifts that can be measured in deceased brains [37]. In many other popular imaging techniques such as Positron Emission Tomography (PET), measurements represent physiological quantities that can be quantitatively compared with other measurements [36]. However, fMRI signals have neither a simple quantitative physiological explanation nor a well-defined reference for comparison. 
1.3 PREPROCESSING OF FMRI DATA

A single functional magnetic resonance imaging signal is comprised of many signals resulting from various ongoing brain activities such as heartbeat, breathing, and head motion. There are also less well understood sources, including localized motion of brain tissue and low-frequency drifts that can be measured even in deceased brains [37]. In many other popular imaging techniques, such as Positron Emission Tomography (PET), measurements represent physiological quantities that can be quantitatively compared with other measurements [36]. However, fMRI signals have neither a simple quantitative physiological explanation nor a well-defined reference for comparison. In fact, most fMRI experiments use the signal value at a given spatial location (voxel) during rest as a reference point for comparison, despite the fact that this baseline control contains ongoing neural activity [15]. Thus, before analyzing fMRI signals, it is essential to isolate the signals of interest, namely those task-related to the stimuli presented to the subject.

The algorithms proposed in the literature to achieve this extraction can be divided into two main classes: (1) hypothesis-driven methods, and (2) data-driven methods. In the former, the time course of each voxel is tested against one or more hypotheses, as in the Cross-Correlation Method [15]. In the Cross-Correlation Method, a voxel's time course is simply cross-correlated with a signal describing the event-related experiment. The most popular software package that uses this approach is Statistical Parametric Mapping (SPM) ([24], [25], [33]). On the other hand, data-driven methods analyze the data and search for common features. Popular examples of this class are clustering methods in the time domain and Independent Component Analysis (ICA) ([8], [9]). Hypothesis-driven methods are more commonly used; however, their main disadvantage is that they require the user to predict or model the behavioral signal based on a priori information, which is often difficult and inaccurate. Data-driven methods require no prior hypotheses and can be used to identify confounds such as head motion, or to validate a given model. However, data-driven methods may amplify intersubject variability due to their sensitivity to the underlying data structure [15].

In 1998, Independent Component Analysis (ICA) was first introduced as a powerful method in fMRI analysis [35]. ICA is a family of blind signal separation methods based on the assumption that the signal sources are statistically independent. Besides assisting in the extraction of task-related activations embedded in the fMRI signal among the complex mixture of unknown brain signals, ICA has also allowed for the detection of unexpected responses to stimuli, including random responses or transiently task-related responses [26]. If linear mixing is assumed, then the fMRI data can be represented by a space-time matrix of measurements with elements x_jt, defined as [15]

x_jt = Σ_{k=1}^{K} a_jk s_kt + ε_jt    (1.5)

where j = 1,...,J, with J the number of voxels, and t = 1,...,T, with T the number of time points. The K independent components form the mixing matrix, A, and the source matrix, S, where the columns of A represent component maps and the rows of S represent the time courses of the respective component maps. Finally, E represents spatially and temporally white noise. In spatial ICA, the columns of the mixing matrix, A, are assumed to be statistically independent, while in temporal ICA, the rows of the source matrix, S, are assumed to be statistically independent [15]. The difference between the two approaches is substantial, and to this day there is discussion on which method is better ([39], [40], [41]).
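A small MATLAB sketch of the linear mixing model in equation (1.5) is given below. The data are entirely synthetic (three hypothetical sources mixed over a few hundred voxels), and the ICA routines named in the comments are not part of base MATLAB; the sketch only shows how the mixing model and the spatial/temporal orientations relate to the data matrix.

    % Sketch of the linear mixing model in equation (1.5): X = A*S + E,
    % with hypothetical sizes (J voxels, T time points, K sources).
    J = 500; T = 149; K = 3;
    rng(1);
    S = zeros(K,T);                     % rows of S: component time courses
    S(1,:) = sin(2*pi*(1:T)/30);        % slow oscillation
    S(2,:) = double(mod(1:T,20) < 5);   % boxcar, task-like source
    S(3,:) = randn(1,T);                % noise-like source
    A = randn(J,K);                     % columns of A: component maps over voxels
    E = 0.1*randn(J,T);                 % spatially and temporally white noise
    X = A*S + E;                        % J-by-T space-time data matrix

    % Spatial ICA treats the columns of A (maps) as independent;
    % temporal ICA treats the rows of S (time courses) as independent.
    % A practical analysis would pass X (or its transpose) to an ICA routine
    % such as Infomax or FastICA, which are external toolboxes.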
To completely determine the mixing matrix, A, and the source signals, S, higher-order statistical methods are required. The most popular of these algorithms include Infomax, JADE, and FastICA [15]. Infomax and FastICA produce similar results. However, Infomax works well when looking for spatially independent patterns, but is less efficient when searching for temporally independent waveforms. Also, studies using FastICA uncovered varying quality in the correspondence between spatial and temporal modes, depending on the spatial proximity between active regions [15].

The independent components can be ordered according to a statistical criterion or according to an a priori feature of interest. Often the components are sorted according to the amount of variance explained [17]. Another common choice is to order the components by comparing each component's time course with a mathematical description of the experimental stimuli. This comparison can be done by simple visual inspection or using cross-correlations [17]. Ordering the components by their frequency content or power spectrum has also proved to be efficient in identifying task-related components [8]. More elaborate methods include identifying components with asymmetric histograms or spatial clustering [15].

1.4 ALGORITHMS USED TO ESTIMATE THE HEMODYNAMIC BRAIN RESPONSE

Sections 1.2 and 1.3 dealt with how to optimize and model the input signal and how to denoise the measured fMRI response. After these critical preprocessing steps, the analysis becomes analogous to a channel parameterization problem in communication theory, in which one of many algorithms can be used to estimate the hemodynamic brain response, h(t). In communication theory, several algorithms have been implemented to determine unknown channel parameters from a known input signal and a received output signal. Among the most popular algorithms extended from communication theory to medical image processing are: (1) Least Squares, (2) biased and unbiased Maximum Likelihood, and (3) blind deconvolution methods such as Cross Relations (CR), Iterative Quadratic Maximum Likelihood (IQML), and the Eigenvector-Based Algorithm for Multichannel Blind Deconvolution (EVAM) ([20], [21], [22]).

1.4.1 Least Squares Method

By far the most common method used to estimate the hemodynamic brain response is the traditional Least Squares method ([18], [20], [21], [23]). This method is based on the idea of minimizing an error cost function to obtain a solution for the unknown signal parameters. Assume that there is an observed set of inputs, X, and outputs, Y, which can be described as

Y = XH    (1.6)

where H represents the true parameters. It is then possible to formulate the error equation as

Y = XĤ + e    (1.7)

where e represents the error due to estimating H as Ĥ. The goal is to minimize the square of the error e. Thus, a cost function is formulated as follows:

J(Ĥ) = e^2 = e^T e = (Y − XĤ)^T (Y − XĤ)    (1.8)

The optimal estimate of H is the one that minimizes this cost function. In other words, setting ∂J/∂Ĥ = 0 and solving for Ĥ, the result is

Ĥ_LS = (X^T X)^(−1) X^T Y    (1.9)

Therefore, to minimize the squared error, the hemodynamic brain response can be estimated according to equation (1.9). In the fMRI case, taking a univariate approach, X is the stimulus convolution matrix that models the set-up of the experiment, Y is the blood oxygenation level-dependent (BOLD) response of the voxels over time, and H is a matrix of column vectors describing the hemodynamic response of each voxel.

It is true that the Least Squares method is an unbiased estimator that is simple to implement and understand. However, it places certain limitations on the definition of the stimulus convolution matrix, X, because X^T X must be invertible. More importantly, due to the high temporal correlation that exists in fMRI noise, the Least Squares estimate is less efficient than other estimates, such as the Maximum Likelihood estimate discussed in the next section [4].
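A minimal MATLAB sketch of the Least Squares estimate in equation (1.9) follows. The event train, the "true" response used to generate the data, and the noise level are illustrative assumptions, not values from the thesis experiment.

    % Sketch of the Least Squares estimate in equation (1.9).
    % The event train, true response, and noise level are illustrative assumptions.
    rng(2);
    Ntp  = 149;  Nhdr = 16;
    stim = double(rand(Ntp,1) < 0.15);                 % hypothetical event indicator
    X    = toeplitz(stim, [stim(1) zeros(1,Nhdr-1)]);  % Ntp-by-Nhdr stimulus convolution matrix
    tau    = (0:Nhdr-1)';
    h_true = tau.^3 .* exp(-tau/1.5);                  % assumed "true" hemodynamic response
    h_true = h_true / max(h_true);
    Y    = X*h_true + 0.2*randn(Ntp,1);                % noisy measured voxel time course

    H_ls = (X'*X) \ (X'*Y);                            % least squares estimate, eq. (1.9)
    % Equivalently, H_ls = X\Y solves the same least squares problem.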
More importantly, due to the high temporal correlation that exists in fMRI noise, the Least Squares estimate is  13  less efficient than other estimates such as the Maximum Likelihood estimate discussed in the next section [4].  1.4.2 Maximum Likelihood Estimator  The maximum likelihood estimator, for a linear model with Gaussian noise, is optimal in the sense that it has the smallest variance, or equivalently the greatest estimator efficiency, defined in equation (1.4), among all unbiased estimators  ([2], [3]).  An  estimate of the hemodynamic response, HML, using the Maximum Likelihood method is given by  =(x c- xy x c- Y  H  r  ml  l  l  r  ]  (1.10)  As before, X and Y are known. The only new parameter in the equation is the noise covariance matrix, C [2]. This is a challenge to estimate because it varies across voxels, experimental runs, and subjects. However, a promising method for obtaining an accurate estimate of C is based on the original estimated hemodynamic brain response, HLS, obtainedfromthe Least Squares method. An error term, e, is defined as [3]  e = Y - XH  LS  (1.11)  This error term is modeled as a Gaussian random variable with zero mean and noise covariance matrix C. Under the assumption of temporally uncorrected noise, where C = a I, the Maximum Likelihood estimator is reduced to the Least Squares estimate [3]. 2  14  The Maximum Likelihood estimator can be taken a step further by incorporating a priori information about the expected shape of the hemodynamic brain response as a bias in the linear estimation. This requires carefully determining an Np-dimensional linear subspace, L, spanning all possible hemodynamic responses for a given experiment. Using this subspace, the hemodynamic response of an individual voxel, i, can be uniquely parameterized as  (1.12)  hj = Lpi  where L is an Nh x N matrix whose columns form an orthonormal basis for the subspace p  L and where Nh is the dimensionality of the embedding space while Np is the dimensionality f the subspace, and where the elements of p; are the projection of hj onto the corresponding basis vectors [2]. The resulting biased Maximum Likelihood estimate for the hemodynamic response is then [2]  = L{U X c; XLY U X C-JY T  H  BML  x  X  T  (1.13)  This biased estimator is generally more efficient than the unbiased version, especially if N « Nh. However, this method must be used with caution as the resulting estimates of p  the hemodynamic brain response will be inaccurate and distorted unless they lie within the specified subspace for every event type, in every voxel, in every subject of the experiment [3].  15  1.4.3 Blind Deconvolution Methods  Blind deconvolution methods are unique in that they do not require information regarding the input signal, x(t), in order to obtain an estimate of the hemodynamic response, h(t). In such algorithms both the input signal and the hemodynamic brain response are treated as unknowns, and the basis assumption is that all channels are excited by the same input signal. Such algorithms have been used in the estimation of kinetic parameters in Positron Emission Tomography (PET) ([20], [21]). In PET, a sequence of images is acquired over time to track the regional uptake and washout of a radioactive isotope injected into a subject. The measured PET signal is a graph of the concentration of the isotope in the brain tissue over time, and is referred to as a time-concentration curve (TC) [21]. 
This signal can be linearly modelled as  TC(t) = b(t).p(t) + n(t)  (1.14)  where b(t) is the blood input function, p(t) is the regional tissue response, n(t) represents additive noise, and the symbol "*" is the linear convolution operator ([20], [21]). The similarity between the above equation and equation (1.1) corresponding to fMRI analysis is apparent. Furthermore, as in fMRI, accurate modeling of the input signal is problematic. In fact, researchers in PET often resort to inaccurate, complicated and even health risky methods such as direct arterial blood sampling or cardiac blood pool to model the input function. As in fMRI, very little work has been done to estimate the brain's response without information regarding the input signal.  16  Researchers in PET that have extended blind deconvolution methods to medical imaging rely on a key assumption that opens the door to a wide range of algorithm choices. With certain assumptions, the data is fitted into a physiological two (or three) compartment model ([20], [21]). This assumption assumes the tissue response follows an exponential graph and reduces the problem to the determination of two unknown parameters. In other words, the tissue impulse response is given by  h(t) = k,e-' ° k  (1.15)  where the diagnostically clinical information provided by kt, the uptake constant, and ko, the washout constant is the only unknown ([20], [21]). This step allows the use of numerous algorithms such as the Cross Relation Method, Iterative Quadratic Maximum Likelihood, and the Eigenvector-Based Algorithm for Multichannel Blind Deconvolution. Of these three popular algorithms, only the latter is theoretically and computationally feasible without binding the shape of the hemodynamic brain response to a certain function group.  1.5 P R O B L E M STATEMENT  The ability to determine the brain's hemodynamic response without relying on an input function would be an extremely valuable asset in a large number of medical applications. Assuming the linear time-invariant model for the observed fMRI response ([1], [2], [4]), this work will attempt to estimate the hemodynamic brain response both on a regional  17  and on an individual voxel level, as well as provide an estimate of the input signal that excited the brain's response. The solution to this problem will be achieved using the Eigenvector-Based Algorithm for Multichannel Blind Deconvolution (EVAM) ([5], [6]) combined with Independent Component Analysis (ICA) ([17] [23], [18]). The resulting estimate of the input signal that EVAM produces could prove to be a valuable insight into the actual signal that triggered the brain during the experiment, and not the ideal signal that should have triggered it based on experimental observations. Furthermore, unlike the limitation imposed in the PET case ([20], [21]), compartment modeling will not be implemented and thus there will be no prior assumptions regarding the shape or order of the brain's response. The results of this system will be compared with the traditional Least Squares Method. They will also be used to analyze subject responses on a regional and voxel-level. Two different clustering methods, K-Means and Correlation-Based Clustering, will be used to identify active and inactive voxels for spatial mapping. Finally, a comparison between the responses of healthy subjects and subjects diagnosed with Parkinson's disease will be compared.  
CHAPTER 2: PROPOSED METHODS

2.1 DATA PREPROCESSING

This chapter summarizes the methods used to address the problem statement. Before the algorithm is applied to real fMRI data, it is necessary, as previously discussed, to denoise the data. This preprocessing stage is covered in Section 2.1 and is divided into five sub-procedures: (1) Principal Component Analysis, (2) Independent Component Analysis, (3) selection of task-related components, (4) projection of the original data onto the task-related components, and finally (5) the definition of regions of interest. The implementation of the complete procedure is then summarized in Section 2.1.6.

2.1.1 Principal Component Analysis

The first step in the preprocessing procedure is to reduce the dimensions of the data to make the analysis tractable. This reduction is achieved using Principal Component Analysis (PCA), a statistical technique that seeks dimension reduction by projecting the original data onto a few orthogonal linear combinations (principal components), which identify key patterns that highlight the similarities and differences in the data. Such patterns can be very difficult to find in data sets of high dimension, where the luxury of graphical representation is not available, and thus PCA is considered a powerful tool. [34]

Technically speaking, PCA is an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance by any projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on. PCA can be used to reduce the dimensions of a data set while retaining those characteristics of the data set that contribute most to its variance. This is achieved by keeping lower-order principal components and ignoring higher-order ones. [34]

PCA has the advantage of being the optimal linear transformation in the sense of mean square error. This advantage, however, comes at the price of greater computational requirements when compared, for example, with the discrete cosine transform. Unlike other linear transforms, PCA does not have a fixed set of basis vectors; its basis vectors depend on the given data set. [34]

Mathematically, Y is defined as a t x v data matrix of measured voxel time courses, where for the experiment used in this work t = 149 is the number of temporal observations in each voxel and v = 5074 is the number of voxels considered in the experiment. Principal Component Analysis will reduce the number of dimensions from v to m. To achieve this, the following steps are executed.

1. Start by ensuring that the data matrix has zero empirical mean; thus, the empirical mean of the distribution must be subtracted from the data set.
Form the feature vector, A, which is defined as a matrix whose columns are comprised of the first m eigenvectors. In other words,  A = [e,  (2.1)  Finally, the new, reduced data set will be:  Yi = YA  (2.2)  where Yi is the t x m reduced data matrix, and A is the v x m feature vector of eigenvectors used in the PCA reduction.  21  2.1.2 Applying ICA  As shown in Chapter 1, Independent Component Analysis's strong reputation in effectively identifying task-related components made it an ideal choice in the preprocessing stage. ICA is applied to the fifty PCA components identified in the previous step. This can be described in a single equation as [23]  S = YiA = YAA  (2.3)  where A is the mxm unmixing matrix, and Yj and Y are as defined in the Principal Component Analysis stage. Following this step, Infomax ICA was then used to separate the eigenimages into spatial independent components [23]. Finally, the corresponding' time courses were evaluated as  TC(t) = Y (A-Sy T  1  ]  (2.4)  where TC(t) is txm matrix of time courses, Yi is the mxt dimensionally-reduced data matrix defined by Principal Component Analysis, A is the ICA unmixing matrix, and S is the ICA source signal matrix. [23]  2.1.3 Selection of Task-Related Components  As discussed previously, various methods have been proposed to select which independent components are task-related or event-related and thus should be retained for further analysis. The selection of task-related components can be done by cross-  22  correlations with the underlying event-related experiment, or by simple visual comparisons of the static spatial map and the associated time course to determine whether they are consistent with prior knowledge about brain activations [15]. However, a major problem of using the static spatial map to select task-related components is the risk of overfitting, because ICA may fragment a large area of activation into several sparse spatial maps, each with similar corresponding time courses [15]. Thus, in this work, the former method is preferred.  From the time courses, TC(t), derived in the previous preprocessing step, only those that are deemed task-related will be selected. Naturally, task-related components will follow the signal model outlined in equation (1.2) as they will exhibit excitations according to the experimental design described in the model. To isolate these event-related components, the subsequent procedure is followed.  1. Denote the i ICA component by the vector f,. th  2. If fj is event-related, then it is expected to follow a similar formulation to the form  fi = Xei + ni  (2.5)  where X is the stimulus convolution matrix determined by the event sequence x(t), ej is the original event-related signal, and nj represents noise [15]. More specifically, X is a txt Toeplitz matrix representation of the time-domain input  23  signal, x(t) that models the experimental design. Using equation (2.3), each component vector is fitted to the model to estimate ei in the least-square sense. 3. Next, the relative fitting error, di, is defined as [15]  (2.6)  4. A graph of the relative fitting errors can then be used to determine the smallest errors whose corresponding ICA components should be retained, thus reducing the number of components from m to M. If the column components in S are sequentially ordered from most-task related to least-task related by the dj criterion, then this notion can be mathematically represented via the identity matrix I as M  [23]:  S! 
= S£2  (2.7)  where Q is a m x M matrix equal to (IM, 0).  2.1.4 Projection of the Original Data onto the Task-Related Components  Now that a matrix, Si, containing all the task-related components has been determined, the final step in the preprocessing procedure is to project the original data onto these selected components. This projection is achieved using a single equation.  Y^,y ,...,y ] 2  v  =  S^S,y S Y ]  T  (2.8)  24  where Y is the denoised data set, Si represents the M task-related components, and Y is the.original, noisy data set. It is important to note that the denoised data set has the same dimensions as the original data set, t x v.  2.1.5 Defining Regions of Interest  The discussion so far has involved applying the proposed methods to each voxel. However, before the data can be analyzed by a blind deconvolution method such as the Eigenvector Based Algorithm for Multichannel Blind Deconvolution, the data must be divided into a reasonable number of unknown and independent channels. One possibility is to treat the independent components from ICA as independent channels. Another choice is to define regions of interest based on medical knowledge and to treat each region of interest as a single channel. A region of interest (ROI) is a set of voxels grouped together based on medical understanding. Using the latter method of regions of interest, not independent components, proved to yield better results. Thus, eighteen regions of interest were kindly defined, manually, by Dr. Martin McKeown. Each region was treated as a single channel whose response is defined as  5>,(o R  '  (0 = J = L  F"  ( 2  '  9 )  where r;(t) is the time response representing region i, N is the number of voxels manually defined in region i, and yi(t) is the fMRI signal corresponding to voxel i in the denoised  25  data matrix Y. These regional responses are the key input into the blind deconvolution algorithm described in Section 2.2. In the computation, all eighteen regions were used; however, in the analysis of results only three of the eighteen regions were considered as they were labeled as medically significant by the neurologist. These regions are: (1) the right cerebral hemisphere, (2) the left supplementary motor area, and (3) the left primary motor cortex.  2.1.6 Implementation of Preprocessing Procedure  The preprocessing procedure discussed in the previous sections was used to denoise the data and prepare it for further analysis by the Eigenvector Based Algorithm for Multichannel Blind Deconvolution. The program consists of several M-files written in Matlab. Figure-2.1 is a flowchart designed to summarize the procedural implementation. On the left-hand column of the figure there are short headings describing each step in the procedure, and to the right of each step is the exact code used to implement it. The complete code for each function is encapsulated for simplification.  26  Corresponding Code  Step in the Program  ? Jo;a S g i f o r 3 a n e n O O 1 • ^•iti^fi^  :  J  2: Apply Principal Component Analysis (PCA) (reduced to 50 components)  [e j q , r e d u c e d  5. Select task-related components  :  [ w M a t y s M a t , aMat:} = m e x i c a ( r e d u c e d  3. Apply ICA on reduced components  4. 
Get corresponding time courses of ICA components  d a t a , res] -- pica (ROIs_raw, i }  *tec!pgii;re duced :  data  data)  i n v ( w M a t ;*-sMat)  IllSlllftl:® ^1  6; Denoise the observations by projection  Y  denoised  = S * i n v (S ' * S ) *<S ' * R O I s r a w ;  f;;|iii;;ir2>.s3v;;E4:v r 5 , r 6 , r 7 , r 8 , r 8 , r 9 , r l 0 , r l l :  :  pggg^^g;.(;Y^0endised)';. ,  ::  '  Figure-2.1: Flowchart Describing Implementation of the Preprocessing Stage  27  2.2  T H E EIGENVECTOR-BASED A L G O R I T H M FOR M U L T I C H A N N E L BLIND DECONVOLUTION  2.2.1 Introduction to the Algorithm  The Eigenvector-Based Algorithm for Multichannel Blind Deconvolution (EVAM) is a high-complexity, high-performance algorithm suggested in 1995 by Gurelli et al. [5]. EVAM can be used to deconvolve an unknown, possibly colored, Gaussian signal that is observed through two or more unknown channels. The algorithm is capable of estimating the unknown orders and root (pole and zero) locations of the channel transfer functions as well as provide an estimate of the unknown input signal that excited the channels. The estimates of the channel transfer functions are acquired based on eigenvalue decomposition of a sample correlation matrix, and the input signal is estimated by multichannel finite impulse response (FIR) filtering. [5]  There are other multichannel adaptive system identification algorithms in the literature that could have been implemented in this work, but many of these generally fail in accurately determining the roots and orders of the channels with sufficient accuracy. One important reason for this is that the performance surface may be highly ill-conditioned which leads to very slow convergence rates for stochastic gradient type algorithms. [5] Furthermore, when EVAM is compared with Separable Least Squares (SLS), one of the more traditional methods in the field, it proves to yield highly more accurate results. [5]  28  2.2.2 Unknown System Model  EVAM involves two models: one of the unknown system and one of the adaptive system. The unknown system can be modeled as shown in the block diagram in Figure-2.2. ni(t) C.Cz ) 1  r.(t)  1 YRi(t)  -> y R2(t)  x(t) n (t) P  CpCz) 1  y p(t) R  Figure-2.2: Block Diagram of the Unknown System Model  All the channels are fed by the same input signal x(t), which may be wide-sense stationary and possibly colored Gaussian, or even nonstationary. The input signal can also be finite or infinite in duration. It is assumed that the number of channels, p, is greater than one. [5] Each channel of the unknown system is denoted by Cj(z ) where _1  c,(z-V^nr L»,(z )  '" = W  (2-10)  and Nj(z"') and Dj(z ) are polynomials in z" of orders Nj and Dj, respectively. It is _1  1  assumed that the channel transfer functions Cj(z"') have no common poles or zeros, and  29  that both the orders and roots of the channels are unknown. The channel transfer functions may be non-minimum phase as well as non-causal. [5]  The channel output signals are denoted by rj(t) where again i=l,.. .,p. These noiseless channel outputs are assumed to be excited solely by the input signal x(t). These signals are corrupted by additive noise ni(t), i=l,.. .,p. The noise processes are all assumed to be zero-mean white sequences with variances Oj , where the variances are allowed to be different for each channel to allow for a realistic model. 
[5]  The noisy channel output signals, yi(t), are the only observations in this multichannel system, and are given by [5]  yni(t) = ii(t) + nj(t)  i = l,...,p  (2.11)  Using only these output signals, the algorithm must recover the original input signal, x(t), as well as determine the orders and roots of the channel transfer functions, Cj(z ), _1  i=l,.. .,p. The first step to achieving this is to develop a multichannel adaptive system based on the unknown system model.  2.2.3 The Adaptive System Model  EVAM is built on an adaptive system model where adaptive FIR channels are manipulated to minimize a cost function and then referred back to the unknown channel  30  functions. The basic multichannel adaptive system can be described by the block diagram in Figure-2.3.  YRi(t) yR2(t)  • E(W) YRp(t)  Figure-2.3: Block Diagram of the Adaptive System Model  The inputs to the adaptive system are the noisy channel outputs, yi(t), of the unknown system [5]. The adaptive channel transfer functions are finite impulse response (FIR) filters of order Oj. That is,  WAz- ) = w +w,. z- +... + yv z-°' }  1  lfi  lj0  1  i = l,...,p  (2.12)  where the coefficients w,^ are adaptable via the algorithm. For convenience, a vector Wj is defined to represent the polynomial in equation (3) using only its coefficients [5]. That is,  W, =[w w ,...,w ]  T  lfit  i;i  IJOi  (2.13)  If the coefficient vectors for all p channels are combined, then a vector of vectors, W, can be defined as [5] 31  W = [^,Wj ,...,W ?  (2.14)  T  2  The output of the adaptive system model is the error signal E(W). To define E(W), vectors Yj(t) are defined first as samples of the data [5]  I,(0 = b,(0^,0-l)v..,>',^-O,.)r  (2.15)  Combining these data vectors for all channels results in the composite data vector, Y(t), defined as [5]  Y(t) = [Z(t)lY(t) 2,-,Z(t) ] T  r  T  p  (2.16)  Then, for the two-channel case, the mean-squared error (MSE) is [5]:  E(K) = Y\K Y(t) T  2  (2.17)  (=0  where x is the length of the time interval. The error defined in equation (2.17), or other norms of it, can be minimized using an adaptive control method such as Recursive-Least Squares (RLS) or Least-Mean Squares (LMS). However, simulations show that stochastic gradient type algorithms, such as LMS, may have extremely poor performance, due mostly to large eigenvalue spread which leads to very slow convergence rates. On the other hand, although algorithms based on the least-squared method, such as RLS, are generally unaffected by the eigenvalue spread, they do not provide a method to estimate channel orders. [5]  32  2.2.4 Relation between Unknown and Adaptive Channel Roots  There is a simple relation between the roots of the unknown channels in the Unknown System Model and the roots of the adaptive channels in the Adaptive System Model. If the adaptive channel orders are chosen such that,  Oi = N + Di 2  and 0 = Ni + D 2  (2.18)  2  then the unique family of solutions for Wi(z ) and W^z") that will result in an error _1  1  value of zero, E(W) = 0, will be given by [5]  W (z- ) = 2  l  (2.19)  -aN (z- )D (z- ) l  x  2  1  where a is an arbitrary constant. Thus, if some vector W can be determined such that E(W) = 0, it would be trivial to obtain, by factorization, the unknown channel transfer functions, Cj(z~'). However, EVAM assumes that the channels orders, Ni, N2, Di, D2, are unknown and thus the adaptive filter orders cannot be chosen as in equation (2.19). 
Instead, EVAM assumes that the channel orders of the adaptive system can be overestimated such that [5]  Wi = N2 + Di + K,  and W = N i + D + K 2  2  2  (2.20)  where Ki and K2 are positive constants that account for the extraneous zeros ofWi(z') and W2(z"') respectively. Now with an overestimation of the orders, a similar relation as that in equation (2.19) is needed between the roots of the unknown system and that of the  33  adaptive system. To start, we note that the error signal will now have a measure of zero for the following set of adaptive channel transfer functions [5]:  W (z- ) =  aN (z- )D,(z- )0 (z-')  l  l  l  2  l  l  (2.21)  W (z ) = -aN (z~ )D (z )G (z ) _1  2  _1  l  x  _1  2  2  where a is an arbitrary constant, and the polynomials G^z") and 02(z"') have orders Ki 1  and K2 respectively. These polynomials have equal factorizations except for the fact that the one with the larger order has all of its extraneous zeros at the origin. Without loss of generality, assume that K2 > Ki, and define AK = (K2 - Ki). Since the polynomials have equal factorizations except for the extraneous zeros at the origin, we can then write [5]:  0 (z- ) = e (z- )0 (z- ) 1  1  2  1  (2.22)  1  AK  where 0 K (Z" ) is a polynomial of order AK having all its roots at the origin. This then 1  A  allows us to rewrite the family of solutions given in equation (2.21) as [5]:  W (z->) = l  aN (z-')D (z- )0 (z- ) 2  l  l  l  1  W (z- ) = -aN, (z- )D (z- )6 (z~ )0 (z~ ) 1  2  1  1  2  X  ]  AK  (2.23)  ]  As before, in the case where the channel orders were known, the error signal will have a measure of zero only if the transfer functions of the adaptive channels are of the form in equation (2.23) where there will be K common zeros appearing at the same, arbitrary locations. The other zeros will be at different locations since EVAM assumes that there are no common poles or zeros between channels. [5]  34  In order to obtain the set of solutions of the form in equation (2.23), we define two important matrices: the data matrix, A , and the sample correlation matrix, R [5]: y  y  ^=[^ (^l/(^-l)v..,^ (l)] [^/(r),^ '(r-l),...,7/(l)] r  r  r  7  (2.24)  and  (2.25)  y = l >  R  A  A  where Y R(t) is the composite data vector defined in (2.16). In addition, we define K = min{Kl, K2}. Mathematically, R will have exactly (K+l) zero eigenvalues and y  (K+l) orthonormal eigenvectors corresponding to these eigenvalues. We denote these eigenvectors by g j , i=l,...,(K+l). [5]  The eigenvectorsflj,i=l,.. .,(K+1) are partitioned into p blocks of equal length as in equation (2.14). These partitions are denoted byfly,i = 1,...,K+1, j=l,...,p. The equivalent polynomials of these vectors are defined as [5]  QiA ~ > l j-\> ~ >-> ~° ] z  ly  =  z  X  z  1  i  i = l,.~,K + l and j = l,...,p  (2.26)  For the two channel case, these polynomials can then be rewritten as [5]:  fi (z-)=^ .[i 2-',...,z-^r IJ  l  >  Q {z- ) = ql \,z-\...,z- ] u2  x  2  01  (2.27) i=l,...,* + l  35  This set of transfer functions provides a basis for the family of solutions for Wi(z") and W2(z"') given in (2.23). Thus, for a given i e {1,.. .,K}, the minimal roots of Qjj(z") and 1  Qi,2(z"') will be at different locations, while the extraneous zeros will be at exactly the same locations in the complex-plane. This corollary is the basis of the EigenvectorBased Algorithm for Multichannel Blind Deconvolution. Further mathematical proofs of the above steps can be found in [5].  
2.2.5 The Elimination of Extraneous Roots Using A Priori Information  After obtaining a set of solutions for the adaptive channel transfer functions, the extraneous roots in the solution must be eliminated. However, no information is known about the channels, including their orders. In previous works [6], the channel orders were determined by limiting the brain response tofita certain physiological model; thus, conforming it to a certain order. Since we chose not to apply this limitation, a new method was required to determine the filter order. At this point in the algorithm, two new inputs are entered into the program: (1) a range offilterorders to be tried, and (2) a rough estimate of the general shape of the input signal, x(t). This latter information about the input signal is used to iteratively reduce the orders of the adaptive channels until the extraneous roots are eliminated.  When estimating the hemodynamic brain response of a subject, the input signal is not completely unknown and mysterious. The time at which the experimental stimulations occur, and the strength of each stimulation are known and can be used to model the input  36  function, x(t), as a sum of time-shifted and scaled delta functions centered at the onset of each event. This estimate can easily be obtained from the experimental setup, and a sample of such a function was previously graphed in Figure-1.1. However, it is important to note that this model of the input signal is only an estimate that is based on the expected timing and strength of the subject's reactions based on the external environment and the design of the experiment. This expectation, although similar, is most certainly different than the actual signal that excites the brain. Contrary to other methods in the literature ([15], [2], [3]), this estimate of the input signal will be indirectly used in the algorithm only as useful a priori information to optimize the solution, and not to directly estimate the hemodynamic brain response.  Naturally, EVAM's estimate of the input signal, x(t), should follow a similar shape to the model. It should be excited at similar time intervals and with similar strengths. To achieve this criterion, the algorithm iteratively reduces the orders of the adaptive channel transfer functions until the difference between EVAM's estimated input signal and the model of the input signal achieves a minimum. The difference is measured based on the normalized projection misalignment (NPM) value which is defined as  ,X X,  X~(-jr-)x  NPM =  X X  (2.28)  where |...| represents the norm of the enclosed vector. NPM values closer to zero are preferred. Thisfinalstep of comparing between x(t) and x(t) to reduce the order of the  37  channel transfer functions, and thus eliminate their extraneous roots, serves several important advantages. Firstly and most importantly, a certain model for the tissue response is no longer assumed as in previous works. For example, in Riabkov et al. [20], a two-compartment tissue model was assumed and thus the order of the channels was automatically set to two. However, using this method, there is no longer a need to limit the order of the channels tofita certain model, and any errors due to the incorrect choice of the channel orders are eliminated. This is especially important when we note that the underestimation or overestimation of the orders has a strong and direct effect on the estimates of the channel responses. Figure-2.4 shows an example to illustrate this point.  
[Figure-2.4 panels: "Result of Choosing the Correct Filter Order", "Result of Underestimating the Correct Filter Order (13 Rather than 16)", and "Result of Overestimating the Correct Filter Order (19 Rather than 16)"; legend: Channel's Actual Time Response, Channel's Estimated Time Response; x-axis: Time (seconds)]

Figure-2.4: Effect of Choosing Incorrect Filter Order

Thus, by choosing the correct filter order, the algorithm is capable of exactly reconstructing the channel's frequency response. On the other hand, by choosing an order of 13 rather than 16, a drastic difference is observed, even though the error in the filter order is only about 20%. Similarly, an overestimation of the order also results in inaccurate estimates, as seen in the final graph.

Besides the ability to determine the correct filter order for each channel, and thus eliminate any errors due to an incorrect filter order, this step in the algorithm offers a number of other important advantages. As mentioned in previous works [6], it is important to make appropriate use of the a priori information available about the input signal; it would in fact be wasteful not to use this information to enhance the results. It is also important to note that because the modeled input signal x(t) is limited in its accuracy as an experimental expectation, it is used in the algorithm with corresponding limitations. It is not directly used to calculate the brain response as in non-blind identification methods, and serves only as a guide that helps enhance the results by eliminating extraneous roots.

2.2.6 Algorithm Implementation

The sections above described the theory behind the modified version of the Eigenvector Based Algorithm for Multichannel Blind Deconvolution. This section is concerned with taking the theory described above and implementing it as several M-files in Matlab. To start, the user must provide the algorithm with the five inputs listed below and described in more detail in the following paragraph:

1. the denoised channel outputs of the unknown system, Y(t),
2. an overestimation of the channel lengths, N_0,
3. an estimated difference, \Delta N_0, between the overestimation and the actual channel lengths,
4. a range of orders to be tried by the algorithm,
5. an estimate of the overall shape of the input signal, x(t).

The first input is obtained from the fMRI signals after passing through the preprocessing procedure described in the previous chapter. The denoised data is fed into the algorithm as a composite data vector as defined in equation (2.16). The selection criterion for the second input, N_0, is still under investigation. However, it must be greater than the largest expected order, which requires some a priori information regarding the maximum value the channel orders can take [5]. In other words, N_0 > max{O_1, O_2, ..., O_p}, where p is the number of channels in the system. The order reduction amount, \Delta N_0, is used to reduce the impulse responses of the adaptive channels such that the new length will be N_1 = N_0 - \Delta N_0. The selection criterion for \Delta N_0 is arbitrary. However, it can be better determined by observing a plot, in ascending order, of the eigenvalues of the sample correlation matrix, R_y, and checking for the number of smallest eigenvalues [5].
Finally, the last input into the system is the sum of time-shifted and scaled delta functions used to describe the timing and strength of the experimental stimulations. An example of such a function was previously illustrated in Figure-1.1.

Using the five inputs described, the program will execute the steps outlined below in order to provide the user with an estimate of: (1) the input signal that stimulated the brain, x̂(t), (2) the channel orders of the unknown system, O_i, and (3) the channel frequency responses, h_i(f), where i = 1, ..., p.

1. To start, set the initial lengths of the adaptive filters to N_0.

2. From the composite data vector, Y(t), calculate the (2N_0 x 2N_0) sample correlation matrix, R_y, as defined in equation (2.25).

3. Find the (K_0+1) eigenvalues of the sample correlation matrix, R_y, that are significantly smaller than the other eigenvalues, where K_0 = \Delta N_0 + 1. In this work, "significantly smaller" was defined in the code as less than 0.1% of the maximum eigenvalue.

4. Denote the eigenvectors corresponding to these (K_0+1) smallest eigenvalues as q_i, where i = 1, ..., K_0+1.

5. Define a matrix, M, by stacking sub-matrices built from these eigenvectors [5]:

M = [ M_{1,1}       M_{1,2}
      M_{2,1}       M_{2,2}
      ...           ...
      M_{K_0+1,1}   M_{K_0+1,2} ]    (2.29)

where each sub-matrix M_{k,j} is a banded, Toeplitz-structured matrix whose columns contain shifted copies of the entries of the eigenvector q_k, padded with zeros (2.30), and where q_k(j) denotes the j-th entry of the vector q_k, with the first entry starting at 1.

6. Define a new matrix F = M^T M, and calculate the eigenvector s corresponding to the smallest eigenvalue of the matrix F.

7. Partition the vector s into p vectors of equal length, where p is the number of channels in the system. The length of each partition will then naturally be N_1. These partitions are the coefficients representing the adaptive channels' transfer functions, h_i, for i = 1, ..., p.

8. After zero padding and applying the inverse of the filter in the frequency domain, obtain an estimate of the input signal, x̂(t), using equation (2.31) [5]:

x̂(t) = y_1(t) * h_1^{-1}(t) + y_2(t) * h_2^{-1}(t)    (2.31)

where * denotes convolution and h_i^{-1}(t) is the impulse response of the inverted, zero-padded channel filter.

9. Loop through steps three to eight for all orders, O_i, within the range defined by the user. Retain the channel order, O_i, that results in the strongest agreement between x̂(t) and x(t). The strength of the agreement is measured by the NPM value defined in equation (2.28). This order, O_i, and its corresponding channel transfer functions and estimated input signal will be the outputs of the system.

10. The results from the EVAM algorithm can be extended further by combining them with the Least Squares Method previously described in Section 1.4.1. In this case, the input signal obtained as an output of the EVAM algorithm, x̂(t), is used along with the denoised fMRI observations, Y, to determine the hemodynamic brain response of each individual voxel, according to equation (1.9).

2.2.7 Algorithm Testing

To test the robustness of the proposed algorithm, a set of simulated data was created. This data was produced by: (1) randomly creating four fictitious channels of order 16 each, (2) using the estimate of the input signal, x(t), found in a real data set and graphed in Figure-1.1, and (3) convolving each channel with the input signal x(t) to obtain four signals used as input into the system. Below is a graphical illustration of the procedure used to create the simulated data, using Channel Two as an example.
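The same procedure can be written compactly in Matlab. The fragment below is an illustrative sketch only: the event onsets, amplitudes, and random channel coefficients are placeholders rather than the values used in this work, and the noise-addition step anticipates the robustness tests described later in this section. The graphical illustration follows the sketch.

    % Sketch: simulated data from random channels convolved with a modeled input signal
    T      = 149;                                    % number of samples (TR intervals)
    x      = zeros(T, 1);                            % input modeled as scaled, shifted delta functions
    onsets = [10 22 41 63 88 110 131];               % hypothetical event onsets (sample indices)
    amps   = [0.05 0.25 0.50 0.25 0.05 0.50 0.25];   % hypothetical event strengths
    x(onsets) = amps;
    p = 4;                                           % number of fictitious channels
    L = 17;                                          % impulse response length (order 16)
    H = randn(L, p);                                 % random channel impulse responses
    Y = zeros(T, p);
    for j = 1:p
        yj     = conv(x, H(:, j));                   % noise-free channel output
        Y(:, j) = yj(1:T);
    end
    % Optionally corrupt the outputs with white Gaussian noise at a chosen SNR,
    % where SNR = 10*log10(signal power / noise power)
    SNRdB = -15;
    for j = 1:p
        sigPow   = mean(Y(:, j).^2);
        noisePow = sigPow / 10^(SNRdB / 10);
        Y(:, j)  = Y(:, j) + sqrt(noisePow) * randn(T, 1);
    end

Feeding Y into the algorithm together with an overestimated channel length and a range of candidate orders then exercises exactly the steps listed above.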
[Figure-2.5 panel: Second Channel's Simulated Brain Response (h2(t)); x-axis: Time (seconds)]

Figure 2.5: Graphical Representation of the Data Simulation Procedure

If the four signals created above are input into the system as the channel outputs, Y(t), along with an overestimation of the channel lengths, N_0 = 25, an estimated difference between the overestimation and the actual channel lengths, \Delta N_0 = 3, as well as a range of orders to be tried by the algorithm (5 to 23), the system is able to perfectly reconstruct the channel signals (NPM = 10^{-15}) and closely reconstruct the input signal (NPM = 0.242). The results are shown in the following two figures, using Channel Two as an example.

[Figure-2.6 panel: Second Channel's Actual & Estimated Brain Response (h2(t))]

Figure-2.6: Comparison between the Actual and the Estimated Brain Response and Input Signal in Channel Two of the Simulated Data

[Figure-2.7; x-axis: Time (seconds)]

Figure-2.7: Comparison between the Actual and the Estimated Measured Response in Channel Two of the Simulated Data

More tests were performed to assess the robustness of the program to noise. The noise parameters were chosen to yield a signal-to-noise ratio (SNR) of approximately -15 dB [23] in order to replicate the SNR level observed in real fMRI data, where SNR is defined as

SNR = 10 \log_{10} ( signal power / noise power )

For this highly corrupted signal, the system is less efficient but still performs reasonably well, as shown in Figure-2.8, where the normalized projection misalignment between the actual and estimated brain response is 0.0155. Also, the correlation coefficient between the estimated and actual brain response is found to be 0.69.

Figure-2.8: Comparison between the Actual and the Estimated System Responses in Channel Two of the Simulated Data with SNR = -15 dB

For comparison, a similar test was performed after adding white Gaussian noise with an SNR of +15 dB, in which case the program is able to closely reconstruct the channel signals (NPM = 0.003). The correlation coefficient between the estimated and actual brain response is also found to be very high, at 0.95. Furthermore, in all test cases, including the ones with highly corrupted signals, the program is able to return the correct filter order.

[Figure-2.9 panels: Original Data and Noisy Data; Actual & Estimated Brain Response (h2(t)); Actual & Estimated Input Signal (x(t)); Original (Without Noise) & Estimated Convolved Signal (x(t)*h(t)); x-axes: Time (seconds)]

Figure-2.9: Comparison between the Actual and the Estimated System Responses in Channel Two of the Simulated Data with SNR = +15 dB

After verifying that the program accurately recovers the input signal, x̂(t), the correct channel orders, O_i, and the channel responses, h_i, for all channels i, in both the absence and the presence of noise, and does so with reasonable and acceptable correlation coefficients and NPM values, the same system was applied to real fMRI data sets for analysis.

CHAPTER 3: RESULTS AND DISCUSSION

3.1 DESCRIPTION OF DATA SET

The real data sets used in this work were provided courtesy of Dr. Martin McKeown from the Neurology Department at the University of British Columbia. There are two sets of data, one from a healthy subject and one from a subject inflicted with Parkinson's disease. Parkinson's disease is a chronic and progressive degenerative disorder of the nervous system that impairs a patient's motor skills and speech.
The disease is characterized by muscle rigidity, tremor, a slowing of physical movement, and, in extreme cases, a loss of physical movement and language problems. The primary symptoms are a result of decreased stimulation of the motor cortex by the basal ganglia, which is normally caused by insufficient formation of dopamine. Parkinson's disease is considered idiopathic; however, in some cases it is a result of drugs, genetic mutation, or head trauma. [19]

In the current event-related fMRI study, patients inflicted with Parkinson's disease and age-matched controls squeeze a pressure-responsive bulb to match three different levels of force amplitude (5, 25, and 50% of maximum voluntary contraction) as quickly and as accurately as possible. The participants are guided by a screen in the scanner which displays bar graphs representing the desired and actual force amplitudes. The experiment duration is 298 seconds (149 TR intervals), each random event lasts for two seconds, and the inter-stimulus interval (rest period) is ten seconds. Finally, it is important to note that the x-axis label in all the figures of Chapter 3 refers to time in terms of the sampling index, not absolute time.

3.2 OUTPUTS OF THE PROPOSED ALGORITHM

The algorithm described in Chapter 2 was applied to the real data set described in Section 3.1. There are three main outputs of the algorithm: (1) an estimate of the input signal that excited the brain, (2) an estimate of the hemodynamic brain response in each brain region, and (3) an estimate of the channel orders. The estimated input signal is shown in Figure-3.1 along with the modeled input signal for comparison. As expected, we note a difference between the two signals, as one represents the signal that presumably excited the brain while the other is an estimate of the signal that actually stimulated the brain.

[Figure-3.1 panel: Input Signal Modeled by Experimental Setup]

Figure-3.1: Comparison between the Modeled Brain Excitation Signal and the Estimated Brain Excitation Signal

The hemodynamic brain response estimated for each region is discussed in the following section. However, for reference, a sample of the estimated convolved signal and the measured and denoised fMRI signal are graphed in Figure-3.2. As seen, the proposed method closely follows the original fMRI signal.

[Figure-3.2; x-axis: Time (seconds)]

Figure-3.2: Actual and Convolved fMRI Response

3.3 COMPARISON OF EVAM RESULTS WITH LEAST SQUARES ESTIMATES

The hemodynamic brain response obtained using the Eigenvector Based Algorithm for Multichannel Blind Deconvolution is compared with the hemodynamic brain response obtained using the traditional and popular Least Squares Method discussed in Section 1.4.1. A significant improvement is noted in the actual shape of the estimated hemodynamic brain response. In the Least Squares case, spurious multiple peaks are present. This is in contrast to the response estimated by EVAM where, as expected, there is a single peak corresponding to the increase in blood flow that meets the oxygen demand resulting from the subject's response to the experimental stimulus. This period of increased blood flow reaches a peak and is then followed by a fall back to baseline, which is often accompanied by a small post-stimulus undershoot. This medical understanding is well depicted in virtually all the EVAM responses. Figure 3.3 illustrates this comparison between the two methods for six of the eighteen regions of interest.
The red signal corresponds to the response estimated by EVAM and the green signal is the response estimated by Least Squares.

[Figure-3.3 panels include ROI 5, ROI 6, and ROI 9; x-axis: Time (x2 Seconds)]

Figure-3.3: Comparison Between the Estimated Brain Response Using EVAM and Using the Least Squares Method

3.4 CLUSTERING OF ACTIVE AND INACTIVE VOXELS

The hemodynamic brain response obtained in the previous sections can be analyzed to determine the probability of activation in individual voxels. Previous work involves clustering directly on the fMRI time series using a variety of techniques [12]. Some of these techniques include the t-test implemented in SPM, the non-parametric Kolmogorov-Smirnov test, and various types of clustering methods including crisp clustering such as k-means, fuzzy clustering such as c-means, and soft competitive learning such as neural gas [12]. In a study by Barth et al. [10], fifty runs were performed on simulated data where, among several parameters, the correlation coefficient between the center of the found activation cluster and the center of the known activation region was compared for different clustering methods. They found that neural gas and k-means clustering exhibited the best performance compared to six other clustering methods. Although the k-means algorithm is a simple method with fast convergence, it has a number of limitations based on its underlying parametric assumptions. Thus, recently, many researchers have moved towards using the correlation between the measured fMRI signal and the signal modeling the experimental stimuli in order to determine whether an individual voxel is active [12]. However, all these contributions perform clustering directly on the fMRI data series, and a serious limitation of this is the high noise level in the raw data, which leads to stability problems and which does not necessarily group data according to similarity in response to experimental stimuli.

Thus, in this work we suggest extending two of the more powerful clustering methods previously suggested to identify active and inactive voxels based not on the measured fMRI signal, but on the estimated hemodynamic brain response. One of the main motivations behind this choice is an observation of the individual voxel responses estimated from the algorithm described in Chapter 2. The voxel response is either a shape resembling that of the expected brain response, or another shape resembling a random, small-magnitude signal. To illustrate this observation, the hemodynamic brain responses of six random voxels are graphed below. It should be noted that all six voxels are located within a single brain region. This suggests that activity should not be labeled and studied simply on a regional basis, but at the more detailed voxel level. In Figure-3.4, the first row shows samples of active voxels that exhibit the shape of the expected hemodynamic brain response previously discussed in Section 3.3. The second row illustrates the variety of low-magnitude, random signals characteristic of the inactive voxels.

Figure-3.4: Illustration of Active and Inactive Voxels

Thus, a quick visual inspection of the estimated hemodynamic brain responses gives immediate insight into voxel activity. For comparison, the measured fMRI responses of these voxels are graphed in Figure-3.5 to exhibit the difficulty of obtaining such an insight from the raw fMRI data. This is true even after the preprocessing procedure.
In Figure-3.5, the response from voxel 4, identified as active in the previous figure, and the response from voxel 53, identified as inactive in the previous figure, are shown. Despite this significant improvement, an automatic method is still required to cluster the data, as it is impractical to go through thousands of voxels per data set to visually and manually identify active voxels.

Figure-3.5: Difficulty of Visually Identifying Active and Inactive Voxels Based on the Measured fMRI Time Response

3.4.1 K-Means Clustering

The first method used to cluster the active and inactive voxels is the K-Means Clustering Method, because of its popularity in fMRI analysis as well as its efficient performance as shown by Barth et al. [10]. K-Means Clustering is a method that partitions a given data set into k mutually exclusive, single-level clusters. As described in Matlab, each observation in the data set is treated as an object having a location in space. Based on these locations, the K-Means Method uses a two-phase iterative algorithm to minimize the sum of point-to-centroid distances, summed over all k clusters. The algorithm starts by finding a partition in which objects within each cluster are as close to each other as possible, and as far from objects in other clusters as possible. Then, each cluster in the partition is defined by its member objects and its centroid. The centroid for each cluster is the point to which the sum of distances from all objects in that cluster is minimized. Finally, an iterative algorithm is used to move objects between clusters until the sum of distances from each object to its cluster centroid is minimized over all clusters.

The K-Means Clustering Method is applied to each of the eighteen brain regions to identify the active and inactive voxels in each. Of the eighteen regions, only sixteen could be clustered because two regions contained fewer than fifteen voxels each, and thus did not contain enough voxels for clustering using the K-Means method. In three brain regions, the clustering was effective in identifying two unique clusters that exhibit clearly different mean responses. In these regions, a manual check reveals that the rate of incorrect classifications, where an inactive voxel is classified as active or vice versa, does not exceed 20%. To illustrate, the mean of the hemodynamic brain responses in each cluster of Region 15 is graphed in Figure-3.6. As expected, one cluster exhibits the shape of the hemodynamic brain response, while the other cluster shows a random signal response representing inactivity.

[Figure-3.6; x-axis: Time (x2 Seconds)]

Figure 3.6: Mean Response of the Two Clusters in Region Fifteen where the K-Means Clustering Method was Effective

However, using the K-Means Clustering Method, thirteen of the eighteen regions showed an ineffective separation of voxels. In these regions, the means of the two clusters are so similar that the active cluster is indistinguishable from the inactive cluster. In such cases, the rate of incorrect classifications is higher than 50%. One such example is displayed in Figure-3.7. Due to this high rate of inaccuracy, a new method was explored.

[Figure-3.7; x-axis: Time (x2 Seconds)]

Figure 3.7: Mean Response of the Two Clusters in Region Five where the K-Means Clustering Method was Ineffective

3.4.2 Correlation-Based Clustering

The correlation coefficient between two random variables indicates the strength and direction of a linear relationship between them.
Mathematically, the correlation coefficient, \rho_{XY}, between two variables X and Y, with corresponding expected values \mu_X and \mu_Y, is obtained by dividing their covariance by the product of their standard deviations, \sigma_X and \sigma_Y, as shown in equation (3.1):

\rho_{XY} = \frac{\mathrm{cov}(X,Y)}{\sigma_X \sigma_Y} = \frac{E((X - \mu_X)(Y - \mu_Y))}{\sigma_X \sigma_Y}    (3.1)

where E(...) is the expected value operator and cov(...) represents covariance. Its values range between -1 and +1, and by the Cauchy-Schwarz Inequality the correlation coefficient cannot exceed one in magnitude. Values closer to +1 indicate an increasing linear relationship while those closer to -1 represent a decreasing linear relationship. Recent studies consider the absolute value of the correlation coefficient between the measured fMRI time response and the modeled experimental input signal to determine voxel activation.

In this work, the correlation coefficient for each voxel is a measure of the linear dependence between the voxel's estimated hemodynamic brain response and the mean brain response of the region of interest it is contained within. At first, the hemodynamic brain response estimated by EVAM for the region was used instead of the mean response of the region of interest; however, because most of the voxels in a region are active, the latter choice yielded either the same or better results. There is an apparent improvement in the clustering based on correlation as compared to the clustering based on the K-Means method, as thirteen regions showed an effective separation of active and inactive voxels, in comparison with three regions in the K-Means case. Furthermore, the correlation clustering method was able to cluster the small regions which were a problem in the K-Means method. Figure-3.8 is a visual comparison between the two methods for two example regions, where the mean response of each cluster within the region is plotted for each clustering method.

[Figure-3.8 panels include "Result of K-Means Clustering in ROI 13"; x-axis: Time (x2 Seconds)]

Figure 3.8: Mean Response of the Two Clusters in Region Thirteen for the K-Means and Correlation Clustering Methods

For reference, the example of poor K-Means clustering in region five shown in the previous section is plotted again here with its corresponding correlation-based clustering. Although the correlation clustering in this example is not as efficient as in other regions, it illustrates the improvement in results that exists even in difficult regions.

[Figure-3.9 panels include "Result of K-Means Clustering in ROI 5"; x-axis: Time (x2 Seconds)]

Figure 3.9: Mean Response of the Two Clusters in Region Five for the K-Means and Correlation Clustering Methods

3.4.3 Spatial Mapping of Active and Inactive Voxels

The results from the clustering methods can be analyzed on a spatial level by mapping the active and inactive voxels onto the magnetic resonance image of the subject's brain. The three regions of biological interest in this study, the right cerebral hemisphere, the left supplementary motor area, and the left primary motor cortex, are the ones chosen for spatial analysis. One comprehensive slice is chosen for each region, and the active voxels are shown in yellow while the inactive voxels are shown in blue. Before the three regions of interest are shown, a figure showing all the active and inactive voxels for two comprehensive brain slices is shown.
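The labels behind these spatial maps come directly from the correlation-based clustering just described. The following Matlab fragment is a minimal sketch of that labelling step; the variable names H and regionMean, and the 0.5 cutoff, are illustrative placeholders rather than the exact choices used in this work. The spatial maps follow it.

    % Sketch: correlation-based labelling of voxels within one region of interest
    % H          : (numVoxels x L) matrix; row v is the estimated hemodynamic response of voxel v
    % regionMean : (1 x L) mean estimated response over all voxels in the region
    numVoxels = size(H, 1);
    rho = zeros(numVoxels, 1);
    for v = 1:numVoxels
        C      = corrcoef(H(v, :), regionMean);   % 2x2 sample correlation matrix
        rho(v) = C(1, 2);                         % correlation with the regional mean response
    end
    isActive = rho > 0.5;          % hypothetical cutoff separating active from inactive voxels
    % For comparison, a two-cluster K-Means partition of the same responses:
    % labels = kmeans(H, 2);       % requires the Statistics Toolbox

Either labelling can then be painted onto the anatomical slice, colouring the voxels flagged as active in yellow and the remaining voxels in blue.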
Figure 3.10: All Active and Inactive Voxels in the Comprehensive Brain Slices 16 and 20

Figure 3.11: Active and Inactive Voxels in the Right Cerebral Hemisphere

Figure 3.12: Active and Inactive Voxels in the Left Supplementary Motor Area

Figure 3.13: Active and Inactive Voxels in the Left Primary Motor Cortex

The resulting images are promising as they show a clustered, not sparse, distribution. In other words, as expected based on current medical understandings, the voxels responding to the experimental stimuli are spatially grouped and not distributed across the brain. We also notice complementary reactions in the right and left hemispheres of the brain, as shown in the first two images.

3.5 COMPARISON BETWEEN HEALTHY AND PARKINSON'S DISEASED PATIENT

A preliminary comparison was made between a Parkinson's diseased patient and a healthy subject. The comparison is illustrated in Figure-3.14, where the left column indicates the responses of the diseased patient in the right cerebral hemisphere, the left supplementary motor area, and the left primary motor cortex respectively, while the right column shows the corresponding reactions in the healthy subject. Primarily, we notice a major difference in the left primary motor cortex, where there is very little activation in the diseased patient. Interestingly, symptoms of Parkinson's disease are thought to be a result of decreased stimulation of the motor cortex by the basal ganglia, which is normally caused by insufficient formation of dopamine. Although the results support this theory, no major conclusions can be made at this stage due to the risk of incorrectly attributing inter-subject variability to differences due to Parkinson's disease. However, these promising results are an encouraging step toward further research in this area.

Figure 3.14: Preliminary Comparison between Parkinson's Diseased Patient Responses (Left Column) and Healthy Subject Responses (Right Column)

CHAPTER 4: CONCLUSION

The ability to determine the brain's hemodynamic response without relying on an input function would be an extremely valuable asset in a large number of medical applications, and today, functional Magnetic Resonance Imaging (fMRI) is one of the leading methods in developing a better understanding of the human brain. Assuming the linear time-invariant model for the observed fMRI response ([1], [2], [4]), this work provided an estimate of the hemodynamic brain response both on a regional and on an individual voxel level, as well as an estimate of the input signal that excited the brain's response. This was achieved using the Eigenvector-Based Algorithm for Multichannel Blind Deconvolution (EVAM) ([5], [6]) combined with Independent Component Analysis (ICA) used for preprocessing [23].

Thus, in contrast to previous literature, the brain's excitation signal was not modeled according to the experimental setup, but was instead treated as an unknown in the blind deconvolution problem. The resulting estimate of the input signal produced by the proposed method could truly help eliminate the issues arising from inaccurately modeling the excitation signal. It could also provide valuable insight into the actual signal that triggered the brain during the experiment, rather than the ideal signal that should have triggered it based on experimental observations.

In this work, no physiological models or prior assumptions regarding the shape or order of the brain's response were made.
When compared to non-blind identification algorithms traditionally used in the literature, the results show a significant improvement, as the shape of the hemodynamic brain response conforms with current medical understandings.

Furthermore, the estimated hemodynamic brain response was used as a basis to determine active and inactive voxels. Two clustering methods, K-Means Clustering and Correlation-Based Clustering, were compared, and Correlation-Based Clustering was found to be superior and was thus used to spatially map the active and inactive voxels in each brain region. Spatial maps of important brain regions yielded promising results where spatial sparseness was not characteristic of the images, and regions of activity and inactivity were closely clustered. Finally, a preliminary comparison between a healthy subject and a subject inflicted with Parkinson's disease yielded promising differences, especially in the left primary motor cortex, where the diseased patient showed little activation. Although this observation cannot be generalized at this stage, it interestingly conforms with current medical hypotheses regarding the cause of the disease symptoms, and these promising results are most certainly an encouraging step to proceed with further research in this area.

Future Considerations

The most important next step is to apply the proposed algorithm to more fMRI data sets in order to compare the responses of healthy subjects and patients suffering from Parkinson's disease. This will allow meaningful medical conclusions to be made as trends across the individual comparisons are observed. Other areas for future consideration include exploring data reduction methods that are more efficient than Principal Component Analysis. Other, more efficient clustering methods may also be analyzed.

REFERENCES

[1] Boynton GM, Engel SA, Glover GH, Heeger DJ. Linear systems analysis of functional magnetic resonance imaging in human V1. J Neurosci, 16:4207-4221, 1996.

[2] Dale AM, Buckner RL. Selective averaging of individual trials using fMRI. Hum Brain Mapp, 5:329-340, 1997.

[3] Dale AM. Optimal experimental design for event-related fMRI. Hum Brain Mapp, 8:109-114, 1999.

[4] Friston KJ, Jezzard P, Turner R. Analysis of functional MRI time series. Hum Brain Mapp, 1:153-171, 1994.

[5] Mehmet I. Gurelli. EVAM: An Eigenvector-Based Algorithm for Multichannel Blind Deconvolution of Input Colored Signals. IEEE Transactions on Signal Processing, Vol. 43, No. 1, January 1995.

[6] Dmitri Y. Riabkov and Edward V. R. Di Bella. Estimation of Kinetic Parameters Without Input Functions: Analysis of Three Methods for Multichannel Blind Identification. IEEE Transactions on Biomedical Engineering, Vol. 49, No. 11, November 2002.

[7] A. M. Wink and J. B. T. M. Roerdink. Denoising Functional MR Images. IEEE Trans. Medical Imaging, 23(3):374-387, 2004.

[8] Yasser M. Kadah, Stephen LaConte and Xiaoping Hu. Denoising of Event-Related fMRI Data Based on a Rician Noise Model for Robust Analysis Using ICA. Proc. Intl. Soc. Mag. Reson. Med. 10, 2002.

[9] Kanyan Yang, Jagath C. Rajapakse. Denoising of Functional MRI Using ICA. Proceedings of the 3rd International Symposium on Image and Signal Processing and Analysis, Vol. 1: 561-566, Sept. 2003.

[10] M. Barth, E. Dimitriadou, K. Hornik, E. Moser. Comparison of clustering methods in fMRI analysis by ranking association coefficients. In Proc. 11th Scientific Meeting of the Int. Society of Magn. Reson.
in Medicine, page 1817, Toronto, Ontario, Canada, July 2003.

[11] Giedrius T. Buracas and Geoffrey M. Boynton. Efficient Design of Event-Related fMRI Experiments Using M-Sequences. NeuroImage 16: 801-813, 2002.

[12] Ashish A. Rao, Thomas M. Talavage. Clustering of fMRI Data for Activation Detection Using HDR Models. Proceedings of the 26th Annual International Conference of the IEEE EMBS, San Francisco, CA, USA, September 1-5, 2004.

[13] Scott A. Huettel, Martin J. McKeown, Sarah Hart, Truett Allison, Allen W. Song, and Gregory McCarthy. The Effects of Stimulus Duration upon Visual Cortical Activation: Evidence from Functional MRI and Intracranial ERPs. Cerebral Cortex, 14:165-173, February 2004.

[15] Martin J. McKeown, Yong-jie Hu, and Z. Jane Wang. ICA Denoising for Event-Related fMRI Studies. Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, September 1-4, 2005.

[16] Rui Liao, Martin J. McKeown, Jeffrey L. Krolik. Motion-corrected independent component analysis for robust functional magnetic resonance imaging. Acoustics, Speech, and Signal Processing, IEEE International Conference, Vol. 2: 17-21, May 2004.

[17] Martin J. McKeown, Lars Kai Hansen and Terrence J. Sejnowski. Independent component analysis of functional MRI: what is signal and what is noise? Current Opinion in Neurobiology, 13:620-629, 2003.

[18] Ingo R. Keck, F. J. Theis, P. Gruber, E. W. Lang, Karsten Specht. 3D Spatial Analysis of fMRI Data: A Comparison of ICA and GLM Analysis on a Word Perception Task. IEEE International Joint Conference, Vol. 3: 2495-2499, July 2004.

[19] S. Marceglia, G. Foffani, A. M. Bianchi, G. Baselli, F. Tamma, M. Egidi, and A. Priori. Dopamine-dependent non-linear correlation between subthalamic rhythms in Parkinson's disease. J. Physiol., 579-591, March 2006.

[20] Dmitri Y. Riabkov and Edward V. R. Di Bella. Blind Identification of the Kinetic Parameters in Three-Compartment Models. Phys. Med. Biol. 49:639-664, 2004.

[21] Dmitri Y. Riabkov and Edward V. R. Di Bella. Theoretical Analysis of Blind Identification of Kinetic Parameters in Three-Compartment Models. Biomedical Imaging, IEEE International Symposium, 2002.

[22] G. H. Golub and V. Pereyra. The Differentiation of Pseudo-Inverses and Nonlinear Least Squares Problems Whose Variables Separate. SIAM Journal on Numerical Analysis, Vol. 10, No. 2: 413-432, 1973.

[23] Martin J. McKeown, Z. Jane Wang, Rafeef Abugharbieh and Todd Handy. Increasing the Effect Size in Event-Related fMRI Studies. IEEE Engineering in Medicine and Biology Magazine, Vol. 25: 91-101, March-April 2006.

[24] P. A. Bandettini, A. Jesmanowicz, E. C. Wong, and J. S. Hyde. Processing strategies for time-course data sets in functional MRI of the human brain. Magn Reson Med, 30: 161-173, 1993.

[25] K. J. Friston. Statistical parameter mapping. In R. W. Thatcher, M. Hallett, T. Zeffiro, E. R. John, and M. Huerta, editors, Functional Neuroimaging: Technical Foundations. Academic Press, 1994.

[26] M. J. McKeown, A. J. Bell, T. J. Sejnowski, et al. Analysis of fMRI data by blind separation into independent spatial components. Human Brain Mapping, 6:160-188, 1998.

[27] Fize, D., Boulanouar, K., Chatel, Y., Ranjeva, J.-P., Fabre-Thorpe, M., and Thorpe, S. 2000. Areas involved in rapid categorization of natural images: An event-related fMRI study. Neuroimage 11: 634-643.

[28] Friston, K. J., Zarahn, E., Josephs, O., Henson, R. N., and Dale, A. M. 1999. Stochastic designs in event-related fMRI. Neuroimage 10: 607-619.

[29] Liu, T.
T., Frank, L. R., Wong, E. C., and Buxton, R. B. 2001. Detection power, estimation efficiency, and predictability in event-related fMRI. Neuroimage 13: 759-773.

[30] S. Ogawa, T. M. Lee, A. R. Kay, and D. W. Tank. Brain magnetic resonance imaging with contrast dependent on blood oxygenation. In Proc. Natl. Acad. Sci. USA, 1990, vol. 87, pp. 9868-9872.

[31] K. K. Kwong, J. W. Belliveau, D. A. Chesler, et al. Dynamic magnetic resonance imaging of human brain activity during primary sensory stimulation. In Proc. Natl. Acad. Sci. USA, 1992, vol. 89, pp. 5675-5679.

[32] P. A. Bandettini, E. C. Wong, R. S. Hinks, R. S. Tikofsky, and J. S. Hyde. Time course EPI of human brain function during task activation. Magn. Reson. Med., vol. 25, pp. 390-397, 1992.

[33] K. J. Friston, P. J. Jezzard, and R. Turner. Analysis of functional MRI time-series. Human Brain Mapping, vol. 1, pp. 153-171, 1994.

[34] S. H. Lai and M. Fang. A novel local PCA-based method for detecting activation signals in fMRI. Magn. Reson. Imag., vol. 17, no. 6, pp. 827-836, 1999.

[35] McKeown, M. J., et al. Analysis of fMRI data by blind separation into independent spatial components. Human Brain Mapping, 6(3): 160-188, 1998.

[36] Buchanan J: Principles and Practice of Positron Emission Tomography. Edited by Wahl R: Lippincott, Williams & Wilkins Publishers; 2002.

[37] Smith AM, Lewis BK, Ruttimann UE, Ye FQ, Sinnwell TM, Yang Y, Duyn JH, Frank JA: Investigation of low frequency drift in fMRI signal. Neuroimage 1999, 9:526-533.

[38] Cox RW, Bandettini PA. Int Soc Magn Reson Med Sixth Sci Meeting Exhib, Sydney, p 244.

[39] Petersen K, Hansen L, Kolenda T, Rostrup E, Strother S: On the independent components of functional neuroimages. Proceedings of the Third International Conference on Independent Component Analysis and Blind Source Separation (ICA2000), 2000: 615-620.

[40] Friston KJ: Modes or models: a critique on independent component analysis for fMRI. Trends Cogn Sci 1998, 2:373-375.

[41] Calhoun VD, Adali T, Pearlson GD, Pekar JJ: Spatial and temporal independent component analysis of functional MRI data containing a pair of task-related waveforms. Hum Brain Mapp 2001, 13:43-53.
